Audio description is somewhat like the audio tour you might have taken at a museum, except that a typical audio tour assumes the listener can connect the narration to the visual element being described, as in "Look at this painting, and I'll tell you more about it." Audio description typically has no visual reference, and its baseline user is blind. People with low vision, print dyslexia, or a preference for audio can also benefit from it as a supplement to what they see, but if the audio description doesn't work for the blind user, it isn't doing its most fundamental job.

The museum audio tours of the 1950s could be considered a precursor to audio description. But as technologies have improved, and as web tools that produce audio-description mobile apps – like The UniDescription Project – have become possible, new opportunities have emerged to make the world a more accessible place for all. Audio description experiments began in the 1970s, around the same time as the television debut of closed captioning. It took another decade for the idea to gain traction, when entertainment producers in film, theater, and television began looking for more ways to grow their audiences. Development since then has been slow. The 2016 Summer Olympics in Rio made history, for example, by being the first of those athletic games to include audio description.

With live action, though, audio description has to be crammed into the quiet spaces between dialogue and other scene-setting sounds. With static media, such as brochures, audio-description designers have more options and openings to discover better ways to create these contextual layers. The UniDescription Project, in turn, is as much a research project as it is a tool-development effort, if not more so, and its goal is to bring more audio description to all parts of the world.