This is the fourth in a series of articles on the history of blindness assistive technology, as gathered from interviews with 25 of the giants who created that history. In the first and second articles (in the July and September 2006 issues of AccessWorld), I described the project and summarized the interviews. In the third article (in the November 2006 issue of AccessWorld), I began a tour of the history of modern blindness assistive technologies and included selected commentaries from the legends and pioneers whom I interviewed. To provide a fuller understanding of the context in which blindness technologies were developed, the forces faced by the legends and pioneers who created and marketed these technologies, and how this particular niche industry has managed to coexist within the mainstream, I continue the tour in this article with a discussion of the computer itself. I finish with some speculations about the future of blindness assistive technology.

The Computer

Much of the information for this section was derived from the book Computer: A History of the Information Machine, by Martin Campbell-Kelly and William Aspray, published by Basic Books in 1996 and available at <www.amazon.com> and <barnesandnoble.com>.

Initially, the term computer referred to a person doing calculations by hand. These human "computers" meticulously compiled mathematical tables and, especially in the maritime nations of the 16th century, navigational charts. In the 1700s and 1800s, most offices contained a few clerks who wrote with quill and ink. Recall the Dickens character Bob Cratchit, and you have the picture. The typewriter was still in the future. Then Charles Babbage (1791-1871), who can arguably be called the father of the modern computing machine, devised the first computing "architecture" in his "difference" and "analytical" engines. Later in the century, Herman Hollerith (1859-1929) developed a mechanical system for processing census data.

The United States dominated the large computer industry after World War II and, today, it dominates the personal computer (PC) industry. There is an unbroken line of descent from the giant office-machine firms of the turn of the 20th century, such as Remington Rand, National Cash Register (NCR), the Burroughs Adding Machine Company, and International Business Machines (IBM), to the computer makers of today. There is also an unbroken line between the way 19th-century offices functioned and today's computing capabilities. The typewriter QWERTY keyboard is found on almost all computers. The major software programs perform the same functions today as clerks did a century ago: preparing documents (word processing), managing information (database programs), and handling financial analysis and accounting (spreadsheet software).

Developments leading to the modern computer began in earnest at the outbreak of World War II. Two military problems—the construction of tables to aid in targeting artillery and the processing of radar signals—led to the ENIAC (Electronic Numerical Integrator and Computer), begun in 1943 and completed in 1945. Over the next 30 years, computers grew more powerful and smaller.

The 1970s witnessed the widespread use of newly perfected integrated circuits, culminating first in the emergence of the minicomputer and finally the microcomputer. It was also the decade of the so-called calculator wars, which took advantage of the latest innovations in these small yet powerful chips. (The proliferation of small calculators, combined with evolving techniques of speech synthesis, enabled pioneers like Jim Bliss to develop the talking calculator.)

The Altair 8800, introduced in 1975, was the first microprocessor-based computer to reach a wide audience. It was sold as a kit to electronics hobbyists for $397. One of those hobbyists was a young "nighttime" programmer named Bill Gates. With his childhood friend Paul Allen, Gates figured out that the only way the average person would ever benefit from the promise of these small machines would be through the enabling power of software that would eliminate the need for everyone to be a programmer like him. Together, Gates and Allen produced Altair BASIC. Because most of the subsystems that were required to create a PC—keyboards, screens, disk drives, and printers—already existed, in the two-year period from 1975 to 1977, the microcomputer world was transformed from the province of the few to that of the everyday user. The Apple II, launched in 1977, established the paradigm of the PC: a central processing unit equipped with a keyboard, a screen, and floppy disk drives for program and data storage. Another pair of young and talented enthusiasts, Steve Jobs and Steve Wozniak, had built their first batch of Apple computers in a garage in Los Altos, California. (Jerry Kuns, interviewed for this series and a blind technology buff, spent time with Jobs and Wozniak during this period.) In 1981, IBM got into the act with its IBM Personal Computer. Almost immediately, companies like Compaq came out with "clones."

The PC software market burgeoned with games and educational, word-processing, database, and spreadsheet programs. A high school student, Doug Geoffray, working for Computer Aids Corporation, helped his employer Bill Grimm take one of these programs, VisiCalc, written for the Apple II, and turn it into an early talking program, Calc-Talk.

In 1982, the Apple IIe began to replace the Apple II+. A new Super Serial Card inside the IIe; rapid changes in speech synthesizers, such as the Echo II; and continuous software revisions made it necessary to communicate news and tips quickly to assistive technology users who were blind. It was around this time that David Holladay (interviewed for this series), having developed speech and braille editing software for the Apple II+ and IIe, started the Raised Dot newsletter, one of the first publications specifically designed to give users who were blind the kind of practical information that PC-oriented newsletters and magazines were providing to their sighted audiences.

Software, Software, Software

The recent history of computers has been dominated not so much by hardware as by software. Through the 1980s, disk operating systems, including the nearly ubiquitous MS-DOS (Microsoft Disk Operating System), ruled the market. Soon, the technical rigor that was required to manipulate these operating systems gave way to a more intuitive and user-friendly graphical user interface (GUI) that provided pull-down menus and picture icons to click on.

The concept of user friendliness has part of its origin in the human-factors laboratory at the Stanford Research Institute (SRI). It was at SRI that the computer mouse was born in 1963. SRI was also the home of John Linvill and Jim Bliss (interviewed for this series), who used its cadre of engineers and psychologists to conduct basic research on the Optacon (OPtical-to-TActile CONverter), the first major piece of modern assistive technology for people who are blind.

The charge toward developing graphical computing for desktop computers was led by Steve Jobs and his colleagues at Apple. The Macintosh, introduced in 1984, threatened to turn the PC market on its ear, but it never did because of the powerful hold that IBM and Microsoft had on the personal and business computing markets. Bill Gates sensed the potential power of the GUI. As early as the mid-1980s, he began to develop what would eventually become the Windows series of graphically based operating systems. After a few false starts, version 3.0 was introduced in 1990.

Meanwhile, by the late 1980s, users who were blind had happily caught up to their sighted computing brethren. Speech-synthesis technology faithfully rendered DOS-based operating and applications software, such as MS-DOS, WordPerfect, and Lotus 1-2-3. However, it could not function in the GUI environment.

According to Alistair D. N. Edwards, in his 1994 article on the rise of the GUI in Information Technology and Disabilities <www.rit.edu/~easi/itd/itdv02n4/article3.htm>, Larry Scadden (interviewed for this series) was one of the first to understand that the GUI that was built into the Macintosh computer would be a threat to the continued participation of people who were blind in the computer world. Edwards developed the Sound Track, a research tool that demonstrated that the GUI could be converted into meaningful sounds. The Sound Track was followed by Window Bridge, the first commercial screen reader for Windows, produced by Syntha-Voice Computers in 1992. Other screen readers quickly followed, including those produced by Doug Geoffray and Ted Henter, pioneers who were interviewed for this series. By the time Microsoft came out with Windows 95, JAWS for Windows and Window-Eyes had broken the graphical computing barrier.

Information Superhighway

The latest versions of most assistive technology today devote most of their energy to making the World Wide Web accessible to users with disabilities. Why all this effort? In the 1990s, the Clinton administration, recognizing the power of computers to connect people to vast caches of information stored in other computers, launched an initiative to close the "digital divide" by ensuring that public schools had at least one computer connected to the Internet. The Internet had its origins in the Arpanet, a project conceived in the early 1960s. By the late 1960s, the problems of connecting large numbers of computers had largely been worked out. Some of the solutions were derived from old telegraph traffic-management methods and newer techniques, such as computer time-sharing. By 1975, e-mail had been invented and was slowly gaining in popularity. By the late 1980s, the proliferation of PCs had generated so much information (and so much document transfer) that something had to be done to organize it.

The World Wide Web was designed as a vast organizing tool. Its use of hypertext, which lets users move from one concept to the next without overly "crowding" each informational "level," along with multimedia and file transfer protocols, has made "surfing" the web an enjoyable prospect for everyone, albeit an occasionally difficult one for nonvisual users.

Doug Geoffray (Window-Eyes) credited Artic Technologies with developing the concept of the virtual web page. Users who are blind experience a web page only after it has been analyzed and interpreted by the screen-reading software. Unlike previous approaches, which literally rearranged the page on screen, the virtual approach leaves the page's normal appearance intact for sighted users.
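To make the idea concrete, here is a minimal sketch of how a virtual web page might be assembled. It is not the Artic or Window-Eyes implementation; the class name, the handful of tags handled, and the "heading" and "link" labels are hypothetical simplifications. The point is only that the screen reader parses the page's markup into its own linear, navigable text buffer, annotated with structure, while the visual rendering in the browser remains untouched.

    # Illustrative sketch only (not actual screen-reader code): parse HTML
    # into a linear, navigable text buffer that announces structure such as
    # headings and links, leaving the browser's visual page untouched.
    from html.parser import HTMLParser

    class VirtualBufferBuilder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.buffer = []      # linear lines the user arrows through
            self._open_tags = []  # stack of currently open tags

        def handle_starttag(self, tag, attrs):
            self._open_tags.append(tag)

        def handle_endtag(self, tag):
            if self._open_tags and self._open_tags[-1] == tag:
                self._open_tags.pop()

        def handle_data(self, data):
            text = data.strip()
            if not text:
                return
            # Announce structure rather than reproduce visual layout.
            if any(t in self._open_tags for t in ("h1", "h2", "h3")):
                self.buffer.append("heading: " + text)
            elif "a" in self._open_tags:
                self.buffer.append("link: " + text)
            else:
                self.buffer.append(text)

    # Usage: the original page is unchanged; only the reader's view is rebuilt.
    builder = VirtualBufferBuilder()
    builder.feed("<h1>Welcome</h1><p>News <a href='/more'>Read more</a></p>")
    for line in builder.buffer:
        print(line)  # -> heading: Welcome / News / link: Read more

A real screen reader does far more (forms, tables, frames, live updates), but this captures the essential inversion: the page is translated into a structure suited to speech and braille rather than forcing the user to follow the visual layout.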

Today, the issues are no longer strictly those of organizing and transmitting information, although the widespread digitization of library collections will require centralized planning. Problems have moved to the social and political realms. Certainly, access for everyone, especially people with disabilities, is foremost. Children must be protected from pornography, and all users will have to learn how to manage their newfound ability to transmit information quickly and occasionally reflexively. Information overload is a danger, if not already a reality. Finally, poor people and those in developing nations whose communities still lack computers and a supporting infrastructure are barred from availing themselves of the vast richness of the web.

Speculations

Certainly, faster microchips, greater memory storage capacities, and new developments in optical transmission technologies will continue the pattern of building smaller and more powerful information-processing machines. If people who are blind are to have access to these devices, mainstream manufacturers will have to be convinced to use universal design concepts so that complex menu systems will be accessible via speech.

Perhaps more flexible and dynamic materials will be produced that will enable the development of exponentially less expensive tactile graphic technology (including refreshable braille), as Deane Blazie and others hope. In addition, if pioneers like Ray Kurzweil and Ron Morford get their way, intelligent machines will not only scan and recognize complex visual material (including environmental objects), but will also provide verbal renditions that make sense and, through instantaneous synthesis, sound as natural as current digital speech technologies do.

What is in our future? If recent trends are any indication, the near future will witness the increasing portability of devices that will become more and more multidimensional. As chip technology becomes even more sophisticated, it may be possible (as some have speculated) that a single handheld instrument will be able to act as an omnibus control and communication device, controlling household appliances; interfacing with the web; storing huge amounts of data; scanning the environment; acting as a navigational tool; and, of course, being a telephone.

Materials technology will enable input devices, such as keyboards, and output devices, such as monitor screens, to contract for portability and storage and expand when needed. Wouldn't it be amazing to have a keyboard that folds, rolls up, or in some other way shrinks to portable size and then expands to user-friendly dimensions when needed? The same pertains to easily portable video monitors that could expand enough for a person with low vision to view a device's output comfortably.

I hope that technology will become more multisensory and intelligent. Voice and optical recognition will be completely accurate and easy to use. A futuristic version of the Kurzweil-National Federation of the Blind Reader, for example, may, in addition to analyzing visual patterns, interface, through its auditory, tactile, and chemical recognition sensors, with onboard and web-based software to, say, tell you the name of a flower not only from its appearance, but from its texture and smell.

Deane Blazie believes that many more children will learn braille in school, increasing the need for simple and inexpensive portable devices. This situation will require braille display technology that costs much less than the current piezo-electric cells do. Blazie would like to see devices carry a maximum of 40 cells across and 4 lines up and down.

Perhaps Oleg Tretiakov or someone like him will use the latest technologies to develop a new, improved, and less expensive Optacon. Deane Blazie would like the Optacon to contain "more intelligence in the software so that as you are reading something or scanning something, you could actually have the software do some signal processing to maybe increase the contrast or help you follow a line or help you track."

Occupations for people who are blind will continue to expand. Jerry Kuns hopes that youngsters who are blind will obtain more work experience by taking advantage of computers. Deane Blazie and Bill Gerrey believe that more people who are blind will become engineers. Computerized technologies, better math-learning aids (Henter Math), and even robotic devices will help people who are blind function efficiently in occupations that they had hitherto avoided. According to John De Witt (interviewed for this series), modern technologies will make "traditional occupations, such as radio broadcasting, as well as unheard of ones, such as architectural planning, feasible for people who are blind." Access to information will be a human right (Jim Fruchterman), people who are blind will not lose jobs because of the lack of access (Doug Geoffray), and governmental programs will be open to the development of innovative technology (Fred Gissoni and Larry Scadden).

What will the future of the blindness assistive technology industry be like? Although large companies, such as Freedom Scientific and the HumanWare Group, are inevitable, most of the legends who were interviewed for this series believe there is still room for small companies. Small companies, they speculate, will generate many of the new and creative inventions. As in the past, when their "inventive" phase is over and the production and marketing of products become their primary need, they will be absorbed by larger ones. The experience of Telesensory notwithstanding (Jim Halliday, Bob Keenan, and Jackie Wheeler), larger companies will continue to struggle with their management styles. Engineers and sales corps will vie for dominance.

Will prices for blindness assistive technology ever go down? The blindness technology market will always remain small, and the lack of economies of scale will keep prices high. However, the use of off-the-shelf hardware and software, with their exponentially improving power-to-cost ratio, will help lower research, development, and consumer costs (Jim Fruchterman, Ray Kurzweil, and Larry Israel).

Are there any untapped markets? Russell Smith, Larry Israel, and Jim Bliss (who were interviewed for this series) agree that the low vision market is underserved and that there is lots of room for new products that will aid individuals who can benefit from magnification. Older people who want easy-to-use yet powerful devices constitute a large and potentially lucrative market.

Will the assistive technology profession continue to evolve? John De Witt would like to see assistive technology trainers become professionally certified, but he has doubts about whether a fair and current certification examination can be developed. He prefers to concentrate his company's efforts on providing courseware, user guides, and in-service training so that current and future professionals will have access to the best tools available.

Whatever the future, we can rest assured about this: People who are blind or have low vision will continue to assert themselves in order to take full advantage of the technologies that have propelled our world into the information age and beyond.

Author: Anthony Candela