The New Yorker has an interesting piece about the state of speech synthesis and voice recognition. Decades after the "conversational computer" was first conceptualized, where are we?
Turns out, we’re not as far along as we thought we’d be. Scientists and researchers found out the hard way that modeling speech is waaaaaay harder than anyone thought it would be. Don’t go throwing that keyboard away just yet.
A fascinating read — we’ve come so far, and yet we are still so far away from the original vision of the talking computer.
Just happened upon a story from the University of Washington News describing a project to build a web-based screen reader — called WebAnywhere — to allow the visually impaired to surf the web.
Most people have heard of or even tried screen readers like JAWS. The problem with software like JAWS is that it must be installed and run on the computer a visually impaired person is using. This can be inconvenient if someone is traveling, or is otherwise away from home or work.
WebAnywhere is different in that it is web-based, so there is no software to install. Any computer with a browser can be made more accessible to the visually impaired. The source code for WebAnywhere is available for review on Google Code.
This sounds like an outstanding project — I’ll have to pull the source code down and take a look at it.
It looks like a group of local investors may pick up Philadelphia's municipal Wi-Fi network where EarthLink left off.
This is good news, but more work is still needed to complete the network:
“We’re not anywhere near close to delivering a full service yet,” said local entrepreneur Richard A. Rasansky, who is on the board of the new company [that will take over the network]. “The network is not completed. It’s not so much a problem with how it’s built, it’s that it’s currently unbuilt.”