I'm sure some of you have heard of Google Glass already, but if not, here's an interesting article on it.
“Glass will be revolutionary for the disabled,” says Rosalind Picard, founder of the Affective Computing Research Group at the Massachusetts Institute of Technology’s Media Lab, a group whose focus is autism and communication technology.
“With facial analytics, it’s possible to, with the subject’s approval, have Glass scan a face and put up a green light if the person is intrigued, yellow if they’re confused or red if they’re bored,” she says. Then, chuckling, she adds, “It could even whisper at you during that date, ‘Hey, she’s losing interest.’”
Picard says speech recognition is getting so good that a deaf person could soon see a real-time transcript of what a friend is saying in Glass’ prism. A person with limited vision could take walking directions from Glass through its bone-conducting speaker, housed in the right temple.
“One day soon, we’ll look at regular glasses the way we now look at old phones,” she says. “It will change things so much.”
What pleases Mark Perriello is that, thanks to the feedback-intensive Explorers program, Glass is being developed with input from the disabled.
“All too often, technologies are created and then people ask, ‘OK, what if people with disabilities need to use it?’” says the CEO of the American Association of People with Disabilities, a Washington, D.C.-based advocacy group founded in the wake of the 1990 Americans with Disabilities Act.
“From taking a picture with ease to helping those with low vision redefine their world,” Perriello says, “this has the possibility to level the playing field.”