Cars Hear You, but When Will They Get You?

By Expectlabs @ExpectLabs

Anticipatory computing hits the road! Cars are learning to synthesize multiple locational, temporal, and auditory cues to make intelligent guesses about what their passengers want. Check out this great article from Auto News, featuring quotes from Expect Labs’ own Tim Tuttle:

Almost all vehicles sold in America today are programmed to respond to a driver’s voice commands. But still on the frontiers of automotive research are vehicles that can determine what a driver really wants — and respond intelligently.
Getting to that point is a profound engineering challenge that is leading researchers beyond voice recognition and into the realm of artificial intelligence. It will require auto companies, suppliers and their technology partners to delve into the building blocks of language and to teach computers to understand humans as well as humans understand one another.
It’s one of the “hardest, long-standing challenges in artificial intelligence,” says Tim Tuttle, CEO of Expect Labs, a San Francisco company that works on refining voice-recognition technology.
For Expect Labs, context is everything. Its MindMeld platform helps app developers and device makers integrate voice commands with other inputs to make more sense of the directions. For instance, if a driver is in the car at 8 a.m. and asks the navigation system to find a coffee shop, the car can surmise that the driver is likely on the way to work and can identify a coffee shop along the usual route. The idea is that these voice-powered interfaces should be able to process more than the simple command and respond the way a human would.
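The coffee-shop example above can be sketched in a few lines of code. This is purely an illustration of the idea, not MindMeld's actual API: the function names, the 6–10 a.m. "commute" window, and the toy coordinates are all assumptions made up for this sketch.

```python
from math import hypot

def distance_to_route(point, route):
    """Smallest straight-line distance from a point to any waypoint
    on the driver's usual route (toy 2-D coordinates)."""
    return min(hypot(point[0] - wp[0], point[1] - wp[1]) for wp in route)

def rank_candidates(candidates, route, hour):
    """Rank coffee shops using context: during the assumed morning
    commute window (6-10 a.m.), prefer shops closest to the usual
    route rather than closest to the car's current position."""
    if 6 <= hour < 10:
        return sorted(candidates,
                      key=lambda c: distance_to_route(c["loc"], route))
    # Outside commute hours, fall back to distance from the car,
    # which sits at the origin in this toy example.
    return sorted(candidates, key=lambda c: hypot(*c["loc"]))

# Hypothetical data: a home-to-work route and two candidate shops.
commute = [(0, 0), (1, 1), (2, 2), (3, 3)]
shops = [
    {"name": "Far Roast", "loc": (5, 0)},      # near the car, off the route
    {"name": "Route Brew", "loc": (2.1, 2.0)}, # right along the commute
]

best = rank_candidates(shops, commute, hour=8)[0]["name"]
print(best)  # at 8 a.m., the shop on the usual route wins
```

The point is simply that the same spoken command ("find a coffee shop") yields different answers depending on signals the driver never said aloud, which is what the article means by processing more than the simple command.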

Read more here.

Photo licensed under CC 2.0.