Languages Magazine

Unravelling the MindMeld API: Expect Labs in the Semantic Web

By Expect Labs @ExpectLabs


“This is truly a shift in the way that search occurs… we have so much information about what users are doing online that we can create accurate models to predict what a user might need.”

— Marsal Gavaldà, Director of Research at Expect Labs


More and more devices are being built to capture the rich contextual data that emanates from every action we take. Our Research Director recently sat down with the Semantic Web to explain how developers can leverage the MindMeld API to make sense of these data streams and provide rich, intelligent experiences for their users.

In the piece, Gavaldà lays out the groundwork for our platform and explains why voice-driven search matters so much today. The article also highlights a few areas we're focusing on: improving our speech recognition technology, beefing up our knowledge graph, and redefining the activity stream so it can include more contextual updates. Our team is also building comprehensive sample applications so developers can easily plug into our platform.

One main component of the MindMeld API is the knowledge graph, which gives developers access to an extensive database that maps the connections between entities, making search results more relevant. Speech recognition is another important element, showcased in our voice-powered MindMeld app. Our Research Director notes that since speech is one of the most telling indicators of a person's intent, one of our main initiatives is to increase recognition accuracy while releasing broader language support. Right now, our API supports six languages, and we're rolling out even more in the coming months!
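To make the knowledge-graph idea concrete, here is a minimal sketch of how linked entities can enrich a search query. This is purely illustrative: the class, function, and entity names below are hypothetical and do not reflect the MindMeld API's actual interface.

```python
# Illustrative sketch only -- not the MindMeld API. Shows how a graph of
# entity connections can expand a query to surface more relevant results.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # entity -> list of (relation, connected entity) edges
        self.edges = defaultdict(list)

    def add_edge(self, subject, relation, obj):
        # Store the connection in both directions so either entity
        # can be used as a starting point.
        self.edges[subject].append((relation, obj))
        self.edges[obj].append((relation, subject))

    def related(self, entity):
        """Return entities directly connected to the given one."""
        return [other for _, other in self.edges[entity]]

def expand_query(graph, terms):
    """Augment raw query terms with connected entities, so a search
    engine can match documents about related concepts too."""
    expanded = list(terms)
    for term in terms:
        for neighbor in graph.related(term):
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded

kg = KnowledgeGraph()
kg.add_edge("San Francisco", "located_in", "California")
kg.add_edge("Expect Labs", "headquartered_in", "San Francisco")

print(expand_query(kg, ["San Francisco"]))
# -> ['San Francisco', 'California', 'Expect Labs']
```

A query for "San Francisco" now also surfaces entities tied to it in the graph, which is the basic mechanism by which entity connections make results more relevant.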

Pore over the full piece on the Semantic Web, and send us a tweet at @expectlabs if you have any questions!

(Source: Semantic Web)

