Languages Magazine

Introducing the MindMeld API!

By Expectlabs @ExpectLabs

Get a taste of our upcoming developer platform and learn how you can use it to infuse powerful contextual search and recommendations into any website or application. If you haven’t already, sign up here for early access. 

Keep an eye out for more videos that will explain the many ways developers can use our soon-to-be-released MindMeld API. 

TRANSCRIPT:

This is Tim Tuttle, and I am the CEO of Expect Labs. In this video, I'd like to introduce a new product that Expect Labs is launching called the MindMeld API. The MindMeld API is a developer platform and a cloud-based service that allows developers to build contextually driven content discovery into any application or any device. It's a brand new product. We are really excited about it, and let me explain at a high level how it works.

So, in its most basic form, what this platform, the MindMeld API, does is take continuous text inputs submitted by the developer, analyze them to try to understand the context, and then deliver a continuous stream of information about the user's context, as well as recommendations for relevant content that can be useful in a wide variety of applications. That's what it does at a high level. The inputs that you submit to this platform are typically chunks of text, most commonly representing parts of a conversation, maybe from an SMS message, and they get sent to our platform continuously, at any rate, over any period of time. These text inputs can optionally be tagged with location information, and they can be associated with user accounts that have user attributes. That all gives us additional context about the meaning and importance of the conversation.
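To make the input format concrete, here is a minimal sketch in Python of the kind of record a client might assemble before submitting it. The field names (`text`, `timestamp`, `location`, `user_id`) are illustrative assumptions for this sketch, not the actual MindMeld API schema.

```python
import json
import time

def build_text_input(text, latitude=None, longitude=None, user_id=None):
    """Assemble one text-input record of the kind described above.

    Field names here are illustrative assumptions, not the real
    MindMeld API schema.
    """
    record = {
        "text": text,                   # a chunk of conversation, e.g. an SMS
        "timestamp": int(time.time()),  # when the text was captured
    }
    # Location tagging is optional, as described in the transcript.
    if latitude is not None and longitude is not None:
        record["location"] = {"latitude": latitude, "longitude": longitude}
    # Inputs can be associated with a user account.
    if user_id is not None:
        record["user_id"] = user_id
    return record

payload = build_text_input(
    "Let's grab dinner near the Ferry Building tonight",
    latitude=37.7955, longitude=-122.3937, user_id="user-42")
print(json.dumps(payload, indent=2))
```

A client would send records like this to the platform continuously, one per chunk of conversation, at whatever rate the application produces them.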

On the backend, our technology attempts to analyze that information and find relevant content by doing three things.

First, when we take a piece of text input, we try to determine whether it is conversational information. If so, we analyze both the grammar and the syntax of that conversation and then try to understand the meaning of the words it contains.

Second, we use that analysis to create a model that represents the evolving context of what is happening in the user's life. We call this our context model, and it changes from second to second as we receive new information about the conversation, or as the user's location changes from one place to another. At every point in time, that context model attempts to represent, as best as we can understand, what is happening in that user's life right now.

Third, rather than wait for the user to ask a question or request information, we use that model to proactively search for information across a variety of data sources, ones that we make available or that you tell us to access. For example, we can search a user's social graph, or search content sources from the web, and we do that proactively in the background, constantly looking for correlations and for information that might be pertinent to the conversational context we have. We do this so that we can make the results available to the developer and the application, whenever the user wants them, through a very configurable and flexible API.
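The three steps above can be sketched as a toy pipeline in Python. The keyword extraction and document "search" here are deliberately naive stand-ins for the real language analysis and retrieval technology; every name in this sketch is illustrative, not part of the actual product.

```python
# Toy sketch of the three-step pipeline described above: analyze text,
# update an evolving context model, and proactively rank content.
# The techniques here are naive stand-ins, not the real technology.

STOPWORDS = {"the", "a", "an", "to", "in", "at", "for", "and", "is", "let's"}

def analyze(text):
    """Step 1: extract the meaningful terms from a chunk of conversation."""
    words = [w.strip(".,!?'").lower() for w in text.split()]
    return {w for w in words if w and w not in STOPWORDS}

def update_context(context, terms, max_size=10):
    """Step 2: fold new terms into the evolving context model,
    keeping only the most recent terms so older context fades out."""
    context.extend(t for t in terms if t not in context)
    return context[-max_size:]

def recommend(context, documents):
    """Step 3: proactively rank candidate documents by how much
    they overlap with the current context model."""
    scored = [(sum(t in doc.lower() for t in context), doc)
              for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = ["Top dinner spots near the Ferry Building",
        "Weekend hiking trails in Marin",
        "Ferry schedules for tonight"]
context = []
for utterance in ["Let's grab dinner tonight",
                  "Somewhere near the Ferry Building?"]:
    context = update_context(context, analyze(utterance))
print(recommend(context, docs))
```

As each utterance arrives, the context model shifts, and the recommendations re-rank themselves without the user ever asking a question, which is the essence of the proactive search described above.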

So, that's how the MindMeld API works at a very high level, and in the next set of videos, I will explain in much more specific detail how you can begin using it to build contextually powered applications.

