In other words, in the old days, if you wanted to do something—navigate to the restaurant where you’ve got a dinner reservation—you might open a web browser and search for its address. But in the post-search world of context—in which our devices know so much about us that they can guess our intentions—your phone is already displaying a route to that restaurant, as well as traffic conditions, and how long it will take you to get there, the moment you pull your phone out of your pocket.
The above quote comes from here; in the same article, Marissa Mayer of Yahoo is quoted:
Yahoo CEO Marissa Mayer has been especially explicit about what this new age of context means. “Contextual search aims to give people the right information, at the right time, by looking at signals such as where they’re located and what they’re doing—such as walking or driving a car,” she said at a recent conference. “Mobile devices tend to provide a lot more of those signals…. When I look at things like contextual search, I get really excited.”
We have been talking about this for a while, and there have been clues that this idea — your phone/device knowing you and your whereabouts so well that everything is right there for you, as opposed to your having to search for it — is coming. You know this is a potential “next big thing” because all the major tech companies have been buying smaller companies that focus on it — Apple buying Cue, Yahoo buying Aviate, Twitter buying Cover, etc. Google works with Everything.Me, which is profiled in the main article linked above; essentially, the goal is to tell you what you want to do and need to do before you even know it yourself.
Larry Page (CEO of Google) referred to some of these problems at the TED 30th Anniversary conference this winter:
His idea is basically that, while we’ve come a very long way — Moore’s Law-type stuff — computers still aren’t really that smart, in that they lack any context around the information they’re providing. You could be Googling a restaurant in San Diego because you’re visiting San Diego and think it’s nearby, or you could be Googling it because you’re on a business trip in London and want to show someone how nice the patio is. The mobile phone/computer/tablet doesn’t necessarily know the difference, and that limits search to an extent — it requires the user to have a certain set of skills that not everyone has (especially non-digital natives). But if search gained context and thus became close (not entirely, but close) to an automatic process, that would shift the game pretty dramatically in terms of how people interact with their neighborhoods and overall worlds (it would also change the entire concept of marketing and advertising for local businesses, if you really think about it).
There’s another layer to this, though: the idea that “search,” as a concept, is a competitor of “social.” (More on that here too.) Maybe contextual search becomes a thing, but maybe social search gets to a high level before then and that becomes the new norm — after all, why would you Google something if 15 of your friends have visited Paris and you can see everything they’ve done and liked there? You trust your friends more than the broad, wide Internet, right? Social search right now — at least on Facebook, which is the big dog — kind of sucks. It’s nearly impossible to find anything truly relevant about a different city (or hell, even your own city) that way, despite the reputed possibilities of Graph Search. (Apparently that’s “a five-year thing.”)
Ultimately, the bigger point here is this: the idea of the Internet started out as, and continues to be, about connecting people to information — be that cat memes (entertaining information), supermarket aisle categories (functional), or restaurant reviews and locations (functional/leisure). The race is on to see which concept can do this best and fastest — and whoever ultimately wins this race could shape the next 20 years of how we interact with our phones and computers.