Since Siri’s public debut as a key iPhone feature 18 months ago, I keep getting involved in conversations (read: heated arguments) with friends and colleagues, debating whether Siri is the second coming or the reason Apple stock lost 30%. I figure it’d be more efficient to just write some of this stuff down…
I run Desti, an SRI International spin-out that utilizes post-Siri technology. However, despite some catchy headlines, Desti is not “Siri for Travel”, nor do I have any vested interest in Siri’s success. What Desti is, however, is the world’s most awesome semantic search engine for travel, and that gives me some perspective on the technology.
Oh, and by the way, I confess, I’m a Siri addict.
Siri is great. Honest.
The combination of being very busy and very forgetful means there are at least 20 important things that go through my mind every day and get lost. Not forever – just enough to stump me a few days later. Having an assistant at my fingertips that allows me to do some things – typically set a reminder, or send an immediate message to someone – makes a huge difference in my productivity. The typical use case for me is driving or walking, realizing there is something I forgot, or thinking up a great new idea and knowing that I will forget all about it by the time I reach my destination. These are linear use cases, where the action only has a few steps (e.g. set a reminder, with given text, at a given time) and Siri’s advantage is simply that it allows me to manipulate my iPhone immediately, hands-free, and complete the action in seconds. I also use Siri for local search, web search and driving directions.
Voice command on steroids – is that all it is?
Frankly – yes. When Siri made its public debut as an independent company, it was integrated with many 3rd-party services; those were scrapped and replaced with deep integration with the iPhone platform when Apple re-launched it. Despite my deep frustration with Siri not booking hotels these days, for instance (not), I think the decision to do one thing really well – provide a hands-free interface to core smartphone functionality (we used to call it PIM, back in the day) – was the right way to go. Done well, and marketed well, this makes the smartphone a much stronger tool.
As mentioned, I’ve run into a lot of Siri-bashers in the last year. Generally they break down into two groups: the people who say Siri never understands them, and the people who say Siri is stupid. I’m going to discuss the speech recognition story in a minute (SRI spin-out, right?), but regarding the latter point I have to say two things. First, most people don’t really know what the “right” use cases for Siri are. Somewhere between questionable marketing decisions and too little built-in tutorial, I find that people’s expectations of Siri are often closer to a “talking replacement for Google, Wikipedia and the bible” than to what Siri really is. That is a shame, because the bottom line is that it is under-appreciated by many people who could really put it to good use. Apple marketing is great, but it’s better at drawing a grand vision than it is at explaining specific features (did I mention my losses on AAPL?). While the Siri team has done great work at giving Siri a character, at the end of the day it should be a tool, not an entertainment app (my 8-year-old daughter begs to differ, though).
OK, but it still doesn’t understand ME
First, let me explain what Siri is. Siri is NOT voice-recognition software. Apple licenses that capability from Nuance. Siri is a system that takes voice recognition output – “natural language” – figures out what the intent is (e.g. send an email), then goes through a certain conversational workflow to collect the info needed to complete that intent. Natural language understanding is a hard problem, and weaving multiple possible intents with all the possible different flows is complex. It is hard because there is a multitude of ways for people to express the same intent, and errors in the speech recognition add complexity. Siri is the first such system to do it well, and certainly the first one to do it well on such a massive scale.
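To make the intent-plus-workflow idea concrete, here is a minimal, purely illustrative sketch. The regex patterns, intent names and slot lists are all my invention – real systems use statistical models, and none of this reflects how Siri actually works:

```python
import re

# Toy intent patterns -- invented for illustration; real NLU is statistical.
INTENT_PATTERNS = {
    "send_message": re.compile(r"\b(text|message|tell)\s+(?P<contact>\w+)", re.I),
    "set_reminder": re.compile(r"\bremind me to\s+(?P<task>.+)", re.I),
}

# Info each intent needs before it can be completed.
REQUIRED_SLOTS = {
    "send_message": ["contact", "body"],
    "set_reminder": ["task", "time"],
}

def parse(utterance):
    """Map an ASR transcript to (intent, slots); (None, {}) if nothing matches."""
    for intent, pattern in INTENT_PATTERNS.items():
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    return None, {}

def missing_slots(intent, slots):
    """Slots the assistant still has to ask for -- the 'conversational workflow'."""
    return [s for s in REQUIRED_SLOTS[intent] if not slots.get(s)]

intent, slots = parse("Remind me to call the dentist")
print(intent, missing_slots(intent, slots))  # set_reminder ['time']
```

The point of the sketch: the hard part isn’t matching one phrase, it’s that every intent has many phrasings, partially filled slots trigger follow-up questions, and ASR errors feed garbage into the front of the pipeline.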
So what? If it doesn’t understand what I said, it doesn’t help me.
That is absolutely true. If speech is not recognized – garbage in, garbage out. Personally I find that despite my accent Siri usually works well for me, unless I’m expressing foreign names, or there is significant ambient noise (unfortunately, we don’t all drive Teslas). There are however some design flaws that do seem to repeat themselves.
In order to improve the success rate of the automatic speech recognizer (ASR), Siri seems to communicate your address book to it. So names that appear in your address book are likely to be understood, despite the fact that they may be very rare words in general. However, this is often overdone, and these names start dominating the ASR output. One problem seems to be that Nuance treats first and last names as separate words, so every so often I will get “I do not know who Norman Gordon is” because I have a Norman Winarsky and a Noam Gordon as contacts. I believe I see a similar flaw when words from one possible intent’s domain (e.g. sending an email) are recognized mistakenly when Siri already knows I’m doing something else (e.g. looking at movie listings).
This probably says something about the integration between the Nuance ASR and Apple’s Siri software. It looks like there is off-line integration – as in transferring my contacts’ names a priori – but no real-time integration – in this case, Siri telling the ASR that “Norman Gordon” is not a likely result. Such integration between the ASR and the natural language understanding software is possible, but often complex, not just for technical reasons but for organizational ones. It requires very close collaboration that is hard to achieve between separate companies.
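To illustrate the flaw I suspect – and this is pure speculation, with the contacts, scores and scoring scheme invented for the example – here is a toy sketch of word-level name biasing, plus the kind of real-time veto that would catch impossible name pairs:

```python
CONTACTS = ["Norman Winarsky", "Noam Gordon"]

# Off-line integration: contact names shipped to the ASR as individual
# words, losing the pairing between first and last names.
BIASED_WORDS = {w.lower() for name in CONTACTS for w in name.split()}

def rescore(hypotheses):
    """Boost hypotheses containing biased words -- first/last names independently."""
    def score(hyp):
        text, base = hyp
        return base + sum(1 for w in text.lower().split() if w in BIASED_WORDS)
    return max(hypotheses, key=score)

def veto_unknown_names(text):
    """The real-time check I believe is missing: reject first/last-name
    combinations that exist as words but not as an actual contact."""
    words = text.split()
    for i in range(len(words) - 1):
        pair = f"{words[i]} {words[i+1]}"
        if all(w.lower() in BIASED_WORDS for w in pair.split()) and pair not in CONTACTS:
            return False
    return True

hyps = [("call Noam Gordon", 0.6), ("call Norman Gordon", 0.7)]
best = rescore(hyps)[0]   # word-level bias picks the impossible "Norman Gordon"
print(best, veto_unknown_names(best))
```

Because “Norman” and “Gordon” each get the same per-word bonus, the biasing alone can’t distinguish a real contact from a chimera of two contacts – only a feedback channel from the NLU side can.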
So when will it get better?
It will get better. Because it has to. Speech control is here to stay – in smartphones as well as TVs, cars and most other consumer electronics. ASRs are getting better, mostly for one reason: ASRs are trained by listening to people, and the biggest hurdle is how much training data they have. In the early days of ASRs, decades ago, this consisted of “listening” to news commentators – people with perfect diction and accent, in a perfect environment. In the last year, more speech sample data was collected through apps like Siri than probably in the two decades prior, and this data is (can be?) tagged with location, context and user information, and is being fed back into these systems to train them. And since this explanation is borrowed from Adam Cheyer, Siri’s co-founder and formerly Siri’s Engineering Director at Apple – you’d better believe it. We are nearing an inflection point, where great speech recognition is as pervasive as internet access.
So will Siri then do everything?
That’s actually not something I believe will happen as such. Siri is a user interface platform that has been integrated with key phone features and several web services. But to assume it will be the front-end to everything is almost analogous to assuming Apple will write all of the iOS apps. That is clearly not the case.
However – Siri as a gateway to 3rd-party apps, as an API that allows other apps that need the hands-free, speech-driven UI to integrate into this user interface, could be really revolutionary. Granted – app developers will have to learn a few new tricks, like managing ontologies, resolving ambiguity, and generally designing natural language user experiences. Apple will need to build methodology and instruct iOS developers, and frankly this is a tad more complex than putting UI elements on the screen. Also, I have no idea whether Siri was built as a platform this way, able to dynamically manage new intents, plugging them in and out as apps are installed or removed. But if it does, it enables a world where Siri can learn to do anything – and each thing it “learns”, it learns from a company that excels at doing it, because that is that third party’s core business.
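Purely as a thought experiment – none of this reflects any real Apple API, and the app and phrase names are made up – here is what such a dynamic intent registry might look like, with apps plugging intents in and out as they are installed and removed:

```python
class IntentRegistry:
    """Hypothetical plug-in registry: apps register intents on install
    and unregister when removed."""

    def __init__(self):
        self._handlers = {}  # trigger phrase -> (app name, handler)

    def register(self, app, phrases, handler):
        for phrase in phrases:
            self._handlers[phrase.lower()] = (app, handler)

    def unregister(self, app):
        # App removed: drop every intent it contributed.
        self._handlers = {k: v for k, v in self._handlers.items() if v[0] != app}

    def dispatch(self, utterance):
        # Naive phrase matching stands in for real intent resolution.
        for phrase, (app, handler) in self._handlers.items():
            if phrase in utterance.lower():
                return handler(utterance)
        return "Sorry, I don't know how to do that."

registry = IntentRegistry()
registry.register("HotelApp", ["book a hotel"],
                  lambda u: "HotelApp: searching hotels...")
print(registry.dispatch("Please book a hotel in Austin"))
# -> "HotelApp: searching hotels..."
```

The interesting design problem is in `dispatch`: with dozens of apps registering overlapping phrases, resolving which one the user meant is exactly the ontology-and-ambiguity work mentioned above.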
… and then, maybe, a great jammy dodger bakery chain can solve the wee problem with Scotland with a Siri-enabled app.
Oh, and by the way – you can learn more about Siri, speech, semantic stuff and AI in general at my upcoming SXSW 2013 Panel – How AI is improving User Experiences. So come on, it will be fun.