… as usual these days – published on Medium.
Given I now split my writing between this blog, Medium, Linkedin and branded publications, here’s a list of links to things published elsewhere:
What’s wrong with the state-of-the-art and why Dialog Management is the missing layer.
The basic building blocks of products like Siri, Alexa, Cortana and Chris, as well as messenger bots.
About going from single-use bots to long-term engagement with intelligent agents.
Where bots and AI meet the needs of corporate travelers and the TMCs serving them.
iMessage integration in iOS 10, the new user experiences it enables – and how they are superior to the current state of Facebook’s messaging apps.
Over the last couple of years, we’ve been playing with various user interfaces for iPad-based (really – keyboard / screen / touch based) natural language interaction. This is (surprisingly?) different from voice-driven interaction, and an extremely effective way to search. My blog post about the evolution and the lessons learned is here:
Last week I had the honor and pleasure of being the first ever subject of a press interview conducted using Google Glass – followed up by a very interesting discussion with Robert Scoble. Here are some of the insights we’ve discussed, as well as some subsequent insights.
Photography and Video will be impacted First
Consider how phone-based cameras have changed photography. My eldest daughter is almost 9 years old. We have a few hundred images of her first year, and about 10 short videos. My son is now 18 months old, and as my wife was preparing his first scrapbook album last week, she browsed through several thousand digital photos. On my phone alone, I have dozens of video clips of him doing everything you can imagine a baby doing and some things you probably shouldn’t. The reason is simple – we had our smartphones with us, they take good photos and store them. And should I mention Instagram?
Google Glass takes this to the extreme. With your smartphone you actually have to reach for your pocket / bag, open the camera app, point and shoot. Google Glass is always there, immediately available, always focused on your subject, and hands-free. Video photography through Google Glass is vastly superior for the simple reason that your head is the most stable organ in your body. What all of this comes down to is simply that people will be shooting stills and video all the time. Have you seen those great GoPro clips? Now consider having a GoPro camera on you, ready and available – perpetually. There will not just be a whole new influx of images and video but new applications for these too. Think Google Street View everywhere, because the mere fact that a human looked somewhere means it’s recorded on some server. In the forest, in your house, and in your bathroom. Not sure about the latter? Check out Scoble’s latest adventures…
Useful Augmented Reality – Less will be more
Having information overlaid on top of your worldview is probably the sexiest feature from the perspective of us geeks. The promise of Terminator-vision / fighter-pilot displays provides an instant rush of blood to the head. And surely overlaying all of the great Google Places info on places, Facebook (well – Google+) info on people, and Google Goggles info on things – will be awesome, right?
Well, my perspective is a little different. After the initial wow effect, most of these will be unwanted distractions. Simply put – too many signals become noise, especially when it’s human perception that is concerned. This lesson has already been learned with similar systems in aerospace settings – and there the user is a carefully selected, highly trained individual, not an average consumer.
The art and science will be figuring out which of the hundreds of visible subjects is actually interesting enough to “augment”. This will require not just much better and faster computer vision (hard!) but much better and deeper understanding of these subjects – which one is really special for me, given the context of what I’m doing, what makes it so, and when to actually highlight it. Give me too much signal and I will simply tune out, or simply take the damn thing off.
Achieving this requires a deeper understanding both of the world and of the individual. Deeper, more detailed POI databases (for places), product databases (for objects), and more contextual information about the people around me, what their contexts are – and what is mine. It is almost surprising to what degree this capability is non-existent today.
Initially – Vertical Applications Will be Key
Consider the discussion of video photography above. Now put Google Glasses on every policeman and consider the utility of simply recording every interaction these people have with the public. Put Google Glasses on every trainee driver and have them de-brief using the recorded video. Or just take it with you to your next classroom. Trivial capabilities like being able to tag an interesting point in time and immediately go back to it when you re-play – how useful is that?
And considering augmented reality – think of simple logistic applications, like searching a warehouse, where the objects are tagged with some kind of QR code, and a simple scan with your eyes allows you to get a visual cue where they are. The simple applications will deliver immense value, drive adoption, experience, and through those – curiosity and new, further reaching ideas.
And if you stuck around this long – here are my most amazing revelations:
- Wearing Google Glass grows your facial hair!
- Google Glass video makes you photogenic – watch Scoble’s interview of me and compare it to my usual ugliness…
Since Siri’s public debut as a key iPhone feature 18 months ago, I keep getting involved in conversations (read: heated arguments) with friends and colleagues, debating whether Siri is the 2nd coming or the reason Apple stock lost 30%. I figure it’d be more efficient to just write some of this stuff down…
I run Desti, an SRI International spin-out that utilizes post-Siri technology. However, despite some catchy headlines, Desti is not “Siri for Travel”, nor do I have any vested interest in Siri’s success. What Desti is, however, is the world’s most awesome semantic search engine for travel, and that does provide me some perspective on the technology.
Oh, and by the way, I confess, I’m a Siri addict.
Siri is great. Honest.
The combination of being very busy and very forgetful means there are at least 20 important things that go through my mind every day and get lost. Not forever – just long enough to stump me a few days later. Having an assistant at my fingertips that allows me to do some things – typically set a reminder, or send an immediate message to someone – makes a huge difference in my productivity. The typical use-case for me is driving or walking, realizing there is something I forgot, or thinking up a great new idea and knowing that I will forget all about it by the time I reach my destination. These are linear use cases, where the action only has a few steps (e.g. set a reminder, with given text, at a given time) and Siri’s advantage is simply that it allows me to manipulate my iPhone immediately, hands-free, and complete the action in seconds. I also use Siri for local search, web search and driving directions.
Voice command on steroids – is that all it is?
Frankly – yes. When Siri made its public debut as an independent company, it was integrated with many 3rd party services; those were scrapped and replaced with deep integration with the iPhone platform when Apple re-launched it. Despite my deep frustration with Siri not booking hotels these days, for instance (not), I think the decision to do one thing really well – provide a hands-free interface to core smartphone functionality (we used to call it PIM, back in the day) – was the right way to go. Done well, and marketed well, this makes the smartphone a much stronger tool.
But I hate Siri. It doesn’t understand Scottish and it doesn’t tell John Malkovich good jokes
As mentioned, I’ve run into a lot of Siri-bashers in the last year. Generally they break down into two groups: the people who say Siri never understands them, and the people who say Siri is stupid. I’m going to discuss the speech recognition story in a minute (SRI spin-out, right?) but regarding the latter point I have to say two things. First, most people don’t really know what the “right” use-cases for Siri are. Somewhere between questionable marketing decisions and too little built-in tutorial, I find that people’s expectations of Siri are often closer to a “talking replacement for Google, Wikipedia and the bible” than to what Siri really is. That is a shame, because the bottom line is that it is under-appreciated by many people who could really put it to good use. Apple marketing is great, but it’s better at drawing a grand vision than it is at explaining specific features (did I mention my loss on my AAPL?). While the Siri team has done great work at giving Siri a character, at the end of the day it should be a tool, not an entertainment app (my 8-year old daughter begs to differ, though).
OK, but it still doesn’t understand ME
First, let me explain what Siri is. Siri is NOT voice-recognition software. Apple licenses this capability from Nuance. Siri is a system that takes voice recognition output – “natural language” – figures out what the intent is (e.g. send an email), then goes through a certain conversational workflow to collect the info needed to complete that intent. Natural language understanding is a hard problem, and weaving multiple possible intents with all the possible different flows is complex. It is hard because there is a multitude of ways for people to express the same intent, and errors in the speech recognition add complexity. Siri is the first such system to do it well and certainly the first one to do it well on such a massive scale.
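The pipeline described above – transcript in, intent classified, missing slots collected conversationally – can be sketched in a few lines. This is a toy illustration of the general technique; the intent names, trigger patterns, and slot lists are invented for the example and have nothing to do with Apple’s actual implementation.

```python
# Toy sketch of an intent + slot-filling loop: classify the utterance,
# then gather whatever information the intent still needs.
import re

INTENTS = {
    "send_email": {
        "triggers": re.compile(r"\b(email|e-mail)\b", re.I),
        "slots": ["recipient", "body"],
    },
    "set_reminder": {
        "triggers": re.compile(r"\bremind(er)?\b", re.I),
        "slots": ["text", "time"],
    },
}

def classify_intent(utterance):
    """Map ASR output to the first intent whose trigger pattern matches."""
    for name, spec in INTENTS.items():
        if spec["triggers"].search(utterance):
            return name
    return None

def run_dialog(utterance, answers):
    """Fill each of the intent's slots. In a real system every missing
    slot becomes a conversational turn ("What should I say?"); here we
    pull the answers from a pre-supplied dict."""
    intent = classify_intent(utterance)
    if intent is None:
        return None
    filled = {slot: answers[slot] for slot in INTENTS[intent]["slots"]}
    return {"intent": intent, "slots": filled}

result = run_dialog("Remind me to call Norman",
                    {"text": "call Norman", "time": "5pm"})
print(result)
```

The hard part Siri solves is exactly what this sketch waves away: regex triggers collapse under the “multitude of ways for people to express the same intent”, and real systems classify statistically over noisy ASR hypotheses rather than matching keywords.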
So what? If it doesn’t understand what I said, it doesn’t help me.
That is absolutely true. If speech is not recognized – garbage in, garbage out. Personally I find that despite my accent Siri usually works well for me, unless I’m expressing foreign names, or there is significant ambient noise (unfortunately, we don’t all drive Teslas). There are however some design flaws that do seem to repeat themselves.
In order to improve the success rate of the automatic speech recognizer (ASR), Siri seems to communicate your address book to it. So names that appear in your address book are likely to be understood, despite the fact they may be very rare words in general. However this is often overdone, and these names start dominating the ASR output. One problem seems to be that Nuance uses the first and last names as separate words, so every so often I will get “I do not know who Norman Gordon is” because I have a Norman Winarsky and a Noam Gordon as contacts. I believe I see a similar flaw when words from one possible intent’s domain (e.g. sending an email) are recognized mistakenly when Siri already knows I’m doing something else (e.g. looking at movie listings).
This probably says something about the integration between the Nuance ASR and Apple’s Siri software. It looks like there is off-line integration – as in transferring my contacts’ names a-priori – but no real-time integration – in this case, Siri telling the ASR that “Norman Gordon” is not a likely result. Such integration between the ASR and the natural language understanding software is possible, but it is often complex for organizational as well as technical reasons: it requires a closeness that is hard to achieve between separate companies.
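The missing real-time integration could take the form of the NLU layer re-scoring the ASR’s n-best list against what it knows – here, the actual address book – so that a mashup like “Norman Gordon” loses to a hypothesis naming a real contact. This is a hypothetical sketch of that idea; the names, scores, and heuristic are all made up for illustration.

```python
# Hypothetical context re-scoring of ASR hypotheses: boost transcripts
# containing a real contact's full name, penalize name-like bigrams
# assembled from pieces of different contacts.

CONTACTS = {"Norman Winarsky", "Noam Gordon"}

def context_rescore(nbest, contacts=CONTACTS):
    """nbest: list of (transcript, asr_score) pairs. Returns the
    transcript with the best context-adjusted score."""
    rescored = []
    for text, score in nbest:
        words = text.split()
        for i in range(len(words) - 1):
            bigram = f"{words[i]} {words[i + 1]}"
            if words[i][:1].isupper() and words[i + 1][:1].isupper():
                # Looks like a person's name: reward real contacts,
                # penalize plausible-sounding mashups.
                score += 1.0 if bigram in contacts else -1.0
        rescored.append((text, score))
    return max(rescored, key=lambda pair: pair[1])[0]

best = context_rescore([
    ("call Norman Gordon", 0.9),   # ASR's top guess: a contact mashup
    ("call Noam Gordon", 0.8),     # a name actually in the address book
])
print(best)
```

In practice this feedback loop runs inside the recognizer (biasing the language model before decoding, not just re-ranking afterwards), which is exactly why it demands the close coupling between ASR and NLU that separate companies struggle to build.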
So when will it get better?
It will get better. Because it has to. Speech control is here to stay – in smartphones as well as TVs, cars and most other consumer electronics. ASRs are getting better, mostly for one reason. ASRs are trained by listening to people. The biggest hurdle is how much training data they have. In the early days of ASRs, decades ago, this consisted of “listening” to news commentators – people with perfect diction and accent, in a perfect environment. In the last year, more speech sample data was collected through apps like Siri than probably in the two decades prior, and this data is (can be?) tagged with location, context and user information, and is being fed back into these systems to train them. And as this explanation was borrowed from Adam Cheyer, Siri’s co-founder and formerly Siri’s Engineering Director at Apple – you better believe it. We are nearing an inflection point, where great speech recognition is as pervasive as internet access.
So will Siri then do everything?
That’s actually not something I believe will happen as such. Siri is a user interface platform that has been integrated with key phone features and several web services. But to assume it will be the front-end to everything is almost analogous to assuming Apple will write all of the iOS apps. That is clearly not the case.
However – Siri as a gateway to 3rd party apps, as an API that allows other apps that need the hands-free, speech-driven UI to integrate into this user interface, could be really revolutionary. Granted – app developers will have to learn a few new tricks, like managing ontologies, resolving ambiguity, and generally designing natural language user experiences. Apple will need to build methodology and instruct iOS developers, and frankly this is a tad more complex than putting UI elements on the screen. Also I have no idea whether Siri was built as a platform this way, and can dynamically manage new intents, plugging them in and out as apps are installed or removed. But when it does, it enables a world where Siri can learn to do anything – and each thing it “learns”, it learns from a company that excels at doing it, because that is that third party’s core business.
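A platform like the one imagined above needs, at minimum, a registry where apps plug intents in at install time and out at uninstall, with Siri dispatching utterances to whichever app claimed them. The sketch below is pure speculation – no such public API existed – and its class names, keyword matching, and handlers are invented for illustration.

```python
# Hypothetical intent registry for third-party apps: register at
# install, unregister at uninstall, dispatch utterances to handlers.
# Keyword matching stands in for real intent classification.

class IntentRegistry:
    def __init__(self):
        self._handlers = {}  # keyword -> (app_name, handler)

    def register(self, app, keyword, handler):
        """Called when an app is installed and declares an intent."""
        self._handlers[keyword] = (app, handler)

    def unregister(self, app):
        """Called on uninstall: drop every intent the app registered."""
        self._handlers = {k: v for k, v in self._handlers.items()
                          if v[0] != app}

    def dispatch(self, utterance):
        for keyword, (app, handler) in self._handlers.items():
            if keyword in utterance.lower():
                return handler(utterance)
        return "Sorry, I can't help with that yet."

registry = IntentRegistry()
registry.register("DestiApp", "hotel",
                  lambda utterance: "Searching hotels near you...")
print(registry.dispatch("Find me a hotel in Austin"))
```

The interesting design problem is hidden in `dispatch`: with dozens of apps registering overlapping vocabularies, resolving which intent the user meant is exactly the ambiguity-management work app developers would have to learn.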
… and then, maybe, a great jammy dodger bakery chain can solve the wee problem with Scotland with a Siri-enabled app.
Oh, and by the way – you can learn more about Siri, speech, semantic stuff and AI in general at my upcoming SXSW 2013 Panel – How AI is improving User Experiences. So come on, it will be fun.
Link to my guest post on TechCrunch
This coming Monday, it’s going to be exactly eleven years since I was first introduced to the world of mobile software. It was my 29th birthday (yes, my birthday’s this Monday) and a friend hooked me up with friends of his who were taking their first steps in entrepreneurship at a small Palm software house they called Common Sense Software. They were looking for someone who could turn it into a real business, though they actually found a different company (Watapa). Kelly, Ramel, Dan and Benny – this fabulous foursome, as I’ve since come to call them – helped me found MobiMate, and I spent these last eleven years watching (and participating in) the emergence of the mobile software economy – all the way from Palm Pilot applications to the current mobile internet / apps world. Six years ago we honed in on the world of travel services and distribution, and added web applications to the mix. And with MobiMate’s / WorldMate’s track record of leading in this market – from technological advances through business model innovation – I believe the Vanguard is very much where I’ve been this whole time. Which is fitting, considering it’s an anagram of my name…
For the last few years, I’ve been posting actively on my WorldMate Founder’s Blog. It’s time to have something separate from that. The Vanguard is my own blog, unrelated to that company, and does not represent the company or its management.