Mobile Platforms, User Experience

What I Hate About The Apple Watch… and Why It Will Stay On My Wrist

The long and short of it:

  • I got one two months ago, in order to better understand where this type of wearable device is going — what it enables that wasn’t possible before, and how it will affect our digital lives.
  • Like many others, I used to be a bit of a watch aficionado, but let go of my watches many years ago when I realized my cellphone showed the time.
  • I am an early adopter of sorts, but not really a digital junkie.
  • I’ve been in mobile (professionally) for 16 years now, and have typically seen convergence in this market, not divergence.

Based on this experience — this is my critique as well as my insight (ahem) about where this is going.

So first of all — What is the Apple Watch? I don’t know what the people in Cupertino had in mind, but based on what they delivered, it is really several things.

  • A watch that shows time, date, temperature etc. — Ha!
  • A health / fitness wearable
  • A notification / messaging wearable
  • … and a little tiny iPhone strapped to your wrist, sort-of

The first two categories above are generally well understood, at least by early-adopter consumers. The latter two are newer, and the jury’s still out on their utility / desirability. Now, if you’re going to build something that people understand, you’d better deliver what they expect. So here are pet peeves #1 and #2:

#1: Can I please see the time when I want to?

The Apple Watch’s display is off most of the time, to conserve battery. It uses the accelerometer and some background processing to figure out when it’s being looked at by the wearer. This works pretty well when my arm is extended (e.g. I’m standing up), but fails much too often when my arm is in my lap or on a desk. This is (a) frustrating, and leads to (b) me jiggling the watch all over the place to get the display on, which initially leads the other people in the room to assume I’ve developed a tic (or worse), and often ends with the conversation sidetracking to the Apple Watch (hmm…), but not in a good light. Incidentally, this is especially annoying with Siri interaction, which is supposed to start with a similar hand gesture and saying “Hey Siri”. Often the watch will turn off the display while I’m still talking to Siri, because it decides I didn’t mean to speak after all.

#2: The Heart Rate Monitor Really Sucks

Heart rate monitoring when I’m on the couch is kinda cool for extreme quantified-selfers. Most people want heart rate monitoring when they are actually exercising. More often than not, you will find the Apple Watch showing you some totally irrelevant measurement taken long ago. For instance, look at this photo, taken on a stepper/elliptical at the height of my workout:

This happens at least half the time, and seems to be a software problem rather than a hardware one, because when there is actually a recent measurement, it seems to be very accurate:

These consistent software issues bring me to an overall point that goes beyond the obvious:

#3: A Smart-watch is required to be, well, Smart

All too often there is poor attention to context, and therefore either silly interactions or too much manual user interaction. One example is the “stand up” alerts. In keeping with the health-coach approach, the watch will alert you to stand up every hour… even if you’re obviously in a car and moving at 60 mph. It allows you to record your activity, but despite the fact that it measures your heart rate, speed etc., everything is manual — it can’t tell that you’re on a bike (despite your moving at 15 mph with an elevated heart rate), or that your treadmill session is long over (despite your heart rate dropping to 50 and you being 100% stationary). Integration with the Health app on the iPhone isn’t great either; for instance, it will bug you about not exercising despite your having entered a 60-minute swimming session in the app manually (and painstakingly).
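
Since I keep ranting about “smart”, here is roughly what I mean, as a toy sketch (plain Python, with made-up thresholds and sensor values, and nothing to do with how watchOS actually works): use the signals the watch already has to guess what I’m doing before it nags me.

```python
# Toy illustration of context-aware activity detection.
# The sensor fields and thresholds are invented for the example;
# a real implementation would use the platform's motion / health APIs.

def guess_activity(heart_rate_bpm, speed_mph, is_stationary):
    """Return a best guess at the wearer's current activity."""
    if speed_mph > 25:
        return "driving"          # so don't nag me to stand up
    if speed_mph > 10 and heart_rate_bpm > 110:
        return "cycling"          # start a workout automatically
    if is_stationary and heart_rate_bpm < 60:
        return "resting"          # the treadmill session is clearly over
    if not is_stationary and heart_rate_bpm > 120:
        return "running"
    return "unknown"

print(guess_activity(heart_rate_bpm=145, speed_mph=15, is_stationary=False))  # cycling
```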

#4: A New Computing Paradigm Needs a New UX Paradigm

Moving beyond the basics of a watch-cum-activity-tracker to a new breed of computing device, Apple’s approach to delivering value revolves around snippets of information that are typically pushed to the end user. The combination of Notifications (straight out of the iOS push mechanism) and Glances (a tiny-screen take on app widgets), alongside haptic alerts, is supposed to provide a better medium for humans to remain hyper-connected without having to constantly stare at a “big” iPhone screen. In theory, that should allow people to be more in tune with their surroundings and the people with them. In practice, it requires the emergence of new user experience practices.

It took years for desktop / web UX designers to master mobile UX, moving from “let’s cram the desktop experience onto a small screen” (and discovering no one wants to use it), to the current-day focus on what’s relevant and usable in a mobile app. Moving from iPhone apps to Watch glances / notifications will require a lot of trial and error before best practices emerge. We are in the early days where many apps are merely frustrating (e.g. Facebook Messenger — I can receive a message but the only response I can send is thumbs up). This is a topic that probably justifies a separate post. Let’s just say that currently some apps are useful, many are just there because management said “we must have an Apple Watch app when it launches” and product managers / designers let their inner mediocretin shine (hey I just invented a new word!).

The incredibly useless Lumosity app

Another technology that under-delivers at this stage is haptic alerts (taptics). Having the device strapped to your wrist makes vibrations a great way to draw your attention. But frankly, I was hoping to get more than a binary “Yo”. Case in point — navigation. I ride a motorcycle, and I was really hoping I could use Apple Maps navigation as a gentle “GPS on your wrist” that works without looking at or listening to anything. But for the life of me, I can’t figure out when it says “go left” (three taps?) and when it says “go right” (a series of angry buzzes?).

So Why Can’t I Leave Home Without It?

In truth, this is hard for me to qualify, but three weeks into the experience I found myself leaving home without it one day and feeling, well, naked.

For one, the Apple Watch grows on you. You get used to being able to get the time without pulling out your phone, Siri-on-your-wrist makes a lot of sense (especially in the car), etc. etc.

Maybe even more salient is how lazy we are. I found myself preferring to check some piece of info on the watch rather than on the phone, because the watch was strapped to my wrist, whereas the phone was all the way on the other end of the coffee table, requiring the considerable effort of stretching out, reaching over and clicking a button. This is not unlike the reason we all do email on the iPhone even at home, or at our desks, despite our perfectly good laptops being in the next room or even right in front of us.

And then there’s the eco-system. The Apple Watch is useful out of the box, because it syncs with your iPhone, iPad etc. And while a lot about that eco-system is imperfect from a software perspective, it’s still the most complete one out there. Which makes things even more convenient, by saving you the hassle of loading it up with content, setting things up etc. Did I mention people don’t like hassle?

So while the current Apple Watch is definitely a version 1, and while Apple’s people (mostly the software folks) have a lot of work to do, if there’s one thing I’ve learned about consumer tech over the last 15 years, it is that if something new is more convenient for people, then (most) other things being equal, they will easily get used to it and not be able to go back to the old ways. The Apple Watch makes some things more convenient and accessible, and since some of these are already things we do habitually, I believe it is here to stay.

Mobile Platforms, User Experience

Cortana Opens Up Where Siri Remains a Recluse

A Big Step Forward that Leaves Much To Be Desired

Cortana in Halo 4

Given Apple’s and Google’s dominance, not many of us follow Microsoft news anymore. But instead of coming apart at the seams, it looks like Microsoft is adopting the only credible strategy – trying to out-innovate its competition to the point where it becomes a leader again. Signs of success are visible, with Azure becoming the most credible competition to AWS, and it seems like some of its artificial intelligence efforts are just as ambitious. Against that backdrop, the recent Cortana / Windows Speech Platform developments are steps in the right direction.

App vs. Platform

Back in September 2012, ahead of the iPhone 5 / iOS 6 launch, we were trying to predict Apple’s next move. Siri had launched a year earlier on the iPhone 4S, and our wager at the time (at Desti / SRI) was that iOS 6 would open up Siri-as-a-platform, allowing application developers to tie their offerings into the speech-driven UX paradigm and bringing speech interaction to a critical mass. Guess what – years later, Siri is still a limited, closed service, and even a Google Now API is still a rumor. So Microsoft’s announcements last week are a breath of fresh air and potentially a strategic move. In a nutshell, here are the main points of what was announced (and here’s a link to the lecture at //build/):

  • Cortana available on all Windows platforms
  • 3rd party apps can extend Cortana by “registering” to respond to requests, e.g. “Tell my group on Slack that we will meet 30 minutes later”
  • Requests can be handled in the app, or the app can interact using Cortana’s dialog UI

Extending Cortana to Windows 10 is an important step towards making voice interaction with computers mainstream. Making Cortana pluggable turns it into a platform that can hope to be pervasive through a network effect. However – what was announced leaves much to be desired with regards to both platform strategy and platform capabilities.

Cortana API: Speech without Natural Language is an Unfinished Bridge

I’m a frequent user of Siri. There are simply many situations where immediate, hands-free action is the quickest / safest way to get some help or to record some information. One of Siri’s biggest issues in such situations is its linear behavior – once it goes down a path, it’s very hard to correct and go down another. Consider, for instance, searching for gas stations while you’re driving down a highway – you get a list of stations and then it kind of cycles through them by order of distance (not very helpful if you’ve already passed something). But going back and forth in that list (“show me the previous one”) or adding something to your intent (“show me the one near the airport”) is impossible. So often you end up going back to tapping and typing. That’s where a more powerful natural-language-understanding platform is needed, e.g. SRI’s VPA, or potentially wit.ai (now owned by Facebook) or api.ai.

Cortana’s API allows you to create rudimentary grammars where you more-or-less need to literally specify the exact sentences your app should understand, with rudimentary capabilities for describing sentence templates. There is no real notion of synonyms, of pursuing intent completion (i.e. “filling all the mandatory fields in the form”), of going back to change something, etc. So this is more or less an IVR-specification platform, and we all know how we love IVRs, right? If you want to do more, the app can get the raw text and “parse it” itself. That means that every app developer who wants to go beyond the IVR model needs to learn how to build a natural-language-understanding system. That’s not how platforms work, and it will not support the proliferation of this mode of interaction – crucial for making Cortana a strategic asset.

Now arguably you could say – well, maybe they never saw it as a strategic asset, maybe they were just toeing the line set by Apple and Google. That, however, would be a missed opportunity.
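
To make the “IVR-specification” point concrete, here is a toy sketch (plain Python, not the actual Cortana voice-command format) of what literal sentence templates amount to, and why they collapse the moment the user rephrases:

```python
import re

# Toy illustration of literal sentence templates (not the real Cortana API).
# Each template must spell out the exact phrasing the app will accept;
# {slot} captures a free-text chunk.

TEMPLATES = [
    ("tell my group on slack that {message}", "slack.send_group_message"),
    ("show me gas stations near {place}",     "maps.find_gas"),
]

def match(utterance):
    """Return (intent, slots) for the first template that matches literally."""
    for template, intent in TEMPLATES:
        pattern = re.escape(template).replace(r"\{message\}", "(?P<message>.+)") \
                                     .replace(r"\{place\}", "(?P<place>.+)")
        m = re.fullmatch(pattern, utterance.lower())
        if m:
            return intent, m.groupdict()
    return None, {}

print(match("tell my group on slack that we will meet 30 minutes later"))
# ('slack.send_group_message', {'message': 'we will meet 30 minutes later'})

print(match("let the slack group know we're 30 minutes late"))
# (None, {}) -- same intent, different phrasing, no match
```

A real NLU platform would map both phrasings to the same intent and then drive a dialog to fill whatever is still missing; that is exactly the part the current API leaves to each app developer.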

Speech-enabling Things is a Credible Platform Strategy

The Internet of Things is coming, and it is going to be an all-encompassing experience – after all, we are surrounded by things. For many reasons, these things will not all come from the same company. A company that owns a meaningful part of the experience of these things and makes them dependent on its platform – for UI, for personal data, for connectivity etc. – would own the user experience for so much of the user’s world. In other words – give these device makers a standardized, integrated interaction platform for their devices, and you own billions of consumers’ lives.

Cortana in the cloud can be a (front-end to) a platform that 3rd party developers use to speech-enable interactions with devices – whether they make the devices (e.g. the wearable camera that needs to upload the images it takes) or the experiences that use them (e.g. activating Pandora on your wireless speaker). Give these app / device developers a way to create this experience and connect it to the user’s personal profile (which he/she already accesses through their laptop, smartphone, tablet etc.) and you become the glue that holds the world together. This type of software-driven platform play is exactly the strategy Microsoft excelled at for so many years.

To be an element of such a strategy, Cortana needs to be a cloud service. Not just a service available across Windows devices, but a cloud-based platform-as-a-service that can integrate with non-Windows Things. That can be part of a wider strategy of IoT-focused platform-as-a-service (for instance – connecting your things to your personal profile, so they can recognize you and interact in a personalized context), but mostly it needs to be Damn Good. Because Google is coming. Building a platform ecosystem and then milking it for all it’s worth used to be Microsoft’s forte. Cortana in the cloud, as a strong NLU and speech platform, could be an important element of its comeback strategy.

Uncategorized

Desti Natural Search Comes of Age

Over the last couple of years, we’ve been playing with various user interfaces for iPad-based (really, keyboard / screen / touch-based) natural language interaction. This is (surprisingly?) different from voice-driven interaction, and an extremely effective way to search. My blog post about the evolution and the lessons learned is here:

http://blog.desti.com/index.php/2014/why-is-natural-search-awesome-and-how-we-got-here/

Desti Natural Language Search UI

 

Uncategorized

Google Glass from the Subject’s Perspective

Last week I had the honor and pleasure of being the first-ever subject of a press interview conducted using Google Glass – followed by a very interesting discussion with Robert Scoble. Here are some of the insights we discussed, as well as a few that came to me afterwards.


Photography and Video will be impacted First

Consider how phone-based cameras have changed photography. My eldest daughter is almost 9 years old. We have a few hundred images of her first year, and about 10 short videos. My son is now 18 months old, and as my wife was preparing his first scrapbook album last week, she browsed through several thousand digital photos. On my phone alone, I have dozens of video clips of him doing everything you can imagine a baby doing, and some things you probably shouldn’t. The reason is simple – we had our smartphones with us, and they take good photos and store them. And need I mention Instagram?

Google Glass takes this to the extreme. With your smartphone, you actually have to reach for your pocket / bag, open the camera app, point and shoot. Google Glass is always there, immediately available, always focused on your subject, and hands-free. Video photography through Google Glass is vastly superior for the simple reason that your head is the most stable part of your body. What all of this comes down to is simply that people will be shooting stills and video all the time. Have you seen those great GoPro clips? Now consider having a GoPro camera on you, ready and available – perpetually. There will not just be a whole new influx of images and video, but new applications for these too. Think Google Street View everywhere, because the mere fact that a human looked somewhere means it’s recorded on some server. In the forest, in your house, and in your bathroom. Not sure about the latter? Check out Scoble’s latest adventures…

Useful Augmented Reality – Less will be more

Having information overlaid on top of your worldview is probably the sexiest feature from the perspective of us geeks. The promise of Terminator-vision / fighter-pilot displays provides an instant rush of blood to the head. And surely overlaying all of the great Google Places info on places, Facebook (well – Google+) info on people, and Google Goggles info on things will be awesome, right?

Well, my perspective is a little different. After the initial wow effect, most of these will be unwanted distractions. Simply put – too many signals become noise, especially when it’s human perception that is concerned. This lesson has already been learned with similar systems in aerospace settings – and there the user is a carefully selected, highly trained individual, not an average consumer.

The art and science will be in figuring out which of the hundreds of visible subjects is actually interesting enough to “augment”. This will require not just much better and faster computer vision (hard!) but a much better and deeper understanding of these subjects – which one is really special for me, given the context of what I’m doing, what makes it so, and when to actually highlight it. Give me too much signal and I will simply tune out – or just take the damn thing off.

Achieving this requires a deeper understanding both of the world and of the individual. Deeper, more detailed POI databases (for places), product databases (for objects), and more contextual information about the people around me, what their contexts are – and what is mine. It is almost surprising to what degree this capability is non-existent today.
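
To make this concrete, here is a toy sketch (plain Python, every field and weight invented) of the kind of relevance filter I have in mind: score each recognized subject against my current context, and only surface the one or two that really clear the bar.

```python
# Toy sketch of context-aware AR filtering (all fields and weights are invented).
# The idea: score every recognized subject against the user's context and
# overlay only the few that clear a high bar, instead of labeling everything.

from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    category: str              # e.g. "restaurant", "person", "product"
    distance_m: float
    personal_affinity: float   # 0..1, from the user's history / social graph

def relevance(subject, context):
    """Higher means more worth overlaying right now."""
    score = subject.personal_affinity
    if subject.category in context["current_goals"]:     # e.g. looking for lunch
        score += 0.5
    score -= min(subject.distance_m / 200.0, 0.5)         # nearby things matter more
    return score

def pick_overlays(subjects, context, max_items=2, threshold=0.6):
    ranked = sorted(subjects, key=lambda s: relevance(s, context), reverse=True)
    return [s for s in ranked[:max_items] if relevance(s, context) >= threshold]

subjects = [
    Subject("Joe's Diner", "restaurant", 40, 0.3),
    Subject("Random office building", "building", 25, 0.0),
    Subject("Old friend from college", "person", 15, 0.9),
]
context = {"current_goals": {"restaurant"}}
print([s.name for s in pick_overlays(subjects, context)])
# ['Old friend from college', "Joe's Diner"]
```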

Initially – Vertical Applications Will be Key

Consider the discussion of video photography above. Now put Google Glass on every policeman and consider the utility of simply recording every interaction these people have with the public. Put Google Glass on every trainee driver and have them debrief using the recorded video. Or just take it with you to your next classroom. Trivial capabilities like being able to tag an interesting point in time and immediately jump back to it on replay – how useful is that?

And as for augmented reality – think of simple logistics applications, like searching a warehouse where the objects are tagged with some kind of QR code, and a simple scan with your eyes gives you a visual cue as to where they are. The simple applications will deliver immense value, drive adoption and experience, and through those – curiosity and new, further-reaching ideas.

And if you stuck around this long – here are my most amazing revelations:

  • Wearing Google Glass grows your facial hair!

Proof:

Sergey Brin, Robert Scoble and Tim wearing Google Glass

  • Google Glass video makes you photogenic – watch Scoble’s interview of me and compare to my usual ugliness…

Uncategorized

The Case for Siri

Since Siri’s public debut as a key iPhone feature 18 months ago, I keep getting involved in conversations (read: heated arguments) with friends and colleagues, debating whether Siri is the 2nd coming or the reason Apple stock lost 30%. I figure it’d be more efficient to just write some of this stuff down…


Due Disclosure:

I run Desti, an SRI International spin-out that utilizes post-Siri technology. However, despite some catchy headlines, Desti is not “Siri for Travel”, nor do I have any vested interest in Siri’s success. What Desti is, however, is the world’s most awesome semantic search engine for travel, and that does give me some perspective on the technology.

Oh, and by the way, I confess, I’m a Siri addict.

Siri is great. Honest.

The combination of being very busy and very forgetful means there are at least 20 important things that go through my mind every day and get lost. Not forever – just long enough to stump me a few days later. Having an assistant at my fingertips that allows me to do some things – typically set a reminder, or send an immediate message to someone – makes a huge difference in my productivity. The typical use-case for me is driving or walking, realizing there is something I forgot, or thinking up a great new idea and knowing that I will forget all about it by the time I reach my destination. These are linear use-cases, where the action only has a few steps (e.g. set a reminder, with given text, at a given time), and Siri’s advantage is simply that it allows me to manipulate my iPhone immediately, hands-free, and complete the action in seconds. I also use Siri for local search, web search and driving directions.

Voice command on steroids – is that all it is?

Frankly – yes. When Siri made its public debut as an independent company, it was integrated with many 3rd party services, which were scrapped and replaced with deep integration with the iPhone platform when Apple re-launched it. Despite my deep frustration with Siri not booking hotels these days, for instance (not), I think the decision to do one thing really well – provide a hands-free interface to core smartphone functionality (we used to call it PIM, back in the day) – was the right way to go. Done well, and marketed well, this makes the smartphone a much stronger tool.

But I hate Siri. It doesn’t understand Scottish and it doesn’t tell John Malkovich good jokes

As mentioned, I’ve run into a lot of Siri-bashers in the last year. Generally they break down into two groups: the people who say Siri never understands them, and the people who say Siri is stupid. I’m going to discuss the speech recognition story in a minute (SRI spin-out, right?), but regarding the latter point I have to say two things. First, most people don’t really know what the “right” use-cases for Siri are. Somewhere between questionable marketing decisions and too little built-in tutorial, I find that people’s expectations of Siri are often closer to a “talking replacement for Google, Wikipedia and the Bible” than to what Siri really is. That is a shame, because the bottom line is that it is under-appreciated by many people who could really put it to good use. Apple marketing is great, but it’s better at drawing a grand vision than it is at explaining specific features (did I mention my losses on AAPL?). While the Siri team has done great work giving Siri a character, at the end of the day it should be a tool, not an entertainment app (my 8-year-old daughter begs to differ, though).

OK, but it still doesn’t understand ME

First, let me explain what Siri is. Siri is NOT voice-recognition software; Apple licenses that capability from Nuance. Siri is a system that takes voice-recognition output – “natural language” – figures out what the intent is (e.g. send an email), then goes through a certain conversational workflow to collect the info needed to complete that intent. Natural language understanding is a hard problem, and weaving together multiple possible intents with all the possible different flows is complex. It is hard because there is a multitude of ways for people to express the same intent, and errors in the speech recognition add complexity. Siri is the first such system to do it well, and certainly the first one to do it well on such a massive scale.
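
For the technically inclined, here is a bare-bones sketch (plain Python, with invented intents and slots; it is not a description of Siri’s actual internals) of what “figure out the intent, then collect what’s missing” looks like:

```python
# Bare-bones intent + slot-filling sketch (invented example, not Siri's internals).
# Step 1: map the recognized text to an intent.
# Step 2: keep asking until every required slot for that intent is filled.

INTENTS = {
    "send_email": {"required_slots": ["recipient", "body"]},
    "set_reminder": {"required_slots": ["text", "time"]},
}

def detect_intent(text):
    text = text.lower()
    if "email" in text:
        return "send_email"
    if "remind" in text:
        return "set_reminder"
    return None

def missing_slots(intent, filled):
    return [s for s in INTENTS[intent]["required_slots"] if s not in filled]

# Simulated conversation
intent = detect_intent("send an email to Norman")
filled = {"recipient": "Norman"}
for slot in missing_slots(intent, filled):
    print(f"Assistant: what should the {slot} say?")   # conversational workflow
    filled[slot] = "let's meet 30 minutes later"        # the user's (simulated) answer

print(intent, filled)
# send_email {'recipient': 'Norman', 'body': "let's meet 30 minutes later"}
```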

So what? If it doesn’t understand what I said, it doesn’t help me.

That is absolutely true. If speech is not recognized – garbage in, garbage out. Personally, I find that despite my accent, Siri usually works well for me, unless I’m saying foreign names or there is significant ambient noise (unfortunately, we don’t all drive Teslas). There are, however, some design flaws that do seem to repeat themselves.

In order to improve the success rate of the automatic speech recognizer (ASR), Siri seems to communicate your address book to it. So names that appear in your address book are likely to be understood, despite the fact they may be very rare words in general. However this is often overdone, and these names start dominating the ASR output. One problem seems to be that Nuance uses the first and last names as separate words, so every so often I will get “I do not know who Norman Gordon is” because I have a Norman Winarsky and a Noam Gordon as contacts. I believe I see a similar flaw when words from one possible intent’s domain (e.g. sending an email) are recognized mistakenly when Siri already knows I’m doing something else (e.g. looking at movie listings).
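
Here is a tiny illustration (invented, but it captures the flaw as I understand it) of why boosting first and last names as independent words produces “contacts” that don’t exist:

```python
# Why biasing the recognizer with first and last names as *separate* words
# can produce contacts that don't exist (invented illustration).

contacts = ["Norman Winarsky", "Noam Gordon"]

first_names = {c.split()[0] for c in contacts}   # {'Norman', 'Noam'}
last_names  = {c.split()[1] for c in contacts}   # {'Winarsky', 'Gordon'}

# The ASR is free to combine any boosted first name with any boosted last name:
possible_outputs = {f"{f} {l}" for f in first_names for l in last_names}
print(sorted(possible_outputs))
# ['Noam Gordon', 'Noam Winarsky', 'Norman Gordon', 'Norman Winarsky']
# "Norman Gordon" is a plausible recognition result, but not an actual contact.
```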

This probably says something about the integration between the Nuance ASR and Apple’s Siri software. It looks like there is offline integration – as in transferring my contacts’ names a priori – but no real-time integration – in this case, Siri telling the ASR that “Norman Gordon” is not a likely result. Such integration between the ASR and the natural-language-understanding software is possible, but it is often complex, not just for technical reasons but for organizational ones: it requires a level of coupling that is hard to achieve between separate companies.

So when will it get better?

It will get better. Because it has to. Speech control is here to stay – in smartphones as well as TVs, cars and most other consumer electronics. ASRs are getting better, mostly for one reason: ASRs are trained by listening to people, and the biggest hurdle is how much training data they have. In the early days of ASRs, decades ago, this consisted of “listening” to news commentators – people with perfect diction and accent, in a perfect environment. In the last year, more speech sample data was collected through apps like Siri than probably in the two decades prior, and this data is (can be?) tagged with location, context and user information, and is being fed back into these systems to train them. And since this explanation was borrowed from Adam Cheyer, Siri’s co-founder and formerly Siri’s engineering director at Apple – you’d better believe it. We are nearing an inflection point, where great speech recognition is as pervasive as internet access.

So will Siri then do everything?

That’s actually not something I believe will happen as such. Siri is a user interface platform that has been integrated with key phone features and several web services. But to assume it will be the front-end to everything is almost analogous to assuming Apple will write all of the iOS apps. That is clearly not the case.

However – Siri as a gateway to 3rd party apps, as an API that allows other apps that need a hands-free, speech-driven UI to integrate into this user interface, could be truly revolutionary. Granted – app developers will have to learn a few new tricks, like managing ontologies, resolving ambiguity, and generally designing natural language user experiences. Apple will need to build methodology and instruct iOS developers, and frankly this is a tad more complex than putting UI elements on the screen. Also, I have no idea whether Siri was built as a platform in this way, able to dynamically manage new intents, plugging them in and out as apps are installed or removed. But when it does, it enables a world where Siri can learn to do anything – and each thing it “learns”, it learns from a company that excels at doing it, because that is that third party’s core business.
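
Just to illustrate what I’m imagining (a hypothetical sketch, not any real Apple API), here is the kind of plug-in intent registry such a platform could expose:

```python
# Hypothetical sketch of a third-party intent registry -- not any real Apple API.
# Apps register the intents they can handle; the assistant routes matching
# requests to the right app.

REGISTRY = {}

def register_intent(intent_name, sample_phrases, handler):
    """Called by an app when it is installed."""
    REGISTRY[intent_name] = {"phrases": sample_phrases, "handler": handler}

def route(utterance):
    for intent, spec in REGISTRY.items():
        if any(p in utterance.lower() for p in spec["phrases"]):
            return spec["handler"](utterance)
    return "Sorry, no installed app can handle that."

# A hotel-booking app plugs itself in at install time:
register_intent(
    "book_hotel",
    ["book a hotel", "find me a room"],
    handler=lambda u: f"HotelApp: searching rooms for request '{u}'",
)

print(route("Book a hotel in Austin for SXSW"))
```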

… and then, maybe, a great jammy dodger bakery chain can solve the wee problem with Scotland with a Siri-enabled app.

Oh, and by the way – you can learn more about Siri, speech, semantic stuff and AI in general at my upcoming SXSW 2013 Panel – How AI is improving User Experiences. So come on, it will be fun.

Mobile Platforms, Online Media

My Birthday Gift: The Kindle Fire, and Why It’s The First Credible Android Tablet

Over the past 6 months, I’ve been watching perplexed as vendor after vendor launched Android tablets into the market with no success. Perplexed for a simple reason – I could not understand how they expected consumers to buy their $559, $499 or even $399 tablets when they could get an iPad 2 for $499 and get the real deal – the TRUE status symbol, the best content & app eco-system. What were Samsung, Motorola, Dell and Asus thinking, I wondered. Was it a shortage or the price of components that pushed them to that price bracket? Was it protecting the brand at all costs, even at the cost of failure?

A couple months ago, I asked a question on Quora and the results were staggering – over 20:1 for iPad.

So what has changed?  The $199 Kindle Fire. You can get two of those, and still have money for another holiday gift.

Amazon’s Kindle is an ecosystem, not a device. Amazon sees it as a way to make sure you buy all your content – books, music, TV – from Amazon. Just yesterday they announced the streaming deal with FOX TV – more free content for Amazon Prime subscribers. Guess which devices will feature it? Remember Sony’s Howard Stringer’s remark a few weeks ago – “Apple makes an iPad, but does it make a movie?” Amazon doesn’t make them, but it sure-as-hell moves them around. In a move right out of Steve Jobs’ playbook, Amazon is tying it all together – device, app store, content store, streaming rights (with free content for Prime members), e-commerce for physical goods, payment options (from one-click to credit cards), cloud storage, even a loyalty program!

The Kindle now touches everything Amazon does, and so many other companies. It threatens Netflix streaming – Amazon is securing more content for Prime members and has a sound pay-TV model with a complete eco-system around it. And it obliterates every other Android tablet manufacturer’s volume forecast for the holiday season (a $200 rival with a strong brand behind it).

And it’s a credible contender for Apple’s eco-system. It is as broad and as far-reaching, and goes even further with physical e-commerce embedded.

Probably the only risk is execution. If the software / hardware is good enough (defined as – better than most Android implementations), this will make a huge dent in the market. iPad will become the high-end product, but Android, through Kindle, could be the mass-market. Not so different from iPhones and Androids, actually.

My pre-order is in.