Startups & Entrepreneurship

Startup CEO: You need a CEO Advisor

So you’ve been doing this for a while now. Maybe you’ve gone through the initial excitement of idea / team / frenzied MVP development / seed funding. Maybe you’re even further along — reached some level of product-market fit, raised an A or B round, and are trying to grow the company and the metrics.

Here are some things you’ve most definitely noticed already:

No One Prepared You For This Job.

Unless you’ve been through the wringer successfully multiple times, sooner (usually) or later you’re going to be doing things and finding yourself in situations you’ve never been in before. Maybe it’s going to investor meeting after investor meeting and getting rejected despite your obvious preparation and great results. Maybe it’s dealing with a co-founder or board member who just doesn’t see things your way, or with a troublesome star employee. Maybe it’s a VP of Sales that you need to hire and have no clue how to build a compensation plan for. Or maybe it’s even that meeting with the celebrity public-company CEO who suggests buying your company long before you were ready for any of this.

Having a Stanford MBA, reading everything you can in aptly titled books like The Hard Thing About Hard Things, even seeing it from the sidelines as an employee or co-founder — all of that helps, but you can’t learn to fly a plane from a book, or in a class, or by sitting in the flight engineer’s seat. The situations you will go through are never exactly what you read about, the people involved are never exactly the same, and your personal demeanor is not Steve Jobs’ or Ben Horowitz’s.

Doing the right thing is easier mentally and emotionally when you are presented with some perspective — here’s where you are, here’s why you got here, here’s what typically happens afterwards if you do this or that.

Guess what — tens of thousands of CEOs have been in similar situations before. Without tapping into some of that experience, you’re flying blind.


It’s the loneliest job, and your state of mind counts.

Saying that startup CEO is the loneliest job in the world is almost a cliché (don’t believe me? Google it!). No one is really, fully aligned with you.

Your co-founders share interests with you, but have a narrower perspective and typically only want to get their own job done well and go home expecting you to do yours (anything else and you have a different kind of problem).

Your investors want you to succeed, but don’t have the time and attention to understand exactly what you’re going through (more on that below).

Your employees are your employees. You have a responsibility for them, but you’re the driver and they are the passengers. The last thing they want to hear is that the driver isn’t sure where he’s going or how to drive.

Sooner or later, the loneliness and stress will get to you and affect the quality and pace of your decision-making. Furthermore, it will affect how you relate to the people actually doing the work — writing the code, designing the products, meeting the clients. Lastly, it’s going to screw up your ability to persuade investors, partners and clients to give the company what it needs.

Your level-headedness affects results, and you need help maintaining it.

A good CEO Advisor brings perspective and experience and is with you in your predicament — but is also apart from it, because it’s NOT his full-time job. As such, he is able to provide you with a mirror, perspective, management advice, and a soothing or urging voice — helping you perform.

You Need To Perform, Consistently

Even if you’re the most experienced CEO out there, doing phenomenally well in a break-out company, every day is going to present hard choices, time pressure and stress. Just as in pro sports, the best athletes need the best coaches — to guide them, to help them deal with the challenges and the anxiety. Getting the most out of yourself is your duty to your shareholders, but you’re only human. So the best-performing CEOs get the best coaches. Don’t believe me? Read Trillion Dollar Coach. Equipping yourself and the company for growth is the solid business choice.

Every day in your life as a startup CEO offers an opportunity to screw up — and an opportunity to grow.

Who Can / Can’t Help You

Evidently, perspective and experience count. What you want is:

  • Someone who’s been in your shoes — direct experience as a startup CEO, because that’s the only way to understand both the business and the human aspects of what you’re going through.
  • Someone who’s seen it multiple times — perspective is a function of seeing things from multiple angles, experimenting with different solutions to similar issues. The more, the better.
  • Someone who has time and attention, and is focused on your success — one can’t parachute into a situation last-minute and really provide assistance.
  • Someone whose communication style is right for you — if you don’t have chemistry and good communication, you will not share enough and he/she will not be able to guide you. Either you won’t be heard or you won’t really be listening.
  • Someone with the network to support you — a lot of what you’ll need is relationships: investors, partners, 3rd-party advice. Not having a network is a red flag that the person hasn’t done enough / is not appreciated by others.

While most of these points are obvious, I’d like to reiterate the first one. Many people can offer you specific advice in their area of expertise — lawyers, bankers, management consultants and so forth. But your job is to fuse these considerations, apply them to the conditions and constraints at hand, and make decisions. That’s where the forces pulling you in different directions put so much mental and emotional strain on you.

People who’ve not been in that situation will have a very hard time helping you effectively, but won’t necessarily know that they’re wasting your time. They may provide bad advice because they only understand one aspect of the situation (like the lawyer who will focus on your legal exposure while totally disregarding the business opportunity / the risk of not taking the risk…). They may provide good theoretical advice that totally misses your team’s capability to execute, or the long-term effects on them (like the McKinsey guy calculating in Mythical Man-Months).

 “Best advice for a new CEO comes not from VCs or other ‘advisors’ people tend to put on PowerPoint slides, but from more experienced CEOs who have been in their shoes before… ideally multiple times over.” – Bilal Zuberi, Lux Capital

Wait A Second, Isn’t That What My Board Members Are For?

Unfortunately, no.

Officially, the board is there to provide corporate governance and oversight. What this translates to in most cases is really protecting investors’ interests. And while it’s generally in the shareholders’ interest that you do a good job, in the vast majority of cases they are not equipped to help you do it, for one or more of the following reasons:

Many of them have never been in your shoes

While many venture investors have entrepreneurial backgrounds, the majority are either finance guys who never worked at a startup, let alone founded one, or successful executives / early employees who helped grow a tech company — but not as the CEO, and not necessarily in the early stages. These people may have seen a lot, but never felt what it’s like. Often investors boast of an entrepreneurial background based on a couple of years as founders of a company. Guess what — whether you managed to get rich with your first company after a few years, or struggled for a while and realized it’s easier to make money investing other people’s money — your experience as a startup CEO is limited. Then there are the corporate VCs, typically staffed by corporate executives who are super-knowledgeable about their domain, but frankly have not spent a day of their lives working at a startup.

So what about the 20–30% who do have the right background?

If they’re good at their job — they are too busy

Doing a good job as a VC or professional angel means spending a lot of time in board meetings, deal-flow meetings and fund-raising for your fund. The best venture capitalists don’t have the time or attention span to be with you in the trenches. VCs typically look at thousands of deals a year, many hundreds per partner. That’s multiple deal-flow meetings per day, plus a half-day partner meeting every week. On top of that, if you’re sitting on 5–10 boards, you add about that number of formal “1:1 CEO Update” meetings a month, and at least one board meeting per week, which gobbles up a whole day with preparation. And then VCs have to fund-raise too, which means everything from schmoozing with LPs regularly to 6 months of preparations and road-shows. Bottom line: they don’t have the time or attention to spend meaningful time on your needs every week.
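The time math above can be made concrete with a back-of-the-envelope tally. Every number here is an illustrative assumption based on the estimates in the text, not measured data:

```python
# Rough, illustrative weekly time budget for a VC partner (hours).
# All numbers are assumptions for the sake of the argument.
deal_flow_meetings = 10              # ~2 per working day
deal_flow_hours = deal_flow_meetings * 1.0
partner_meeting_hours = 4.0          # weekly half-day partner meeting
boards = 8                           # sitting on 5-10 boards
ceo_update_hours = boards * 1.0 / 4  # monthly 1:1 per CEO, averaged per week
board_meeting_hours = 8.0            # ~1 board meeting/week incl. preparation
fundraising_hours = 5.0              # LP schmoozing and roadshows, averaged

total = (deal_flow_hours + partner_meeting_hours + ceo_update_hours
         + board_meeting_hours + fundraising_hours)
print(f"Committed hours/week: {total:.0f}")  # most of the week, gone
```

Whatever exact numbers you plug in, the structural commitments alone consume the bulk of a working week before any time is left for sitting with one CEO in the trenches.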

Finally, for the few that do…

There is a built-in conflict of interest.

When you have bad news (“we didn’t make our numbers this quarter,” “my co-founder is not a good enough manager”), or just decisions that may conflict with the investors’ goals (“we have an acquisition offer but the investors don’t want to sell yet”), the help you need is with crafting the approach that will get these people on your side. Obviously you can’t do that with them.

Honestly, many investors don’t like CEOs who show vulnerability; it scares them into thinking they need a new CEO.

You need someone with whom you can be vulnerable.

OK, I’m Convinced — How Does It Work?

In my experience, making this relationship a success requires a measure of discipline. This is not about getting $0.02 worth of advice; it’s about having a structure in which there is deep understanding and commitment to doing the hard things — whether that’s executing on difficult decisions, or changing habits and attitudes. On your end, as the CEO, you have to commit to:

  • Coming prepared to meetings / calls — make a list of the top 3–5 topics that are important to handle, then dive deep.
  • Truly listening — be responsive, not reactive.
  • Committing and delivering — agree on actions, then come back to the next meeting with really good excuses if you haven’t done them. Or better, just do them.
  • Setting a framework — meet weekly, preferably at set times. Share an agenda, share action items. Report progress. It shows commitment and creates positive habits.

Don’t treat it as a therapy session. Treat it as a management process.

The advisor should be committed as well:

  • Committing to a schedule and mutual expectations about process.
  • Being always available for urgent stuff — via email, messaging, calls.
  • Getting involved with co-founders, board members and possibly other execs as needed.
  • Using her network for your benefit — investors, experts, business partners.

Possibly the most important thing to remember is that this is a normal, welcome business activity, just like having board meetings or weekly management meetings. I mentioned Bill Campbell’s work (“Trillion Dollar Coach”) with some of the most outstanding CEOs you’ve seen on the cover of BusinessWeek. These guys worked with him because he made them even better.

The CEO Advisor is not there because you are a flawed CEO, but because you are a CEO and you are human. There’s no stigma — this is not Denpok Singh, the spiritual advisor from HBO’s Silicon Valley. It’s a business function, helping the CEO be 10% better.

At an early stage company, a CEO that’s 10% better is often the difference between success and failure.

Mobility / Automotive, User Experience

The Real Tipping Point For Electric Cars: A Consumer’s View

Electrification is an inevitable trend, whether driven by the need to avert climate change, governments wanting to boost a future industry, or the many advantages of electric propulsion — simplicity, cheap maintenance, torque… With autonomy generally accepted as something that will take much longer to deliver at scale, the automotive industry’s focus is turning back to electrification as the disruptive wave that you either ride or get drowned by. But when is it coming? And what will be the inflection point?

Electrification is THE disruptive theme in the automotive industry in the coming decade.


Despite all the Tesla press and the announcements by leading automakers that they are switching focus to electric vehicles — for instance VW’s announcement last month and Daimler’s farewell to IC engines earlier this year — the fact remains that EV sales represent low single digits of the US vehicle market: according to EV Adoption, less than 2% of sales in 2018, growing 50% year over year. While this rate of growth is significant, we are still looking at decades before internal combustion cars are eliminated. So where is the “iPhone moment” for this industry, and what will bring us there?

A prevailing opinion among industry experts is that it’s all about price: get EVs to price parity with traditional gas-powered cars and people will choose them. Bain & Co., in a recent article, sees not one but two distinct tipping points for EV adoption — one when the total cost of ownership of an EV drops below that of an ICE-powered car, and another when the purchase price does. Since we’re already past that first point according to their study (gas and maintenance costs for EVs are already significantly lower, reducing TCO over a car’s lifetime), a blunt way to express their point is: “consumers are too stupid to realize that EVs cost less to own, so we will have to wait until EVs also cost less to buy.” (Not being employed by Bain, I get to be direct…)

A recent experience reminded me that consumers are not that stupid; rather, they have multiple valid considerations that need to be taken into account.

On Black Friday 2019, Hyundai USA was trying to offload its remaining stock of the 2019-model Hyundai Ioniq EV, in anticipation of the upgraded 2020 model’s launch. The Ioniq was offered with a low down payment and a $119/month lease. If you are a daily commuter, chances are you spend much more than $119/month on gas, so effectively you’d be getting a new car for free or better (assuming your old junk heap was worth a couple thousand dollars). I turned to my ex-wife and suggested this great deal as a way to save money and upgrade her car. Her response: “a limited-range EV cannot be a primary family car.”
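To see why the deal looked like free money, here’s a back-of-the-envelope comparison. The commute length, fuel economy and gas price are my assumptions for illustration, not figures from the offer:

```python
# Illustrative only: does a $119/month EV lease beat a commuter's gas bill?
lease_per_month = 119.0
daily_miles = 50          # assumed round-trip commute
work_days = 21            # commuting days per month
mpg = 25                  # assumed fuel economy of the old car
gas_price = 3.50          # assumed $/gallon, roughly California 2019

gas_per_month = daily_miles * work_days / mpg * gas_price
print(f"Gas: ${gas_per_month:.0f}/mo vs. lease: ${lease_per_month:.0f}/mo")
```

With these assumptions the gas bill alone exceeds the lease payment, before counting the trade-in value of the old car or its maintenance costs.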

Now, this “limited range” EV goes about 124 miles on a charge, which is roughly three times the daily mileage she drives on 99% of days. So what’s the problem? The problem is that once-a-season (or really once-a-year) family road trip. You know, the road trip where you’re on the road five, six, maybe ten hours a day. Going from Silicon Valley to Los Angeles, from San Francisco to Lake Tahoe, from Dallas to South Padre Island or from New York City to Cape Cod. “If we can’t do that, it’s not really a usable car.”

Now, setting aside the fact that you can easily rent an ICE car for a few days a year and still come out ahead financially, what the consumer in this case is saying is that range is a key factor: “If I can drive it, I want it to be able to go the distance.”

Range anxiety is THE key factor in EV rejection.

Now consider driving range. Traditional cars are virtually unlimited in range because there is an established refueling infrastructure (i.e. gas stations), but also because the average range for a car on a full gas tank is about 400 miles. How was this number selected? Why isn’t it half, or for that matter, double? The answer is that this number is based on human physiology. After you’ve driven 300–400 miles, probably over 4–6 hours, you definitely need a break — to rest, recharge (in this case with food & drink), and probably get rid of some by-products of your previous recharge. So stopping every 400 miles or less for a meaningful amount of time (30 minutes or more) is virtually guaranteed. In fact, you need it even as a passenger.

Now, regardless of whether these numbers were picked after some rigorous study or just emerged as a best practice, they define consumer expectations and underlie many road-trip plans. Give me an EV that I can drive for 5 hours and then stop for 30 minutes to recharge, and I don’t need a gas or hybrid car. Given the non-linearity of charging, if battery capacity equates to a highway range of ~400 miles and I can get an 80% charge in 30 minutes — assuming there’s a charging station — you hit the numbers consumers need.

Consumers want an EV that can go for 5 hours and then stop for 30 minutes to recharge.
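The drive/charge rhythm above can be sketched as a quick simulation, using the article’s round numbers (400-mile range, 80% charge in a 30-minute stop) plus an assumed 70 mph cruising speed:

```python
# Sketch of the "5 hours driving, 30 minutes charging" road-trip rhythm.
trip_miles = 800          # a long two-day-ish road trip
highway_range = 400       # miles on a full battery (the article's target)
charge_to = 0.8           # a 30-minute stop restores 80% of capacity
stop_hours = 0.5
speed = 70                # mph, assumed average highway speed

hours = 0.0
miles_left = trip_miles
range_left = highway_range       # start with a full battery
while miles_left > 0:
    leg = min(miles_left, range_left)
    hours += leg / speed         # drive until empty (or arrival)
    miles_left -= leg
    range_left -= leg
    if miles_left > 0:           # stop and charge back to 80%
        hours += stop_hours
        range_left = highway_range * charge_to
print(f"{trip_miles}-mile trip: {hours:.1f} hours including charging stops")
```

The charging stops add about an hour over the whole trip — roughly the meal and restroom breaks a gas-car driver takes anyway, which is exactly the point.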

So how far are we from that point, assuming the dimensions and price of the battery ultimately need to be such that the EV is priced similarly to the gas-powered car (or a little higher, given the lower energy & maintenance costs)?

Examine the Tesla numbers as measured by Teslike here. We are at about 200 miles for a 30-minute charge, for a car that costs probably 50–100% more than the average Joe would want to pay. Assuming lower margins (not everyone has to be Tesla) and economies of scale can reduce that premium by half, we’re around the price point needed. All we need now is to… double battery capacity.

Double battery capacity without increasing charge time – and EV sales will skyrocket.

Unfortunately, there is no Moore’s Law for batteries, and it’s not an 18-month wait. But the incredible aggregate demand that EVs, drones, renewable energy grids and IoT represent creates a huge financial opportunity for breakthrough battery technology companies. Companies that can push the envelope towards the 2X goal — especially if they can do so without the need to retrofit entire factories (or gigafactories) for new architectures / chemistries / production methods — stand to be the fulcrums on which a whole industry could turn.

To quote Forbes’ John Frazer, “Batteries are the new oil”, and the companies that upgrade them by 2X will herald the electric future — and mint the new oil barons.



Recent Publications

Given I now split my writing between this blog, Medium, Linkedin and branded publications, here’s a list of links to things published elsewhere:

Why (Most) Bots And Voice Assistants Are Dumb…

What’s wrong with the state-of-the-art and why Dialog Management is the missing layer.

How Personal Assistant AI Works in 7 Minutes

The basic building blocks of products like Siri, Alexa, Cortana and Chris, as well as messenger bots.

Intelligent Agents Will Trump Bots

About going from single-use bots to long-term engagement with intelligent agents.

Messaging, Bots and Corporate Travel: Notes From The Beat Live

Where bots and AI meet the needs of corporate travelers and the TMCs serving them.

iMessage integration in iOS 10, the new user experiences it enables – and how they are superior to the current state of Facebook’s messaging apps.

Mobile Platforms, User Experience

What I Hate About The Apple Watch… and Why It Will Stay On My Wrist

The long and short of it is: I got one two months ago. I got it in order to better understand where this type of wearable device is going — what it enables that wasn’t possible before, and how it will affect our digital life. Like many others, I used to be a bit of a watch aficionado, but let go of my watches many years ago when I realized my cellphone showed the time. I am an early adopter of sorts, but not really a digital junkie. And I’ve been in mobile (professionally) for 16 years now, and have typically seen convergence in this market, not divergence.

Based on this experience — this is my critique as well as my insight (ahem) about where this is going.

So first of all — What is the Apple Watch? I don’t know what the people in Cupertino had in mind, but based on what they delivered, it is really several things.

  • A watch that shows time, date, temperature etc. — Ha!
  • A health / fitness wearable
  • A notification / messaging wearable
  • … and a little tiny iPhone strapped to your wrist, sort-of

The first two categories above are generally well understood, at least by early-adopter consumers. The latter two are newer, and the jury’s still out on their utility / desirability. Now, if you’re going to build something that people understand, you’d better deliver what they expect. So here are pet peeves #1 and #2:

#1: Can I please see the time when I want to?

The Apple Watch’s display is off most of the time, to conserve battery. It uses the accelerometer and some background processing to figure out when it’s being looked at by the wearer. This works pretty well if my arm is extended (e.g. I’m standing up), but fails much too often when my arm is in my lap or on a desk. This is (a) frustrating, and leads to (b) me jiggling the watch all over the place to get the display on, which initially leads the other people in the room to assume I’ve developed a tic (or worse), and often ends with the conversation sidelining to the Apple Watch (hmm…) — but not in a good light. Incidentally, this is especially nagging with Siri interaction, which is supposed to start with a similar hand gesture plus saying “Hey Siri”. Often it will turn off the display while I’m still talking to Siri, because it decides I didn’t mean to speak after all.
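For the curious, wrist-raise detection presumably boils down to watching the gravity vector reported by the accelerometer. Here is a toy sketch of the idea — emphatically not Apple’s actual algorithm — which also shows why an arm lying flat in your lap defeats it: the gravity vector barely rotates, so no amount of intent registers.

```python
# Hypothetical sketch of accelerometer-based "wrist raise" detection.
# (ax, ay, az) is the gravity vector in the watch's frame, in g units.
import math

def looks_raised(ax, ay, az, pitch_threshold_deg=35):
    """Guess whether the watch face is tilted toward the wearer's eyes."""
    # Pitch of the watch face relative to horizontal, from gravity alone.
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    return pitch > pitch_threshold_deg

print(looks_raised(-0.7, 0.0, 0.7))  # arm raised, face tilted toward you: True
print(looks_raised(0.0, 0.0, 1.0))   # arm flat in your lap: False
```

A real implementation would also need the wrist-rotation motion and some hysteresis, which is exactly where the guessing goes wrong mid-Siri-conversation.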

#2: The Heart Rate Monitor Really Sucks

Heart rate monitoring when I’m on the couch is kinda cool for extreme quantified-selfers. Most people want heart rate monitoring when they are actually exercising. More often than not, you will find the Apple Watch showing you some totally irrelevant measurement taken long ago. For instance, look at this photo, taken on a stepper/elliptical at the height of my workout:

This happens at least half the time, and seems to be a software problem rather than a hardware one, because when there is actually a recent measurement, it seems to be very accurate:

These consistent software issues bring me to an overall point that goes beyond the obvious:

#3: A Smart-watch is required to be, well, Smart

All too often there is poor attention to context, and therefore either silly interaction or too much required user interaction. One example is the “stand up” alerts. In keeping with the health-keeper approach, the watch will alert you to stand up every hour… even if you’re obviously in a car moving at 60 mph. It allows you to record your activity, but despite the fact that it measures your heart rate, speed etc., everything is manual — it can’t tell that you’re on a bike (despite your moving at 15 mph with an elevated heart rate), or that your treadmill session is long over (despite your heart rate dropping to 50 and you being 100% stationary). Integration with the Health app on the iPhone isn’t great either — for instance, it will bug you about not exercising despite your entering a 60-minute swimming session in the app manually (and painstakingly).
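The kind of context inference I’m asking for isn’t rocket science. A toy rule-based sketch — with invented thresholds, not anything Apple ships — might look like:

```python
# Toy rule-based activity inference from speed + heart rate.
# Thresholds are illustrative assumptions, not validated values.
def guess_activity(speed_mph, heart_rate_bpm):
    if speed_mph > 25:
        return "driving"                 # too fast to be human-powered
    if 8 <= speed_mph <= 25 and heart_rate_bpm > 110:
        return "cycling"                 # moving fast with elevated HR
    if speed_mph < 1 and heart_rate_bpm < 60:
        return "resting"                 # the treadmill session is over
    if speed_mph < 8 and heart_rate_bpm > 110:
        return "running/walking workout"
    return "unknown"

print(guess_activity(60, 70))    # "driving": don't nag me to stand up
print(guess_activity(15, 140))   # "cycling": start logging the ride
print(guess_activity(0, 50))     # "resting": stop logging the workout
```

Real activity classifiers use richer sensor fusion, but even rules this crude would suppress the stand-up alert at 60 mph.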

#4: A New Computing Paradigm Needs a New UX Paradigm

Moving beyond the basics of a watch-cum-activity-tracker to a new breed of computing device, Apple’s approach to delivering value revolves around snippets of information that are typically pushed to the end user. The combination of Notifications (straight out of the iOS Push mechanism) and Glances (a tiny-screen take on app widgets), alongside haptic alerts, is supposed to provide a better medium for humans to remain hyper-connected without having to constantly stare at a “big” iPhone screen. In theory, that should allow people to be more in tune with their surroundings and the people with them. In practice, it requires the emergence of new user experience practices.

It took years for desktop / web UX designers to master mobile UX, moving from “let’s cram the desktop experience onto a small screen” (and discovering no one wants to use it) to the current-day focus on what’s relevant and usable in a mobile app. Moving from iPhone apps to Watch glances / notifications will require a lot of trial and error before best practices emerge. We are in the early days, where many apps are merely frustrating (e.g. Facebook Messenger — I can receive a message, but the only response I can send is a thumbs-up). This is a topic that probably justifies a separate post. Let’s just say that currently some apps are useful, while many are just there because management said “we must have an Apple Watch app when it launches” and product managers / designers let their inner mediocretin shine (hey, I just invented a new word!).

The incredibly useless Lumosity app

Another under-delivering technology at this stage is haptic alerts (taptics). Having the device strapped to your wrist makes vibrations a great way to draw your attention. But frankly, I was hoping to get more than a binary “Yo”. Case in point — navigation. I ride a motorcycle, and I was really hoping I could use Apple Maps navigation as a gentle “GPS on your wrist” without looking at or listening to it. But for the life of me, I can’t figure out when it says “go left” (three taps?) and when it says “go right” (a series of angry buzzes?).

So Why Can’t I Leave Home Without It?

In truth, this is hard for me to qualify, but three weeks into the experience I found myself leaving home without it one day and feeling, well, naked.

For one, the Apple Watch grows on you. You get used to getting the time without getting out your phone, Siri-on-your-wrist makes a lot of sense (especially in the car), etc. etc.

Maybe even more salient is how lazy we are. I found myself preferring to check some info on the watch rather than on the phone, because the watch was strapped to my wrist whereas the phone was all the way at the other end of the coffee table, requiring the considerable effort of stretching out, reaching over and clicking a button. This is not unlike the reason we all do email on the iPhone even at home, or at our desks, despite our perfectly good laptops being in the next room or even right in front of us.

And then there’s the eco-system. The Apple Watch is useful out of the box because it syncs with your iPhone, iPad etc. And while a lot about that eco-system is imperfect from a software perspective, it’s still the most complete one out there. Which makes things even more convenient by saving you the hassle of loading it up with stuff, setting things up, etc. Did I mention people don’t like hassle?

So while the current Apple Watch is definitely a version 1, and while Apple’s software people (mostly) have a lot of work to do, if there’s one thing I’ve learned about consumer tech over the last 15 years, it is that if something new is more convenient for people, then (most) other things being equal, they will easily get used to it and not be able to go back to the old ways. The Apple Watch makes some things more convenient and accessible, and as some of these are things we already do habitually, I believe it is here to stay.

Mobile Platforms, User Experience

Cortana Opens Up Where Siri Remains a Recluse

A Big Step Forward that Leaves Much To Be Desired

Cortana in Halo 4

Given Apple’s and Google’s dominance, not many of us follow Microsoft news anymore. But instead of coming apart at the seams, Microsoft looks to be adopting the only credible strategy — trying to out-innovate its competition to the point where it becomes a leader again. Signs of success are visible with Azure becoming the most credible competition to AWS, and it seems like some of its artificial intelligence efforts are just as ambitious. Against that backdrop, the recent Cortana / Windows Speech Platform developments are steps in the right direction.

App vs. Platform

Back in September 2012, ahead of the iPhone 5 / iOS 6 launch, we were trying to predict Apple’s next move. Siri had launched a year earlier on the iPhone 4S, and our wager at the time (at Desti / SRI) was that iOS 6 would open Siri-as-a-platform, allowing application developers to tie their offerings into the speech-driven UX paradigm and bringing speech interaction to critical mass. Guess what — years later, Siri is still a limited, closed service, and even a Google Now API is still a rumor. So Microsoft’s announcements last week are a breath of fresh air and potentially a strategic move. In a nutshell, here are the main points of what was announced (and here’s a link to the lecture at //build/):

  • Cortana available on all Windows platforms
  • 3rd party apps can extend Cortana by “registering” to respond to requests, e.g. “Tell my group on Slack that we will meet 30 minutes later”
  • Requests can be handled in the app, or the app can interact using Cortana’s dialog UI

Extending Cortana to Windows 10 is an important step towards making voice interaction with computers mainstream. Making Cortana pluggable turns it into a platform that can hope to be pervasive through a network effect. However – what was announced leaves much to be desired with regards to both platform strategy and platform capabilities.

Cortana API: Speech without Natural Language is an Unfinished Bridge

I’m a frequent user of Siri. There are simply many situations where immediate, hands-free action is the quickest / safest way to get some help or to record some information. One of Siri’s biggest issues in such situations is its linear behavior — once it goes down a path, it’s very hard to correct and go down another. Consider for instance searching for gas stations while you’re driving down a highway: you get a list of stations, and then it kind of cycles through them by order of distance (which is not very helpful if you’ve already passed something). But going back and forth in that list (“show me the previous one”) or adding something to your intent (“show me the one near the airport”) is impossible. So often you end up going back to tapping and typing. That’s where a more powerful natural-language-understanding platform is needed, e.g. SRI’s VPA, or potentially (now owned by Facebook).

Cortana’s API allows you to create rudimentary grammars where you more or less need to literally specify the exact sentences your app should understand, with rudimentary capabilities to describe sentence templates. There is no real notion of synonyms, pursuing intent completion (i.e. “filling all the mandatory fields in the form”), going back to change something, etc. So this is more or less an IVR-specification platform, and we all know how we love IVRs, right? If you want to do more, the app can get the text and “parse it” itself. That means that every app developer who wants to go beyond the IVR model needs to learn how to build a natural-language-understanding system. That’s not how platforms work, and it will not support the proliferation of this mode of interaction — crucial for making Cortana a strategic asset.

Now arguably you could say — well, maybe they never saw it as a strategic asset, maybe they were just toeing the line set by Apple and Google. That, however, would be a missed opportunity.
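To make the “sentence template” limitation concrete, here is a toy illustration of that style of grammar. This is my own sketch of the general idea, not Microsoft’s actual voice-command definition format: every utterance must match a literal pattern with named slots, and anything off-template simply falls through.

```python
# Toy "sentence template" grammar: literal patterns with named slots.
import re

TEMPLATES = [
    (r"tell my group on (?P<channel>\w+) that (?P<message>.+)", "send_message"),
    (r"show me the (?P<which>previous|next) one", "navigate_results"),
]

def parse(utterance):
    for pattern, intent in TEMPLATES:
        m = re.fullmatch(pattern, utterance.lower())
        if m:
            return intent, m.groupdict()
    return None, {}  # off-template utterances fall through, IVR-style

print(parse("Tell my group on Slack that we will meet 30 minutes later"))
# A trivial paraphrase already fails -- there is no notion of synonyms:
print(parse("Inform my team on Slack that we will meet 30 minutes later"))
```

A real NLU platform maps both phrasings to the same intent and asks follow-up questions for missing slots; a template grammar can do neither, which is the gap the post is pointing at.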

Speech-enabling Things is a Credible Platform Strategy

The Internet of Things is coming, and it is going to be an all-encompassing experience — after all, we are surrounded by things. For many reasons, these things will not all come from the same company. A company that owns a meaningful part of the experience of these things and makes them dependent on its platform — for UI, for personal data, for connectivity, etc. — would own the user experience for so much of the user’s world. In other words, give these device makers a standardized, integrated interaction platform for their devices, and you own billions of consumers’ lives.

Cortana in the cloud can be a (front-end to a) platform that 3rd-party developers use to speech-enable interactions with devices — whether they make the devices (e.g. the wearable camera that needs to upload images taken) or the experiences that use them (e.g. activating Pandora on your wireless speaker). Give these app / device developers a way to create this experience and connect it to the user’s personal profile (which he/she already accesses through their laptop, smartphone, tablet, etc.) and you become the glue that holds the world together. This type of software-driven platform play is exactly the strategy Microsoft excelled at for so many years.

To be an element of such a strategy, Cortana needs to be a cloud service. Not just a service available across Windows devices, but a cloud-based platform-as-a-service that can integrate with non-Windows Things. That can be part of a wider strategy of IoT-focused platform-as-a-service (for instance, connecting your things to your personal profile, so they can recognize you and interact in a personalized context), but mostly it needs to be Damn Good. Because Google is coming. Building a platform ecosystem and then sucking it for all it’s worth used to be Microsoft’s forte. Cortana in the cloud, as a strong NLU and speech platform, could be an important element of its comeback strategy.


Google Glass from the Subject’s Perspective

Last week I had the honor and pleasure of being the first ever subject of a press interview conducted using Google Glass – followed up by a very interesting discussion with Robert Scoble. Here are some of the insights we discussed, as well as some subsequent ones.


Photography and Video will be impacted First

Consider how phone-based cameras have changed photography. My eldest daughter is almost 9 years old. We have a few hundred images of her first year, and about 10 short videos. My son is now 18 months old, and as my wife was preparing his first scrapbook album last week, she browsed through several thousand digital photos. On my phone alone, I have dozens of video clips of him doing everything you can imagine a baby doing and some things you probably shouldn’t. The reason is simple – we had our smartphones with us, and they take good photos and store them. And should I mention Instagram?

Google Glass takes this to the extreme. With your smartphone you actually have to reach for your pocket / bag, click the phone app, point and shoot. Google Glass is always there, immediately available, always focused on your subject, and hands-free. Video photography through Google Glass is vastly superior for the simple reason that your head is the most stable organ in your body. What all of this comes down to is simply that people will be shooting stills and video all the time. Have you seen those great GoPro clips? Now consider having a GoPro camera on you, ready and available – perpetually. There will not just be a whole new influx of images and video, but new applications for these too. Think Google Street View everywhere, because the mere fact that a human looked somewhere means it’s recorded on some server. In the forest, in your house, and in your bathroom. Not sure about the latter? Check out Scoble’s latest adventures…

Useful Augmented Reality – Less will be more

Having information overlaid on top of your worldview is probably the sexiest feature from the perspective of us geeks. The promise of Terminator-vision / fighter-pilot displays provides an instant rush of blood to the head. And surely overlaying all of the great Google Places info on places, Facebook (well – Google+) info on people, and Google Goggles info on things – will be awesome, right?

Well, my perspective is a little different. After the initial wow effect, most of these will be unwanted distractions. Simply put – too many signals become noise, especially where human perception is concerned. This lesson has already been learned with similar systems in aerospace settings – and there the user is a carefully selected, highly trained individual, not an average consumer.

The art and science will be figuring out which of the hundreds of subjects visible is actually interesting enough to “augment”. This will require not just much better and faster computer vision (hard!) but much better and deeper understanding of these subjects – which ones are really special for me, given the context of what I’m doing, what makes them so, and when to actually highlight them. Give me too much signal and I will simply tune out, or simply – take the damn thing off.

Achieving this requires a deeper understanding both of the world and of the individual. Deeper, more detailed POI databases (for places), product databases (for objects), and more contextual information about the people around me, what their contexts are – and what is mine. It is almost surprising to what degree this capability is non-existent today.

Initially – Vertical Applications Will be Key

Consider the discussion of video photography above. Now put Google Glasses on every policeman and consider the utility of simply recording every interaction these people have with the public. Put Google Glasses on every trainee driver and have them de-brief using the recorded video. Or just take it with you to your next classroom. Trivial capabilities like being able to tag an interesting point in time and immediately go back to it when you re-play – how useful is that?

And considering augmented reality – think of simple logistic applications, like searching a warehouse, where the objects are tagged with some kind of QR code, and a simple scan with your eyes gives you a visual cue of where they are. The simple applications will deliver immense value, drive adoption and experience, and through those – curiosity and new, further-reaching ideas.
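The warehouse idea above is really just a lookup: a glance resolves a machine-readable tag to a location. A toy sketch, with the tag IDs, the index, and the `locate` helper all invented for illustration:

```python
# Illustrative sketch of the warehouse AR idea: objects carry tags
# (e.g. QR codes), and scanning a tag resolves it to a shelf location
# that the display can overlay as a visual cue. All data is made up.

WAREHOUSE_INDEX = {
    "QR-0001": {"item": "router", "aisle": 4, "shelf": "B"},
    "QR-0002": {"item": "cable spool", "aisle": 7, "shelf": "A"},
}

def locate(tag_id):
    """Return a visual-cue string for a scanned tag, or None if unknown."""
    entry = WAREHOUSE_INDEX.get(tag_id)
    if entry is None:
        return None
    return f"{entry['item']}: aisle {entry['aisle']}, shelf {entry['shelf']}"

print(locate("QR-0001"))  # router: aisle 4, shelf B
```

The hard part of the consumer augmented-reality problem – deciding what is worth highlighting – simply disappears here, which is exactly why the vertical applications will come first.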

And if you stuck around this long – here are my most amazing revelations:

  • Wearing Google Glass grows your facial hair!


[Photos: Sergey Brin, Robert Scoble, and Tim wearing Google Glass]

  • Google Glass video makes you photogenic – watch Scoble’s interview of me and compare to my usual ugliness…

The Case for Siri

Since Siri’s public debut as a key iPhone feature 18 months ago, I keep getting involved in conversations (read: heated arguments) with friends and colleagues, debating whether Siri is the 2nd coming or the reason Apple stock lost 30%. I figure it’d be more efficient to just write some of this stuff down…


Due Disclosure:

I run Desti, an SRI International spin-out that utilizes post-Siri technology. However, despite some catchy headlines, Desti is not “Siri for Travel”, nor do I have any vested interest in Siri’s success. What Desti is, however, is the world’s most awesome semantic search engine for travel, and that does provide me some perspective on the technology.

Oh, and by the way, I confess, I’m a Siri addict.

Siri is great. Honest.

The combination of being very busy and very forgetful means there are at least 20 important things that go through my mind every day and get lost. Not forever – just enough to stump me a few days later. Having an assistant at my fingertips that allows me to do some things – typically set a reminder, or send an immediate message to someone – makes a huge difference in my productivity. The typical use-case for me is driving or walking, realizing there is something I forgot, or thinking up a great new idea and knowing that I will forget all about it by the time I reach my destination. These are linear use cases, where the action only has a few steps (e.g. set a reminder, with given text, at a given time) and Siri’s advantage is simply that it allows me to manipulate my iPhone immediately, hands-free, and complete the action in seconds. I also use Siri for local search, web search and driving directions.

Voice command on steroids – is that all it is?

Frankly – yes. When Siri made its public debut as an independent company, it was integrated with many 3rd party services; these were scrapped and replaced with deep integration with the iPhone platform when Apple re-launched it. Despite my deep frustration with Siri not booking hotels these days, for instance (not), I think the decision to do one thing really well – provide a hands-free interface to core smartphone functionality (we used to call it PIM, back in the day) – was the right way to go. Done well, and marketed well, this makes the smartphone a much stronger tool.

But I hate Siri. It doesn’t understand Scottish and it doesn’t tell John Malkovich good jokes

As mentioned, I’ve run into a lot of Siri-bashers in the last year. Generally they break down into two groups: the people who say Siri never understands them, and the people who say Siri is stupid. I’m going to discuss the speech recognition story in a minute (SRI spin-out, right?), but regarding the latter point I have to say two things. First, most people don’t really know what the “right” use-cases for Siri are. Somewhere between questionable marketing decisions and too little built-in tutorial, I find that people’s expectations of Siri are often closer to a “talking replacement for Google, Wikipedia and the bible” than to what Siri really is. That is a shame, because the bottom line is that it is under-appreciated by many people who could really put it to good use. Apple marketing is great, but it’s better at drawing a grand vision than it is at explaining specific features (did I mention my loss on my AAPL?). While the Siri team has done great work at giving Siri a character, at the end of the day it should be a tool, not an entertainment app (my 8-year old daughter begs to differ, though).

OK, but it still doesn’t understand ME

First, let me explain what Siri is. Siri is NOT voice-recognition software. Apple licenses that capability from Nuance. Siri is a system that takes voice recognition output – “natural language” – figures out what the intent is (e.g. send an email), then goes through a certain conversational workflow to collect the info needed to complete that intent. Natural language understanding is a hard problem, and weaving multiple possible intents with all the possible different flows is complex. It is hard because there is a multitude of ways for people to express the same intent, and errors in the speech recognition add complexity. Siri is the first such system to do it well, and certainly the first one to do it well on such a massive scale.
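The pipeline described above – recognized text in, intent out, then a dialog that collects what’s missing – can be sketched in a few lines. To be clear, this is a toy illustration of the general architecture, not Siri’s actual implementation; the intent names, keywords, and slot lists are all invented:

```python
# Toy sketch of an intent + slot-filling pipeline: ASR text comes in,
# an intent is guessed from keywords, then the dialog manager asks for
# whichever slots are still missing. All intents/slots are invented.

INTENTS = {
    "send_email":   {"slots": ["recipient", "body"],
                     "keywords": ["email", "mail"]},
    "set_reminder": {"slots": ["time", "text"],
                     "keywords": ["remind", "reminder"]},
}

def detect_intent(utterance):
    """Naive keyword-based intent detection on recognized text."""
    words = utterance.lower()
    for name, spec in INTENTS.items():
        if any(k in words for k in spec["keywords"]):
            return name
    return None

def missing_slots(intent, filled):
    # A real dialog manager would prompt for each of these in turn.
    return [s for s in INTENTS[intent]["slots"] if s not in filled]

intent = detect_intent("send an email to Norman")
print(intent)                                          # send_email
print(missing_slots(intent, {"recipient": "Norman"}))  # ['body']
```

The real difficulty, as noted above, is that people phrase the same intent in countless ways and the ASR output is noisy – which is why a keyword match like this is a sketch, and Siri’s statistical understanding is hard.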

So what? If it doesn’t understand what I said, it doesn’t help me.

That is absolutely true. If speech is not recognized – garbage in, garbage out. Personally I find that despite my accent Siri usually works well for me, unless I’m expressing foreign names, or there is significant ambient noise (unfortunately, we don’t all drive Teslas). There are however some design flaws that do seem to repeat themselves.

In order to improve the success rate of the automatic speech recognizer (ASR), Siri seems to communicate your address book to it. So names that appear in your address book are likely to be understood, despite the fact that they may be very rare words in general. However, this is often overdone, and these names start dominating the ASR output. One problem seems to be that Nuance uses the first and last names as separate words, so every so often I will get “I do not know who Norman Gordon is” because I have a Norman Winarsky and a Noam Gordon as contacts. I believe I see a similar flaw when words from one possible intent’s domain (e.g. sending an email) are recognized mistakenly when Siri already knows I’m doing something else (e.g. looking at movie listings).

This probably says something about the integration between the Nuance ASR and Apple’s Siri software. It looks like there is off-line integration – as in transferring my contacts’ names a-priori, but no real-time integration – in this case Siri telling the ASR that “Norman Gordon” is not a likely result. Such integration between the ASR and the natural language understanding software is possible, but often complex not just for technical reasons but for organizational reasons. It requires very close integration that is hard to achieve between separate companies.
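The first-name / last-name flaw described above is easy to illustrate. In this toy sketch (the contacts and the matching functions are invented, and this is of course a caricature of what the real ASR does), treating names as independent words lets the recognizer assemble a “phantom contact” that real-time validation against whole contact entries would reject:

```python
# Toy illustration of the contact-biasing flaw: first and last names
# treated as independent words can be recombined into a name that
# belongs to no actual contact. Contacts and logic are invented.

CONTACTS = ["Norman Winarsky", "Noam Gordon"]

def word_level_match(candidate):
    # Flawed approach: accept the name if each word appears in SOME contact.
    contact_words = {w for c in CONTACTS for w in c.split()}
    return all(w in contact_words for w in candidate.split())

def full_name_match(candidate):
    # Integrated approach: the NLU layer validates whole contact names,
    # feeding back to the ASR that "Norman Gordon" is not a likely result.
    return candidate in CONTACTS

print(word_level_match("Norman Gordon"))  # True  (the bug: a phantom contact)
print(full_name_match("Norman Gordon"))   # False (rejected, as it should be)
```

The second function stands in for the kind of real-time ASR–NLU feedback loop discussed above – technically possible, but organizationally hard across two companies.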

So when will it get better?

It will get better. Because it has to. Speech control is here to stay – in smartphones as well as TVs, cars and most other consumer electronics. ASRs are getting better, mostly for one reason. ASRs are trained by listening to people, and the biggest hurdle is how much training data they have. In the early days of ASRs, decades ago, this consisted of “listening” to news commentators – people with perfect diction and accent, in a perfect environment. In the last year, more speech sample data was collected through apps like Siri than probably in the two decades prior, and this data is (can be?) tagged with location, context and user information, and is being fed back into these systems to train them. And since this explanation was borrowed from Adam Cheyer, Siri’s co-founder and formerly Siri’s Engineering Director at Apple – you’d better believe it. We are nearing an inflection point, where great speech recognition is as pervasive as internet access.

So will Siri then do everything?

That’s actually not something I believe will happen as such. Siri is a user interface platform that has been integrated with key phone features and several web services. But to assume it will be the front-end to everything is almost analogous to assuming Apple will write all of the iOS apps. That is clearly not the case.

However – Siri as a gateway to 3rd party apps, as an API that allows other apps that need the hands-free, speech-driven UI to integrate into this user interface, could be really revolutionary. Granted – app developers will have to learn a few new tricks, like managing ontologies, resolving ambiguity, and generally designing natural language user experiences. Apple will need to build methodology and instruct iOS developers, and frankly this is a tad more complex than putting UI elements on the screen. Also I have no idea whether Siri was built as a platform this way, and can dynamically manage new intents, plugging them in and out as apps are installed or removed. But when it does, it enables a world where Siri can learn to do anything – and each thing it “learns”, it learns from a company that excels at doing it, because that is that third party’s core business.
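What might “Siri as a platform” look like to an app developer? A hedged sketch – the `AssistantPlatform` class and its registration API are entirely hypothetical (I have no idea how, or whether, Siri was built this way), but it shows the key property discussed above: intents plugged in and out dynamically as apps are installed or removed:

```python
# Hypothetical sketch of a speech assistant as a platform: 3rd-party
# apps register intents on install and are unregistered on removal,
# and utterances are routed to whichever app claims a matching intent.

class AssistantPlatform:
    def __init__(self):
        self.handlers = {}  # trigger phrase -> (app_name, handler)

    def register_intent(self, app_name, trigger, handler):
        # Called when an app is installed and declares an intent.
        self.handlers[trigger] = (app_name, handler)

    def unregister_app(self, app_name):
        # Called when the app is uninstalled.
        self.handlers = {t: v for t, v in self.handlers.items()
                         if v[0] != app_name}

    def route(self, utterance):
        for trigger, (app, handler) in self.handlers.items():
            if trigger in utterance.lower():
                return handler(utterance)
        return "Sorry, I can't help with that."

platform = AssistantPlatform()
platform.register_intent("HotelApp", "book a hotel",
                         lambda u: "HotelApp: searching hotels...")
print(platform.route("Can you book a hotel in Austin?"))
# HotelApp: searching hotels...
```

Each handler here stands in for a company that excels at its own domain – which is exactly why the platform, not any single app, would be revolutionary. The real version, of course, involves ontologies and ambiguity resolution rather than trigger phrases.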

… and then, maybe, a great jammy dodger bakery chain can solve the wee problem with Scotland with a Siri-enabled app.

Oh, and by the way – you can learn more about Siri, speech, semantic stuff and AI in general at my upcoming SXSW 2013 Panel – How AI is improving User Experiences. So come on, it will be fun.