Uncategorized

The Real Tipping Point For Electric Cars: A Consumer’s View

Electrification is an inevitable trend, whether driven by the need to avert climate change, by governments wanting to boost a future industry, or by the many advantages of electric propulsion: simplicity, cheap maintenance, torque. With autonomy generally accepted as something that will take much longer to deliver at scale, the automotive industry's focus is turning back to electrification as the disruptive wave that you either ride or get drowned by. But when is it coming? And what will be the inflection point?

Electrification is THE disruptive theme in the automotive industry in the coming decade.


Despite all the Tesla press and the announcements by leading automakers that they are switching focus to electric vehicles – for instance VW's announcement last month and Daimler's farewell to IC engines earlier this year – the fact remains that EV sales represent a small single-digit share of the US vehicle market: according to EV Adoption, less than 2% of sales in 2018, growing 50% year over year. While this rate of growth is significant, we are still looking at decades before internal combustion cars are eliminated. So where is the "iPhone moment" for this industry, and what will bring us there?

A prevailing opinion among industry experts is that it's all about price: get EVs to price parity with traditional gas-powered cars and people will choose them. Bain & Co., in a recent article, see not one but two distinct tipping points for EV adoption – one when the total cost of ownership of an EV drops below that of an ICE-powered car, and another when the purchase price of an EV drops below that of one. Since we're already past the first point according to their study (fuel and maintenance costs for EVs are already significantly lower, reducing TCO over a car's lifetime), a blunt way to express their point is: "consumers are too stupid to realize that EVs cost less to own, so we will have to wait until EVs also cost less to buy." (Not being employed by Bain, I get to be direct…)

A recent experience reminded me that consumers are not that stupid; rather, they have multiple valid considerations that need to be taken into account.

On Black Friday 2019, Hyundai USA was trying to offload its remaining stock of the 2019-model Hyundai Ioniq EV in anticipation of the upgraded 2020 model's launch. The Ioniq was offered with a low down payment and a $119/month lease. If you are a daily commuter, chances are you spend much more than $119/month on gas, so effectively you'd be getting a new car for free or better (assuming your old junk heap was worth a couple thousand dollars). I turned to my ex-wife and suggested this great deal as a way to save money and upgrade her car. Her response: "a limited-range EV cannot be a primary family car."
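Purely as a back-of-the-envelope illustration (the mileage, fuel economy and energy prices below are my assumptions, not figures from the offer), the commuter math looks roughly like this:

```python
# Rough monthly cost comparison for a commuter considering a $119/month EV lease.
# All inputs are assumed, illustrative numbers - adjust to your own situation.
miles_per_month = 1200            # ~14,000 miles/year commuter (assumption)
mpg = 25                          # the old gas car (assumption)
gas_price = 3.75                  # $/gallon, roughly California 2019 (assumption)
electricity_price = 0.18          # $/kWh (assumption)
ev_miles_per_kwh = 4              # efficiency class of a compact EV (assumption)

monthly_gas = miles_per_month / mpg * gas_price                      # ~$180
monthly_electricity = miles_per_month / ev_miles_per_kwh * electricity_price  # ~$54
lease = 119

print(f"Gas for the old car:     ~${monthly_gas:.0f}/month")
print(f"Lease + electricity:     ~${lease + monthly_electricity:.0f}/month")
print(f"Net change in spending:  ~${lease + monthly_electricity - monthly_gas:.0f}/month")
```

With those assumptions, the avoided gas roughly covers the lease payment plus charging, which is the "free or better" point above.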

Now, this "limited range" EV goes about 124 miles on a charge, which is roughly three times the daily mileage she drives on 99% of days. So what's the problem? The problem is that once-a-season (or really once-a-year) family road trip. You know, the road trip where you're on the road five, six, maybe ten hours a day – going from Silicon Valley to Los Angeles, from San Francisco to Lake Tahoe, from Dallas to South Padre Island, or from New York City to Cape Cod. "If we can't do that, it's not really a usable car."

Now, setting aside the fact that you can easily rent an ICE car for a few days a year and still come out ahead financially, what the consumer in this case is saying is that range is a key factor: "If I can drive it, I want it to be able to go the distance."

Range anxiety is THE key factor in EV rejection.

Now consider driving range. Traditional cars are virtually unlimited in range because there is an established refueling infrastructure (i.e. gas stations), but also because the average range for a car on a full gas tank is about 400 miles. How was this number selected? Why isn't it half, or for that matter, why isn't it double? The answer is simply that this number is based on human physiology. After you've driven 300-400 miles, probably over 4-6 hours, you definitely need a break – to rest, recharge (in this case with food and drink), and probably get rid of some by-products of your previous recharge. So stopping every 400 miles or less for a meaningful amount of time (30 minutes or more) is virtually guaranteed. In fact, you need it even as a passenger.

Now, regardless of whether these numbers were picked after some rigorous study or just emerged as a best practice, they define consumer expectations and underlie many road-trip plans. Give me an EV that I can drive for 5 hours and then stop for 30 minutes to recharge, and I don't need a gas or hybrid car. Given the non-linearity of charging, if battery capacity equates to a highway range of ~400 miles and I can get an 80% charge in 30 minutes – and assuming there's a charging station where I need it – you hit the numbers consumers need.

Consumers want an EV that can go for 5 hours and then stop for 30 minutes to recharge.
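Putting rough numbers on that target: the highway consumption figure below is an assumption for a mid-size EV, not a measured spec, so treat this as a sketch rather than a spec sheet.

```python
# Back-of-the-envelope check of the "5 hours driving / 30 minutes charging" target.
highway_wh_per_mile = 300          # assumed consumption at highway speed
target_range_miles = 400           # the "gas tank" benchmark from the text

pack_kwh = target_range_miles * highway_wh_per_mile / 1000          # ~120 kWh
energy_for_80pct_kwh = 0.8 * pack_kwh                               # ~96 kWh
avg_charge_power_kw = energy_for_80pct_kwh / 0.5                    # delivered in 30 min

print(f"Pack size:                      ~{pack_kwh:.0f} kWh")
print(f"Energy for an 80% top-up:       ~{energy_for_80pct_kwh:.0f} kWh")
print(f"Average charging power needed:  ~{avg_charge_power_kw:.0f} kW")
```

Roughly a 120 kWh pack charged at just under 200 kW on average, which is why both bigger batteries and faster charging matter.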

So how far are we from that point, assuming the dimensions and price of the battery ultimately need to be such that the EV is priced similarly to a gas-powered car (or a little higher, given the lower energy and maintenance costs)?

Examine the Tesla numbers as measured by Teslike here. We are at about 200 miles of range for a 30-minute charge, in a car that costs probably 50-100% more than the average Joe would want to pay. Assuming lower margins (not everyone has to be Tesla) and economies of scale can reduce that premium by half, we're around the price point needed. All we need now is to… double battery capacity.

Double battery capacity without increasing charge time – and EV sales will skyrocket.

Unfortunately, there is no Moore's Law for batteries, and it's not an 18-month wait. But the incredible aggregate demand that EVs, drones, renewable energy grids and IoT represent creates a huge financial opportunity for breakthrough battery technology companies. Companies that can push the envelope towards the 2X goal – especially if they can do so without the need to retrofit entire factories (or gigafactories) for new architectures, chemistries or production methods – stand to be the fulcrums on which a whole industry could turn.

To quote Forbes' John Frazer, "batteries are the new oil", and the companies that improve them by 2X will herald the electric future – and mint the new oil barons.

 

Mobility / Automotive

Personal Mobility Devices Are The Real Disruption in Urban Mobility

A treatise on how electric scooters/bikes/boards are going through an iPhone moment, and why it matters:

… Published on LinkedIn here: https://www.linkedin.com/pulse/forget-autonomous-cars-personal-mobility-devices-real-nadav-gur/


 

Uncategorized

Recent Publications

Given that I now split my writing between this blog, Medium, LinkedIn and branded publications, here's a list of links to things published elsewhere:

Why (Most) Bots And Voice Assistants Are Dumb…

What’s wrong with the state-of-the-art and why Dialog Management is the missing layer.

How Personal Assistant AI Works in 7 Minutes

The basic building blocks of products like Siri, Alexa, Cortana and Chris, as well as messenger bots.

Intelligent Agents Will Trump Bots

About going from single-use bots to long-term engagement with intelligent agents.

Messaging, Bots and Corporate Travel: Notes From The Beat Live

Where bots and AI meet the needs of corporate travelers and the TMCs serving them.

iMessage integration in iOS 10, the new user experiences it enables – and how they are superior to the current state of Facebook’s messaging apps.

Mobile Platforms, User Experience

What I Hate About The Apple Watch… and Why It Will Stay On My Wrist

The long and short of it is: I got one two months ago, in order to better understand where this type of wearable device is going – what it enables that wasn't possible before and how it will affect our digital life. Like many others, I used to be a bit of a watch aficionado but let go of my watches many years ago when I realized my cellphone showed the time. I am an early adopter of sorts but not really a digital junkie. And I've been in mobile (professionally) for 16 years now, and have typically seen convergence in this market, not divergence.

Based on this experience, here is my critique, as well as my insight (ahem) about where this is going.

So first of all — What is the Apple Watch? I don’t know what the people in Cupertino had in mind, but based on what they delivered, it is really several things.

  • A watch that shows time, date, temperature etc. — Ha!
  • A health / fitness wearable
  • A notification / messaging wearable
  • … and a little tiny iPhone strapped to your wrist, sort-of

The first two categories above are generally well understood, at least by early-adopter consumers. The latter two are newer, and the jury's still out on their utility / desirability. Now, if you're going to build something that people understand, you'd better deliver what they expect. So here are pet peeves #1 and #2:

#1: Can I please see the time when I want to?

The Apple Watch's display is off most of the time to conserve battery. It uses the accelerometer and some background processing to figure out when it's being looked at by the wearer. This works pretty well if my arm is extended (e.g. I'm standing up), but fails much too often when my arm is in my lap or on a desk. This is (a) frustrating, and leads to (b) me jiggling the watch all over the place to get the display on, which initially leads the other people in the room to assume I've developed a tic (or worse), and often ends with the conversation sidetracking to the Apple Watch (hmm…), but not in a good light. Incidentally, this is especially nagging with Siri interaction, which is supposed to start with a similar hand gesture plus saying "Hey Siri". Often the watch will turn off the display while I'm still talking to Siri, because it decides I didn't mean to speak after all.
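For a rough sense of why that happens, here is a hypothetical sketch of the kind of heuristic a wrist-raise detector might use. This is not Apple's algorithm, and the thresholds are made up; it just illustrates why a detector like this works when the arm swings up from your side but has little to go on when the wrist is already resting in your lap.

```python
import math

def looks_like_wrist_raise(samples, motion_threshold_g=0.2, tilt_threshold_deg=30):
    """Hypothetical wrist-raise heuristic.

    samples: recent accelerometer readings as (x, y, z) tuples in g, oldest first.
    Returns True only if there was a burst of motion AND the face ended up tilted
    toward the wearer.
    """
    # 1. Motion burst: the arm swinging up produces a clear change in acceleration.
    jerks = [max(abs(b[i] - a[i]) for i in range(3))
             for a, b in zip(samples, samples[1:])]
    had_motion_burst = max(jerks, default=0.0) > motion_threshold_g

    # 2. Final orientation: the watch face should be pitched up toward the eyes.
    x, y, z = samples[-1]
    pitch_deg = math.degrees(math.atan2(z, math.sqrt(x * x + y * y)))
    face_toward_wearer = pitch_deg > tilt_threshold_deg

    return had_motion_burst and face_toward_wearer
```

With the wrist already flat in a lap or on a desk, there is no motion burst and the tilt barely changes, so a detector along these lines has nothing to trigger on – which matches the behavior described above.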

#2: The Heart Rate Monitor Really Sucks

Heart rate monitoring when I'm on the couch is kinda cool for extreme quantified-selfers. Most people want heart rate monitoring when they are really exercising. More often than not, you will find the Apple Watch showing you some totally irrelevant measurement taken long ago. For instance, look at this photo, taken on a stepper/elliptical at the height of my workout:

This happens at least half the time, and seems to be a software problem rather than a hardware one, because when there is actually a recent measurement, it seems to be very accurate:

These consistent software issues bring me to an overall point that goes beyond the obvious:

#3: A Smart-Watch Is Required to Be, Well, Smart

All too often there is poor attention to context, and therefore either silly interaction or too much user interaction is required. One example is the "stand up" alerts: in keeping with the health-keeper approach, the watch will alert you to stand up every hour… even if you're obviously in a car moving at 60 mph. It allows you to record your activity, but despite the fact that it measures your heart rate, speed etc., everything is manual – it can't tell that you're on a bike (despite your moving at 15 mph with an elevated heart rate), or that your treadmill session is long over (despite your heart rate dropping to 50 and you being 100% stationary). Integration with the Health app on the iPhone isn't great either; for instance, it will bug you about not exercising despite your entering a 60-minute swimming session in the app manually (and painstakingly).
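To make the point concrete, here is a hypothetical sketch (made-up thresholds, obviously not watchOS code) of the kind of cheap context checks the watch already has the sensor data for:

```python
def should_send_stand_alert(speed_mph, heart_rate_bpm, minutes_since_last_stand):
    """Hypothetical context gate for the hourly 'stand up' nudge."""
    likely_driving = speed_mph > 25          # nobody stands up at 60 mph
    likely_exercising = heart_rate_bpm > 110 # already active, no nudge needed
    if likely_driving or likely_exercising:
        return False
    return minutes_since_last_stand >= 50

def guess_activity(speed_mph, heart_rate_bpm):
    """Hypothetical auto-detection of the workout the user forgot to log."""
    if speed_mph > 25:
        return "driving"
    if 8 <= speed_mph <= 25 and heart_rate_bpm > 120:
        return "cycling"
    if speed_mph < 1 and heart_rate_bpm < 70:
        return "resting"
    return "unknown"
```

Nothing here requires new hardware; it is the kind of inference the device's existing sensors could already support.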

#4: A New Computing Paradigm Needs a New UX Paradigm

Moving beyond the basics of a watch-cum-activity-tracker to a new breed of computing device, Apple's approach to delivering value revolves around snippets of information that are typically pushed to the end user. The combination of Notifications (straight out of the iOS push mechanism) and Glances (a tiny-screen take on app widgets), alongside haptic alerts, is supposed to provide a better medium for humans to remain hyper-connected without having to constantly stare at a "big" iPhone screen. In theory, that should allow people to be more in tune with their surroundings and the people with them. In practice, it requires the emergence of new user-experience practices.

It took years for desktop / web UX designers to master mobile UX, moving from “let’s cram the desktop experience onto a small screen” (and discovering no one wants to use it), to the current-day focus on what’s relevant and usable in a mobile app. Moving from iPhone apps to Watch glances / notifications will require a lot of trial and error before best practices emerge. We are in the early days where many apps are merely frustrating (e.g. Facebook Messenger — I can receive a message but the only response I can send is thumbs up). This is a topic that probably justifies a separate post. Let’s just say that currently some apps are useful, many are just there because management said “we must have an Apple Watch app when it launches” and product managers / designers let their inner mediocretin shine (hey I just invented a new word!).

The incredibly useless Lumosity app

Another under-delivering technology at this stage is haptic alerts (taptics). Having the device strapped to your wrist makes vibrations a great way to draw your attention. But frankly, I was hoping to be able to get more than a binary "Yo". Case in point – navigation. I ride a motorcycle, and I was really hoping that I could use Apple Maps navigation as a gentle "GPS on your wrist" that I could follow without looking at or listening to it. But for the life of me, I can't figure out when it says "go left" (three taps?) and when it says "go right" (a series of angry buzzes?).
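What I was hoping for is a small, learnable taptic vocabulary. Purely as a hypothetical illustration (this does not claim to match Apple's haptics APIs; the playback primitives are assumed), distinguishable patterns might be defined like this:

```python
# Hypothetical haptic "vocabulary" for turn-by-turn cues.
# A pattern is a list of (vibrate_ms, pause_ms) pairs.
TURN_LEFT  = [(80, 120), (80, 120), (80, 0)]           # three short, even taps
TURN_RIGHT = [(300, 150), (300, 0)]                    # two long buzzes
ARRIVED    = [(80, 80), (80, 80), (80, 80), (400, 0)]  # short-short-short-long

def play_pattern(pattern, vibrate, sleep_ms):
    """Play a pattern using injected device primitives (both assumed to exist)."""
    for duration, pause in pattern:
        vibrate(duration)
        sleep_ms(duration + pause)
```

The point is simply that a handful of clearly distinct patterns would let a rider "read" directions through the wrist without looking or listening.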

So Why Can’t I Leave Home Without It?

In truth, this is hard for me to qualify, but three weeks into the experience I found myself leaving home without it one day and feeling, well, naked.

For one, the Apple Watch grows on you. You get used to being able to get the time without getting out your phone, Siri-on-your-wrist makes a lot of sense (especially in the car), etc.

Maybe even more salient is how lazy we are. I found myself preferring to check some info on the watch rather than on the phone, because the watch was strapped to my wrist whereas the phone was all the way on the other end of the coffee table, requiring the considerable effort of stretching out, reaching over and clicking a button. This is not unlike the reason we all do email on the iPhone even at home, or at our desks, despite our perfectly good laptops being in the next room or even right in front of us.

And then there's the ecosystem. The Apple Watch is useful out of the box because it syncs with your iPhone, iPad etc. And while a lot about that ecosystem is imperfect from a software perspective, it's still the most complete one out there – which makes things even more convenient by saving you the hassle of loading it up with stuff, setting things up, etc. Did I mention people don't like hassle?

So while the current Apple Watch is definitely a version 1, and while Apple's software people (mostly) have a lot of work to do, if there's one thing I've learned about consumer tech over the last 15 years, it is that if something new is more convenient for people, then (most) other things being equal, they will easily get used to it and not be able to go back to the old ways. The Apple Watch makes some things more convenient and accessible, and as some of these are already things we do habitually, I believe it is here to stay.

Mobile Platforms, User Experience

Cortana Opens Up Where Siri Remains a Recluse

A Big Step Forward that Leaves Much To Be Desired

Cortana in Halo 4

Given Apple's and Google's dominance, not many of us follow Microsoft news anymore. But instead of coming apart at the seams, it looks like Microsoft is adopting the only credible strategy – trying to out-innovate its competition to the point where it becomes a leader again. Signs of success are visible with Azure becoming the most credible competition to AWS, and it seems like some of its artificial intelligence efforts are just as ambitious. Against that backdrop, the recent Cortana / Windows Speech Platform developments are steps in the right direction.

App vs. Platform

Back in September 2013, ahead of the iPhone 5 / iOS 6 launch, we were trying to predict Apple's next move. Siri had launched a year earlier on the iPhone 4S, and our wager at the time (at Desti / SRI) was that iOS 6 would open Siri as a platform, allowing application developers to tie their offerings into the speech-driven UX paradigm and bringing speech interaction to critical mass. Guess what – 18 months later, Siri is still a limited, closed service, and even a Google Now API is still a rumor. So Microsoft's announcements last week are a breath of fresh air and potentially a strategic move. In a nutshell, here are the main points of what was announced (and here's a link to the lecture at //build/):

  • Cortana available on all Windows platforms
  • 3rd party apps can extend Cortana by "registering" to respond to requests, e.g. "Tell my group on Slack that we will meet 30 minutes later" (a rough sketch of this model follows the list)
  • Requests can be handled in the app, or the app can interact using Cortana’s dialog UI
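For illustration only, here is a sketch of that registration model. This is not the actual Cortana API – just a hypothetical Python rendering of the idea that an app declares sentence templates it can handle and the assistant routes matching requests to it, either in-app or through the assistant's own dialog UI.

```python
# Hypothetical illustration of "register to respond to requests" - not the real Cortana API.
import re

REGISTRY = []  # (compiled template, handler, wants_assistant_ui)

def register_command(template, handler, use_assistant_ui=False):
    """template uses {slot} placeholders, e.g. 'tell my group on {channel} that {message}'."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>.+)", template.lower())
    REGISTRY.append((re.compile(pattern), handler, use_assistant_ui))

def route(utterance):
    """Hand a matching utterance to the registered app; report which UI should respond."""
    for pattern, handler, use_assistant_ui in REGISTRY:
        match = pattern.fullmatch(utterance.lower())
        if match:
            return handler(**match.groupdict()), use_assistant_ui
    return None, False

# Example registration by a hypothetical Slack-like app:
register_command(
    "tell my group on {channel} that {message}",
    lambda channel, message: f"Posting to {channel}: {message}",
    use_assistant_ui=True,
)

print(route("Tell my group on Slack that we will meet 30 minutes later"))
```

Note that this is exactly the limitation discussed below: the app only understands the literal sentence shapes it registered.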

Extending Cortana to Windows 10 is an important step towards making voice interaction with computers mainstream. Making Cortana pluggable turns it into a platform that can hope to be pervasive through a network effect. However, what was announced leaves much to be desired with regard to both platform strategy and platform capabilities.

Cortana API: Speech without Natural Language is an Unfinished Bridge

I'm a frequent user of Siri. There are simply many situations where immediate, hands-free action is the quickest / safest way to get some help or to record some information. One of Siri's biggest issues in such situations is its linear behavior – once it goes down a path, it's very hard to correct it and go down another. Consider, for instance, searching for gas stations while you're driving down a highway: you get a list of stations and then it kind of cycles through them by order of distance (which is not very helpful if you've already passed something). But going back and forth in that list ("show me the previous one") or adding something to your intent ("show me the one near the airport") is impossible. So often you end up going back to tapping and typing.

That's where a more powerful natural-language-understanding platform is needed, e.g. SRI's VPA, or potentially wit.ai (now owned by Facebook) or api.ai. Cortana's API lets you create rudimentary grammars where you more or less need to literally specify the exact sentences your app should understand, with rudimentary capabilities for describing sentence templates. There is no real notion of synonyms, of pursuing intent completion (i.e. "filling all the mandatory fields in the form"), of going back to change something, etc. So this is more or less an IVR-specification platform, and we all know how we love IVRs, right? If you want to do more, the app can get the raw text and "parse it" itself. That means that every app developer who wants to go beyond the IVR model needs to learn how to build a natural-language-understanding system. That's not how platforms work, and it will not support the proliferation of this mode of interaction – crucial for making Cortana a strategic asset.

Now, arguably you could say – well, maybe they never saw it as a strategic asset; maybe they were just toeing the line set by Apple and Google. That, however, would be a missed opportunity.
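To make the contrast concrete, here is a hypothetical sketch (not any vendor's API) of what intent completion looks like: the platform tracks which slots of an intent are still missing, asks for them, and lets the user revise earlier answers instead of restarting the whole interaction.

```python
# Hypothetical slot-filling dialog manager - an illustration of "intent completion",
# not code from Cortana, Siri or any real NLU platform.
class Intent:
    def __init__(self, name, required_slots):
        self.name = name
        self.required_slots = required_slots
        self.slots = {}

    def missing(self):
        return [s for s in self.required_slots if s not in self.slots]

    def update(self, **values):
        """Fill or revise slots at any point in the conversation."""
        self.slots.update(values)

    def next_prompt(self):
        missing = self.missing()
        if missing:
            return f"What is the {missing[0]}?"
        return f"Done: {self.name} with {self.slots}"

# "Find me a gas station" - the platform keeps asking until the intent is complete,
# and the user can change their mind without starting over.
intent = Intent("find_gas_station", ["location", "sort_order"])
intent.update(sort_order="by distance")
print(intent.next_prompt())                  # asks for the missing location
intent.update(location="near the airport")
print(intent.next_prompt())                  # intent complete
intent.update(sort_order="cheapest first")   # revising an earlier choice is fine
print(intent.next_prompt())
```

This is the layer a platform should provide once, so that app developers don't each have to build their own natural-language understanding.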

Speech-enabling Things is a Credible Platform Strategy

The Internet of Things is coming, and it is going to be an all-encompassing experience – after all, we are surrounded by things. For many reasons, these things will not all come from the same company. A company that owns a meaningful part of the experience of these things and makes them dependent on its platform – for UI, for personal data, for connectivity, etc. – would own the user experience for much of the user's world. In other words, give these device makers a standardized, integrated interaction platform for their devices and you own billions of consumers' lives.

Cortana in the cloud can be (a front-end to) a platform that 3rd-party developers use to speech-enable interactions with devices – whether they make the devices (e.g. the wearable camera that needs to upload the images it takes) or the experiences that use them (e.g. activating Pandora on your wireless speaker). Give these app / device developers a way to create this experience and connect it to the user's personal profile (which he/she already accesses through their laptop, smartphone, tablet, etc.) and you become the glue that holds this world together. This type of software-driven platform play is exactly the strategy Microsoft excelled at for so many years.

To be an element of such a strategy, Cortana needs to be a cloud service – not just a service available across Windows devices, but a cloud-based platform-as-a-service that can integrate with non-Windows things. That can be part of a wider strategy of IoT-focused platform-as-a-service (for instance, connecting your things to your personal profile, so they can recognize you and interact in a personalized context), but mostly it needs to be damn good, because Google is coming. Building a platform ecosystem and then sucking it for all it's worth used to be Microsoft's forte. Cortana in the cloud, as a strong NLU and speech platform, could be an important element of its comeback strategy.