Artificial Intelligence (AI) has definitely hit peak hype… it’s being written about an awful lot – even in design circles.
And yes, there are new tools and new capabilities that are sure to impact us designers in the not-too-distant future – see Yuri Vetrov’s article on algorithm-driven design for a solid write-up. Nevertheless, I think something important has been missed: the user in ‘User Experience’ may soon have a rather different role to play. And that means what we do as designers will have to change in profound ways.
Not so long ago I was driving to my sister’s place and it occurred to me that, while I’d been there many times before, I still didn’t know the way. I’d never bothered to learn the route. Weird… no?
At that moment it occurred to me that my navigation and driving experience had changed in a fundamental way, presaging what is likely to be a sea change in the way humanity engages with the digital world.
Oh, how things have changed
Why hadn’t I learned the route? Well, every time I go I tend to take a slightly different route under the firm and unflappable guidance of Google Maps’ turn-by-turn navigation. I now have no idea which route I’m about to use, since each time I go the route changes based on traffic conditions, roadworks, and probably countless other tiny factors. The algorithm isn’t limited by our hunter-gatherer cognitive architecture – it can consider countless factors and work through every last detail. The algorithm optimises for speed, not my desire to learn a route.
I may be the driver, but in some ways I’m no longer in the driving seat.
So, let’s take a step back. What has happened here? What is about to happen everywhere else, once AI sets in? Make no mistake: narrow AI will soon be making itself unnervingly comfortable in Medicine, Logistics, Law, Accounting, Finance and, well, just about any domain where better decisions can be made by leveraging the ever-increasing torrent of relevant data. See my colleague Rory’s thoughts on Future of work: AI used as an Intelligent Assistant.
The role of the user has changed. In digital experiences not powered by AI – a No-I UI, to coin a phrase – the software’s role is to automate processes and calculations, while the human’s role is to supply inputs (e.g. in the case of navigation, the start and end points), make decisions (e.g. which route to take), and consume outputs (e.g. the map visual and spoken directions).
In an Artificial Intelligence powered experience (an AI UI) things change. Taken to its extreme, it’s an experience in which we no longer make the decisions – the AI takes care of that. We are relegated to supplying inputs and consuming outputs we may not be able to truly understand or challenge. I don’t really understand what went into Google’s decision to take me down some obscure residential road; I just take it on faith that it’ll get me there faster in the end. I used to think I knew better, but I now generally admit defeat. How can I win against the rivers of data streaming into Google’s cloud?
To put a somewhat more empowering spin on this, let’s take a look at the new world of chess. It’s old news now that computers are scarily good at chess. Nobody, no matter how amazingly good they are, can beat a decent computer chess program.
Or can they?
Well, what most folk don’t know is that right now the best chess players on the planet are Centaur players. Even the very best computer can’t beat the very best Centaur. And what’s a Centaur? A human + a computer = a chess player who uses an AI to augment their game.
Technically this is just the latest in a long history of what’s called Augmented Cognition – tools that we’ve invented to make us smarter:
- Pen and paper augmented our memory
- Mathematical notation augmented our ability to calculate
- Graphs and charts augmented our ability to absorb and reason about data
There’s a great book on all this – Cognition in the Wild by Edwin Hutchins. I recommend it 😉
What, you might ask, are the implications for us UX/UI/Product/Service designers?
Well, it’s early days so I’m not sure if anyone has really worked it all out yet. Here are my tentative suggestions:
Empowering input experiences
We need to get really good at designing the experiences in which users give AIs the inputs they need: the precise goal, the input data set, etc.
Consider a modern-day call centre agent dealing with a mortgage application – they key in the client’s financials etc., press the submit button, and…
COMPUTER SAYS: [NO]
The result – a disempowered, uninformed, and demotivated call centre agent delivering a Kafkaesque customer experience.
AI UIs will have to be careful to create a sense of empowerment, sensitively handling the fact that the AI has wrested the decision-making away from the user while celebrating the user’s role as its master. The process of supplying the software with inputs needs to feel powerful, dynamic, important – not, as is so often the case, perfunctory and dull.
But there is more to it than that.
As Nick Bostrom, the doomsaying philosopher, pointed out, AIs can be very sensitive to the goals you give them. Things could go terribly, terribly wrong if a powerful AI were given the wrong goals. But that’s another story. Anyway, as the complexity of the tasks we give our AIs grows, the job of defining the right goal, asking the right question, and choosing the right parameters will become ever more critical. These input experiences will become ever more interwoven with the way in which AI UIs convey their outputs. Users will want to rapidly and fluidly iterate through different input choices as they receive the consequent outputs. For complex work, the skill of an empowered Centaur will be in how well they can navigate this interplay between inputs and outputs.
Straightforward and Honest
AI is largely driven by statistics rather than rules. Rules don’t scale very well, but when you are drowning in data and processing power, statistics do. The thing about statistics is that it tends to produce not just one answer, but a whole series of answers, each scored with a probability of some kind. So, to extend our driving example, you might have the probability of each candidate route being the fastest, the safest, the most scenic, etc. The underlying output data will often be terrifyingly complex in itself.
It’s up to the designer to take this complexity and create a simple, straightforward, yet honest output experience. It should be straightforward, showing the user just what they need to know and giving them choices that might make a material difference – “There is a very scenic route available, do you still want to take the fastest route?” However, it should also be honest and convey doubt. We sometimes get frustrated with current AI UIs when their recommendations go wrong, sometimes literally leading us up a blind alley. Under the hood, these AIs don’t deal in certainty. We, as designers, need to find better ways of conveying that uncertainty to users.
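To make this concrete, here’s a minimal sketch of an output experience that filters a probability-scored candidate list down to what the user needs to know, while still conveying doubt. The route names, scores, and threshold are all invented for illustration – real routing engines expose nothing this tidy:

```python
# Hypothetical example: an AI routing engine returns many candidate
# routes, each scored with probabilities rather than certainties.
candidates = [
    {"name": "Motorway",     "p_fastest": 0.72, "p_most_scenic": 0.05},
    {"name": "Riverside",    "p_fastest": 0.18, "p_most_scenic": 0.85},
    {"name": "Back streets", "p_fastest": 0.10, "p_most_scenic": 0.10},
]

def surface(routes, scenic_threshold=0.8):
    """Show the likely-fastest route, honestly convey the doubt, and
    offer a materially different alternative when one exists."""
    best = max(routes, key=lambda c: c["p_fastest"])
    message = (f"Taking {best['name']} - probably fastest "
               f"({best['p_fastest']:.0%} confident).")
    scenic = max(routes, key=lambda c: c["p_most_scenic"])
    if scenic is not best and scenic["p_most_scenic"] >= scenic_threshold:
        message += (f" A very scenic route ({scenic['name']}) is also "
                    "available - still want the fastest?")
    return message
```

The point isn’t the arithmetic; it’s that the design decision – what to surface, what to suppress, how to word the doubt – lives in this layer, and that’s the designer’s territory.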
Output experiences should also grow sensitively over time, being respectful and trustful.
If AI is to take the driving seat, there is every possibility that users will feel disempowered, disenfranchised, or disintermediated.
We need to identify ways to create experiences that are sensitive to what people feel they are good at. If a capability is to take on the decision-making in a process, then perhaps designers need to look for ways to incrementally demonstrate its worth on ever larger pieces of the puzzle – to gradually ease the user into being a Centaur. We must create experiences sensitive to people’s egos, mindful of what we are taking away and of the changes we are asking people to make to what they do.
Some folk like to think they know best (☝️, yup, guilty).
If some guy turns up and repeatedly shows you up in front of your friends you’re not going to take kindly to that son of a ****. The same goes for software. App deleted. Flaming review penned.
Beware of your user’s fragile ego. Let them down gently, let them have their way. As it is in service businesses, so it should be with AI UI design – the customer is always right (ish).
AI is smart. It can be incredibly good at anticipating our needs. But it’s not perfect. It does get it wrong.
Our experiences should be designed with this in mind – they should allow users to easily pick up the mistakes AIs make, course correct, and allow them to teach the algorithms, so that they can get it right next time.
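What might that course-correction loop look like? Here’s a toy sketch: the AI ranks options by weighted preferences, and when the user overrides its choice, the weights are nudged toward the attributes of what they actually chose. The options, starting weights, and learning rate are all illustrative assumptions, not any real product’s API:

```python
# Hypothetical sketch: assumed starting preferences for the user.
weights = {"speed": 0.8, "scenery": 0.2}

options = [
    {"name": "Motorway",  "speed": 0.9, "scenery": 0.1},
    {"name": "Riverside", "speed": 0.4, "scenery": 0.9},
]

def score(option):
    # Weighted sum of the option's attributes under current preferences.
    return sum(weights[k] * option[k] for k in weights)

def recommend():
    return max(options, key=score)

def user_override(chosen, learning_rate=0.2):
    """The user picked something else: blend the weights toward the
    attributes of the option they chose, so the next recommendation
    reflects what they actually value."""
    for k in weights:
        weights[k] = (1 - learning_rate) * weights[k] + learning_rate * chosen[k]
```

After a couple of overrides in favour of the scenic route, the recommendation flips – the user has taught the algorithm without ever seeing a settings screen.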
If a piece of software is crunching gigabytes of data, factoring in countless parameters, and popping out a wonderfully optimised result, how do we make sure the user actually accepts it? To the algorithm designer, it’s all well and good to say “it’s the optimum solution, you can’t hope to understand why it is, it just is,” but then they understand the statistical complexities of neural networks and the like… users don’t. Users need to build trust in the thing before they accept its outputs.
How do you build trust? Well, there’s a body of literature on this – aimed at an older generation of digital experiences, but still full of insights.
A tried and trusted approach has been to make the outputs – recommendations, in this case – human-understandable; the classic Amazon trope of “People who read XYZ bought ABC”. However, I suspect this drastically limits the magic that your AI wizards can conjure up. A wealth of more subtle techniques – leveraging tone of voice, and the gradual sequencing of experiences and rewards to steadily build trust – promise to be much more powerful.
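That classic human-understandable explanation needs nothing fancier than co-occurrence counts. A toy sketch – the basket data and wording are invented for illustration, not Amazon’s actual method:

```python
from collections import Counter

# Invented purchase histories: each set is one customer's basket.
baskets = [
    {"XYZ", "ABC"},
    {"XYZ", "ABC", "DEF"},
    {"XYZ", "GHI"},
    {"ABC"},
]

def explain(item):
    """Recommend the item most often bought alongside `item`, phrased
    in a way a human can immediately understand."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})
    if not co:
        return None
    other, _ = co.most_common(1)[0]
    return f"People who bought {item} also bought {other}"
```

The explanation is honest precisely because the mechanism is simple enough to narrate – which is also why, as argued above, it caps how clever the underlying AI can be.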
To sum it all up
In a future where designers are helping to create an ever more computerised and machine-driven world, I feel we will need to be ever more humane in our craft. We will need to think deeply about how the experiences we ship will impact our users, and design them to create proud, empowered Centaurs rather than disempowered, de-skilled, and beaten-down worker-drones.
In the age of shrink-wrapped software, UX was driven by a focus on efficiency and learnability. Then, with the web, came a focus on conversion and stickiness. Perhaps with AI, paradoxically, it will be a focus on humaneness.