Designing for trust with AI: Ruminations on a Google Design Breakfast

AI, Design, Experience, Interaction Design, UX

Happily, I had the honour of being invited to the first in a series of informal discussions by the Google Design team in their sparkling new King's Cross offices.

Google's 100-strong London design team wants to become more visible to the wider design community – and more engaged in the wider design conversation.  A worthy aim, and a direction I hope more of the big, impactful design teams of this great city begin to take.

It was a great event! The conversation centred on the shiny (perhaps scary) newness of AI, and how to create trust in the experiences we are developing with it.  We had a panel of insightful speakers to drive the conversation:

  • Sarah Gold – A sprightly yet incredibly astute and articulate designer, not long out of university and already running her own agency, IF
  • Rachel Coldicutt – Runs doteveryone, a think tank focused on making our digital world a better place. She seems to be driving some important efforts in this space and had lots of wide-ranging comments
  • Priya Prakash – An interaction designer and founder of D4SC, a London-based urban innovation company
  • Tom Taylor – Chief Product Engineer at Co-op Digital, rebuilding the Co-op for the 21st century

Alongside two eloquent and sharp UX Directors from Google:

  • Jens Riegelsberger – Manages teams across Google core properties such as Search and Maps
  • Matt Jones – Of Berg fame, who is now working in Google’s Research & Machine Intelligence division

I thought I’d bring together some of the threads of conversation that resonated most with me, and add a little of my own gloss. I’m going to focus on the topic of trust, even though the conversation ranged quite widely, often straying interestingly into familiar anxieties about the impact of AI on society.

What is trust?

Trust in digital is an old and much-written-about topic.  As Tom Taylor pointed out, we started off in the mid-2000s with ecommerce experiences focused on how to get bricks-and-mortar shoppers to part with their cash online.  Debates raged, padlock icons proliferated, books were written.  But while ‘trust’ online is a largely solved problem as far as commercial design practice goes, AI changes the game, to the extent that we need to ask ourselves these questions anew.

AI technologies leverage data – your data – in ways that can breach implicit notions of trust; are you okay with Google tracking your every move, off and online? AI is able to infer things about you that you may not be comfortable with; would you be happy if anyone with a photo of you could figure out your sexual orientation? So it’s only natural that we, as designers, stop to ask some hard questions about not only how to establish or engender trust, but also how to deserve it.

Earned vs engendered trust

I think it’s worth stopping here to dwell on this distinction. In a commercial setting designers are sometimes asked to look at how to make users trust an experience using the UI itself while leaving the underlying service and proposition unchanged.  This is what I call ‘engendered trust’ (and what Matt Jones called ‘Trustiness’) – a bigger padlock icon, a clean smiling headshot, high quality visual design.  But ultimately, what’s more important is whether the service or experience actually deserves trust. Will details be lost to hackers? Will your purchase arrive?

This is what I call ‘earned trust’.

While as designers we are often asked to focus on creating designs that engender trust, most of the conversation at the Google Design Breakfast focused on how to earn trust.  I think that’s appropriate, since it is fundamentally more important to get this right – even if, sadly, the solutions often sit beyond most designers’ sphere of influence right now.  But the more we talk about it, the more influence we can have on the broader product organisation.

Impact on organisations

Trust is a vital ingredient of a successful user experience. Without it your users will simply disengage. But it goes wider than the digital user experience alone; companies need people’s trust for their brands to remain effective.  As Rachel Coldicutt pointed out, the recent hack at Equifax will have done huge damage to their brand, as demonstrated by their nosediving share price and the recent exit of their CEO.

Companies must create services that customers can genuinely trust, as a business imperative.

In addition, governments are taking action to push companies in the right direction. GDPR, the new General Data Protection Regulation, comes into force in May 2018. It’s an EU-wide piece of legislation (which won’t be affected by Brexit) and will replace the good old UK Data Protection Act. Amongst other things it requires companies to gain informed consent for their data collection, and provides users with the right to access, correct, delete and restrict the processing of their data. Companies in breach can be fined up to 4% of their global annual turnover! Unsurprisingly the regulation has already had quite a bit of impact; for example, Wetherspoon’s recently decided to delete their entire email database.
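To make those four rights a little more concrete, here is a minimal, hypothetical sketch of how a service’s data layer might expose them. The class and method names are invented for illustration, and real compliance involves far more (audit trails, backups, third-party processors):

```python
# Hypothetical sketch of GDPR's data-subject rights: access,
# rectification, erasure and restriction of processing.
# Names are invented for illustration -- not a legal blueprint.
from dataclasses import dataclass, field


@dataclass
class UserRecord:
    email: str
    profile: dict = field(default_factory=dict)
    processing_restricted: bool = False  # honour the right to restrict


class UserDataStore:
    def __init__(self):
        self._records: dict[str, UserRecord] = {}

    def access(self, email: str) -> dict:
        """Right of access: return everything held about the user."""
        record = self._records[email]
        return {"email": record.email, "profile": dict(record.profile)}

    def correct(self, email: str, updates: dict) -> None:
        """Right to rectification: apply user-supplied corrections."""
        self._records[email].profile.update(updates)

    def delete(self, email: str) -> None:
        """Right to erasure: remove the record entirely."""
        del self._records[email]

    def restrict(self, email: str) -> None:
        """Right to restrict processing: keep the data, stop using it."""
        self._records[email].processing_restricted = True
```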

What to do?

We are very much in the early days of AI as a technology, let alone as an integral part of user experiences, so there is everything to play for.  And the panel had lots of interesting ideas to suggest.

Work it out in a play pen – Sarah Gold’s agency, IF, experiments with models to create trust in AI by building toy use cases in the design studio. One example she talked about was a bot that tries to predict when you want tea – presumably pooling charitable tea-making efforts by junior designers 😉 (a toy along these lines is sketched below).
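Purely to illustrate the play-pen spirit – the details of IF’s actual bot weren’t shared, so everything here is invented – a toy tea predictor could start as simply as this:

```python
# A toy sketch in the spirit of a tea-predicting bot (details invented):
# learn from logged "tea moments" and predict the likeliest hour for the
# next cup. Deliberately crude -- the point of a play pen is to explore
# the trust questions, not the modelling.
from collections import Counter

tea_log = [9, 11, 11, 15, 11, 16, 9, 11]  # hours when someone asked for tea


def predict_next_tea_hour(log):
    """Predict the most common tea hour observed so far."""
    hour, _count = Counter(log).most_common(1)[0]
    return hour


print(f"The bot will offer tea at {predict_next_tea_hour(tea_log)}:00")
```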

Create a ‘Trust techmark’ – Rachel Coldicutt’s think tank has been exploring a trustworthy techmark “to indicate responsible and trustworthy digital products and services, to enable people to make more informed choices when selecting technologies to buy or use”.

A principled editorial policy for code – Another interesting suggestion was that digital product organisations should have a clear set of principles designed to ensure trust; perhaps something along the lines of Google’s “Don’t be evil”. These could be applied to algorithms, code, interactions and business models, just as a newspaper’s editorial policy is applied to its journalists’ prose.

A language for understanding AI decisions – AIs, particularly those of the deep learning variety, make unfathomable decisions. They assimilate millions of data points, along thousands of dimensions, and factor them all into their recommendations. The human mind simply can’t grapple with that kind of decision. Sadly, for us ‘meatheads’ to understand what’s going on, the reality needs to be simplified – in the same way that we explain complicated realities to our young children. A lot of research is currently happening in this space, as ultimately we need to find a language that can express why the “computer says no” in a way that people can genuinely engage with; one simple, widely used technique is sketched below.
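As a hedged illustration – the dataset and model here are stand-ins, not anything discussed at the event – permutation importance is one crude but human-readable way to answer “which factors mattered most?”: shuffle one input at a time and watch how much the model’s accuracy drops.

```python
# A minimal sketch of one explainability technique: permutation
# importance. Model and dataset are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and measure the drop in test accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the three most influential features in plain terms.
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"{X.columns[i]}: accuracy drops by "
          f"{result.importances_mean[i]:.3f} when shuffled")
```

It’s far from a full explanation of a deep model’s reasoning, but it hints at the kind of simplified, human-scale language the research in this space is reaching for.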


Looking back on the discussion, it’s apparent that while trust is a critical issue for user experience, many of the key levers we need to pull to make a difference are not solely in our hands.

To really make a difference, the design community – and we as individual designers – need to broaden the conversation.  We need to involve technologists, digital executives and product managers.  We need to get them on board and work with them to drive the change we sorely need, to make AI truly serve its users.

We’d love to hear your thoughts about AI and trust, so please do get in touch at hello@rma-consulting.com.
