Looking towards a future of Human-Centred Artificial Intelligence



To celebrate World Usability Day 2020, we are exploring this year’s theme of Human-Centred Artificial Intelligence. A few decades ago, the concept of Artificial Intelligence (AI) was popularised in the realms of science fiction, where computers and robots became so intelligent that they both outwitted and turned against humanity. Although we have not seen any robot uprisings yet, today AI is a fully-fledged reality that is embedded into our lives. In this article, we explore AI’s potential to enhance and complement human capabilities, the risks it poses if it goes beyond our control, and how UX research can ensure that AI is developed in a human-centric way.

AI is already enhancing our digital experiences

AI is made possible by human intelligence: engineers train task-specific algorithms on large datasets and integrate them into digital products. If you have experienced any of the below, then the results you were given are likely to have been influenced by AI (a simplified sketch of the idea follows the list):

  • Received personalised recommendations on your favourite streaming platforms
  • Had brief conversations with a virtual assistant (such as Siri or Alexa)
  • Relied on a digital map to show you the best route from A to B
  • Asked a chatbot to help solve a customer service query.
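
To make the dataset-driven, task-based idea concrete, here is a minimal, illustrative sketch of a content recommender of the kind a streaming platform might use. The catalogue, genres and scoring rule are entirely made up for this article and bear no resemblance to how any real service actually works.

```python
# Minimal sketch of a dataset-driven recommender: suggest titles whose genres
# overlap most with what a user has already watched. All data here is invented
# purely for illustration; real streaming platforms use far richer models.

catalogue = {
    "Space Drama":    {"sci-fi", "drama"},
    "Robot Uprising": {"sci-fi", "action"},
    "Baking Battles": {"reality", "food"},
    "Courtroom Days": {"drama", "crime"},
}

watch_history = ["Space Drama"]

def recommend(history, catalogue, top_n=2):
    # Collect the genres the user has already engaged with.
    liked_genres = set()
    for title in history:
        liked_genres |= catalogue[title]

    # Score every unwatched title by genre overlap and return the best matches.
    scores = {
        title: len(genres & liked_genres)
        for title, genres in catalogue.items()
        if title not in history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(watch_history, catalogue))  # e.g. ['Robot Uprising', 'Courtroom Days']
```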

AI’s everyday use helps people get more accurate and relevant results for their needs, whilst also enhancing the user experience of digital products. For example, the smart thermostat Nest includes internal sensors that use AI to learn a user’s heating and cooling habits at home, whilst linking to their smartphone to understand when they are out. This is a good example of how AI can enhance the user experience of a product: it learns the user’s routine and quietly takes the task off their hands.
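
As a rough illustration of that habit-learning idea (and emphatically not how Nest is actually implemented), the sketch below records the temperatures a user chooses at each hour and reuses the average as a schedule, dropping to an eco setpoint when the linked phone reports the user is away. The class name, setpoints and hourly granularity are all assumptions made for the example.

```python
# Toy sketch of a "learning thermostat": average the temperatures a user has
# chosen at each hour of the day, then reuse that average as the schedule.
# Purely illustrative; not how any real smart thermostat is implemented.

from collections import defaultdict
from statistics import mean

class LearningThermostat:
    def __init__(self, away_setpoint=16.0):
        self.history = defaultdict(list)   # hour of day -> chosen temperatures (°C)
        self.away_setpoint = away_setpoint

    def record_adjustment(self, hour, temperature):
        """Remember what the user set the thermostat to at this hour."""
        self.history[hour].append(temperature)

    def target_temperature(self, hour, user_is_home):
        """Mirror the learned habit, or drop to an eco setpoint when nobody is home."""
        if not user_is_home:
            return self.away_setpoint
        if self.history[hour]:
            return mean(self.history[hour])
        return 20.0  # sensible default before any habits are learned

thermostat = LearningThermostat()
thermostat.record_adjustment(hour=7, temperature=21.0)
thermostat.record_adjustment(hour=7, temperature=22.0)
print(thermostat.target_temperature(hour=7, user_is_home=True))   # 21.5
print(thermostat.target_temperature(hour=7, user_is_home=False))  # 16.0
```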

AI is improving our healthcare

AI has also made breakthroughs in the world of work, helping humans to perform tasks more efficiently, safely and accurately, whilst lightening our workload. For example, recent research found that an AI system outperformed doctors in diagnosing breast cancer by ‘reading’ more than 29,000 mammograms and detecting malignant cells. The AI achieved a 2.7% reduction in false-negative diagnoses, meaning it missed fewer cancers than the doctors it was compared against. This use of AI has the potential to revolutionise healthcare and underlines AI’s life-saving potential.
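
For readers unfamiliar with the metric, the false-negative rate is simply missed cancers divided by actual cancers, and a reduction is the difference between the human and AI rates. The short sketch below shows the arithmetic; the counts are invented purely to reproduce a 2.7% gap and are not the study’s data.

```python
# How a reduction in false negatives is measured: the false-negative rate is
# missed cancers divided by all actual cancers. The counts below are invented
# for illustration only; they are not the figures from the study cited above.

def false_negative_rate(missed_cancers, total_cancers):
    return missed_cancers / total_cancers

human_fnr = false_negative_rate(missed_cancers=90, total_cancers=1000)  # 9.0%
ai_fnr    = false_negative_rate(missed_cancers=63, total_cancers=1000)  # 6.3%

absolute_reduction = human_fnr - ai_fnr
print(f"Absolute reduction in false negatives: {absolute_reduction:.1%}")  # 2.7%
```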


We are also at risk of losing control of AI

AI, however, is at a crucial juncture. Whilst its benefits in enhancing the way people work are already being observed in sectors such as healthcare and manufacturing, AI is also being met with trepidation and fears that we may lose control of the technology we create. It is important that AI not only enhances our work and lives, but that it is also trusted, safe and controllable.

When users have no control over AI and little understanding of how it shapes what they see, it can have a negative social and psychological impact. For example, social media platforms rely heavily on AI and algorithms to populate curated news feeds and keep their users engaged, which in turn has been associated with the spread of fake news and the exploitation of psychological vulnerabilities through dark patterns. AI and algorithms on social media platforms have been described as “hijacking people’s minds”, and several high-profile tech figures who have since deserted the platforms they designed now urge people to take back control by deleting their social media accounts.
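
To see why optimising purely for engagement can surface misleading content, here is a deliberately simplified, hypothetical sketch of engagement-driven feed ranking. The posts and predicted click rates are invented, and real platforms use far more complex (and far less transparent) models.

```python
# Deliberately simplified illustration of engagement-driven feed ranking: posts
# are ordered purely by predicted interaction, so provocative content that
# attracts clicks rises to the top regardless of accuracy. Hypothetical data.

posts = [
    {"headline": "Local council publishes budget report",  "predicted_clicks": 0.02},
    {"headline": "You won't BELIEVE this miracle cure",     "predicted_clicks": 0.31},
    {"headline": "Weather warning issued for the weekend",  "predicted_clicks": 0.08},
]

# Ranking only on engagement ignores whether the content is true or useful.
feed = sorted(posts, key=lambda post: post["predicted_clicks"], reverse=True)

for post in feed:
    print(post["headline"])
```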

There has also been criticism that AI produces biased results, rooted in the large, outdated datasets still used by some companies. For example, when one author entered the word “Girl” into Google Search, he noted that people of colour were completely absent from the results, and that the results drew on datasets established when the company’s workforce was predominantly white and male. Racial bias is also a known issue in facial-recognition software. In these cases, the end user is not in control of the results and is nudged into seeing what the company wants them to see, rather than results that are useful to them.

The solution: bridging the gap between UX and AI

Historically, the engineering teams that built algorithms and AI incorporated little user involvement or research into their processes, which in turn made for a less human-centred experience for the end user. AI’s boom in recent years, its integration into the products we use, and its influence on how people think now call for a more human-centred approach. This will ensure that we, as users, can trust AI, and that AI can uphold human values.

To achieve more human-centred AI, researchers have suggested integrating UX processes into the development of AI technology. Their suggestion is to develop AI prototypes in parallel with focus groups and workshops involving end users, with rigorous and iterative user testing throughout. UX teams can then compare how users feel each AI prototype performs in terms of usability, effectiveness and trust in the AI system.
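
As a rough sketch of what that comparison might look like in practice, the example below averages hypothetical user ratings for two prototype rounds across the three dimensions mentioned above. The 1–7 scale, the round names and the scores are all assumptions made for illustration.

```python
# Sketch of how a UX team might compare rounds of user testing on an AI
# prototype. The dimensions (usability, effectiveness, trust) come from the
# article; the 1-7 rating scale and the scores themselves are hypothetical.

from statistics import mean

test_rounds = {
    "prototype_v1": {"usability": [4, 5, 3, 4], "effectiveness": [3, 4, 4, 3], "trust": [2, 3, 3, 2]},
    "prototype_v2": {"usability": [5, 6, 5, 6], "effectiveness": [5, 5, 4, 5], "trust": [4, 5, 4, 4]},
}

for prototype, dimensions in test_rounds.items():
    summary = {name: round(mean(scores), 1) for name, scores in dimensions.items()}
    print(prototype, summary)
# prototype_v1 {'usability': 4.0, 'effectiveness': 3.5, 'trust': 2.5}
# prototype_v2 {'usability': 5.5, 'effectiveness': 4.8, 'trust': 4.2}
```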

The researchers also suggest that UX teams design “Why” statements into the user interface to clearly explain the AI-generated results on screen. This gives users a level of control and lets them critically assess whether the AI-generated information is relevant and trustworthy, shifting the “computer is always right” paradigm present in many digital products. The team behind Google Flights demonstrates this by telling users what type of data is used in their flight price predictions, helping users apply their own judgement about which price best suits their expectations. They also visualise AI-generated results in easy-to-understand graphs, each explaining how confident users should be that prices will change or stay the same. This example shows that user research can guide the way AI is implemented in our digital products, ensuring users remain in control whilst allowing AI to complement our lives rather than take them over.
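
As a sketch of how a “Why” statement might be represented in code, the example below attaches an explanation and a confidence label to an AI-generated result, loosely inspired by the Google Flights example. The data structure, field names and wording are hypothetical and are not Google’s implementation.

```python
# Sketch of a "Why" statement attached to an AI-generated result: the
# explanation and confidence travel with the prediction so the UI can always
# show them together. Structure and wording are hypothetical.

from dataclasses import dataclass

@dataclass
class ExplainedPrediction:
    prediction: str   # what the AI is telling the user
    why: str          # which data the prediction is based on
    confidence: str   # how much weight the user should give it

    def render(self):
        return f"{self.prediction}\nWhy: {self.why}\nConfidence: {self.confidence}"

prediction = ExplainedPrediction(
    prediction="Prices for this route are unlikely to drop before departure.",
    why="Based on historical price changes for similar flights on this route.",
    confidence="Medium: past trends are informative but not a guarantee.",
)
print(prediction.render())
```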

Developing AI systems and embedding them into digital products in a human-centric way is key to providing a more reliable, trustworthy and ethical future relationship with AI. By merging user research, UX methodologies and AI development, we can give AI the opportunity not only to enhance and complement the way humans live and work, but to ensure it can be trusted and does not slip from our control.

Our experienced team can help with all areas of user experience; get in touch to discuss how we could help you…
