Daniel Hulme (PhD) is one of the world’s leading experts on Artificial Intelligence (AI) and emerging technologies. He is the founder and CEO of Satalia, an award-winning company acquired by WPP in 2021 that provides AI products and solutions for global companies such as Tesco and PwC. Satalia’s mission is to create a world where everyone is free to innovate and where those innovations become free for everyone. Daniel is also the Chief AI Officer of WPP, where he helps to define, identify, curate and promote AI capabilities and new opportunities for the benefit of the wider group and society.
Lewis Silkin’s Alan Hunt & Cliff Fluet recently caught up with Daniel to get his expert take on all things AI.
Alan Hunt & Cliff Fluet (AH & CF): We’d like to kick things off by setting the scene for our readers. Daniel, could you explain about the two definitions of AI.
Daniel Hulme (DH): The first definition, which is unfortunately the most popular, is ‘getting computers to do things that humans can do.’ The premise here is that because of machine learning and advances in technology, we’re now able to get computers to do things that traditionally only humans were able to do. This definition presupposes that humans are the benchmark for intelligence, which is something that I don’t personally agree with. It’s clearly not sensible to measure machines against humans in this way. The second definition derives from a much better concept of intelligence: ‘goal-directed adaptive behaviour’. In other words, you can achieve a goal by making certain decisions or using different behaviours. The key word in this definition, though, is ‘adaptive’. By this I mean systems that can adapt themselves, learning whether decisions were good or bad in order to make better decisions in the future. The two definitions describe very different flavours of AI, and they raise different problems and ethical concerns.
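To make the ‘adaptive’ part of that second definition concrete, here is a minimal, purely illustrative sketch (not anything Satalia- or WPP-specific; all names and numbers are hypothetical). An agent pursues a goal, observes whether each decision turned out well or badly, and shifts its future behaviour accordingly:

```python
import random

# A minimal sketch of 'goal-directed adaptive behaviour':
# the goal is to maximise reward, and the agent adapts its choices
# based on feedback about whether past decisions were good or bad.

class AdaptiveAgent:
    def __init__(self, actions, exploration=0.1):
        self.actions = actions
        self.exploration = exploration          # chance of trying something new
        self.value = {a: 0.0 for a in actions}  # learned estimate of each action
        self.count = {a: 0 for a in actions}

    def choose(self):
        # Mostly exploit what has worked so far, occasionally explore.
        if random.random() < self.exploration:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Update the running average of how good this action has been.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

# Toy environment: action 'B' is better on average, and the agent discovers this.
agent = AdaptiveAgent(["A", "B"])
for _ in range(1000):
    a = agent.choose()
    reward = random.gauss(1.0 if a == "B" else 0.5, 0.1)
    agent.learn(a, reward)
print(agent.value)  # the estimate for 'B' ends up higher, so 'B' is chosen more often
```

The point of the sketch is simply that the system is not told which action is ‘right’; it adapts its behaviour from feedback, which is what distinguishes this flavour of AI from a fixed set of hand-written rules.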
AH & CF: Speaking of ethical concerns, it would be remiss of us not to delve into this in more detail. What are your views on AI ethics?
DH: I think people often confuse AI ethics with AI safety. Most problems that we see in AI at the moment are safety problems in terms of ensuring that systems are doing what we intend them to do. I would argue, somewhat controversially, that there’s in fact no such thing as AI ethics, and that’s because there isn’t a system that I know of that creates its own intent. Humans create an intent, for example to maximise profit, and they use systems to achieve that intent. It is therefore not the system that’s making unethical decisions; it’s the intent behind it.
Where ethical concerns do rear their head is where companies use AI to exploit the vulnerabilities of their customers. An example I like to use is that of ride-hailing companies. It is not inconceivable that the AI these companies use could realise that when people’s phone battery is low, they’re willing to spend more money to get home. These companies could then choose to exploit this by raising the price. This is where the line between what is acceptable and what is exploitative becomes quite blurred.
AH & CF: Another concern that people have to do with AI is this idea that it’s going to lead to mass unemployment. Do you share this concern?
DH: Yes and no. There is a concern that AI could replace lots of jobs and that people won’t be able to retrain quickly enough for our economies to cope, but I think there is a way to combat this, which is to create new economies of value exchange. By this I mean trading something other than money in exchange for goods. This model already exists online – the reason we get access to Google for free is because we’re giving them our attention and data, which they can then use to extract value through advertising or by selling our data. This is how they’re able to convert people from ‘freemium’ to ‘premium’. What I’m really interested in is how we can replicate this concept in the physical world. The example I like to use is this: instead of exchanging money for a haircut, could I exchange my time and resources instead? In other words, in the 30 minutes it takes to get my hair cut, rather than me paying for that service, could I do a task for an organisation, so that they’re effectively paying for my haircut? It is this that might take the edge off the world of ‘no work’.
AH & CF: Looking ahead, what does the future of retail, both physical and digital, look like to you, and how do you think this new model of exchange could fit into that?
DH: The most common and well-established answer to this question is that the future of retail is going to be a hybrid, omnichannel experience for consumers. What I’m more interested in is an emerging concept called ‘optichannel’, whereby organisations create digital twins of themselves. A digital twin is essentially a simulation of your entire organisation. The benefit of a digital twin is that by creating this connected digital simulation, you’re able to see what impact your marketing-driven demand is having on your supply chain, so you can either fulfil your customers or identify what you need to change in your supply chain to do so. The interesting extension to this is that if you have spare capacity in your supply chain, such as extra stock or vehicles, you could use that to create targeted marketing campaigns to drive people to buy that extra stock.
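As a purely illustrative sketch of that idea (the retailer, figures and thresholds below are all hypothetical, and a real digital twin would model far more of the organisation), the snippet links a simulated marketing uplift in demand to a simple supply-chain model, flagging either the change needed to fulfil customers or the spare stock that could be pushed via a targeted campaign:

```python
from dataclasses import dataclass

# A toy 'digital twin' linking marketing demand to supply-chain capacity.
# All figures and names are hypothetical and purely illustrative.

@dataclass
class SupplyChain:
    stock: int               # units in the warehouse
    daily_delivery_cap: int  # units that can be shipped per day

def simulate_week(chain: SupplyChain, campaign_uplift: float, base_daily_demand: int):
    """Simulate seven days of campaign-driven demand against the supply chain."""
    unfulfilled = 0
    for _ in range(7):
        demand = round(base_daily_demand * (1 + campaign_uplift))
        shippable = min(demand, chain.stock, chain.daily_delivery_cap)
        chain.stock -= shippable
        unfulfilled += demand - shippable
    return unfulfilled, chain.stock

# Ask the twin: can we fulfil customers if marketing lifts demand by 20%?
twin = SupplyChain(stock=900, daily_delivery_cap=150)
missed, leftover = simulate_week(twin, campaign_uplift=0.2, base_daily_demand=100)

if missed:
    print(f"Supply chain change needed: {missed} units of demand would go unfulfilled")
elif leftover:
    print(f"Spare capacity: {leftover} units left over could drive a targeted campaign")
```

The design point is simply the feedback loop the answer describes: demand-side plans and supply-side capacity sit in one connected model, so either side can be adjusted before decisions are made in the real organisation.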
In answer to the second part of your question, I think it is essential that the future of retail incorporates this new model of exchange I’ve talked about, and this is something that I’m passionate about. I’m interested in creating businesses that offer a ‘premium’ version which customers pay for, alongside a ‘freemium’ version where they can access something for free because of this alternative value exchange. This could work across a range of industries, including food, healthcare, education and energy.
If we think back to my ‘haircut’ analogy, I think the extension to this is that as we start to live more in a ‘metaverse’, our avatar or digital twin might also become ‘valuable’. What I mean by this is that because our avatar would be developed based on all of our interactions in this digital world, perhaps this model of exchange could work via your avatar in the metaverse. Retailers could use this model then in both physical and digital spaces, building on this hybrid experience for customers.
AH & CF: Finally, what advice would you give to brands on how they can use AI to deliver real value for their customers?
DH: I believe that AI can allow brands to understand the nuances of what customers value and begin to deliver goods that fulfil these values. There is a hierarchy of values when it comes to customer fulfilment, whether that be social, emotional or functional needs. I’m hoping that over the next decade, we’ll start to see companies bring in a much more holistic set of values that address these needs. Customer fulfilment is much more than the ‘faster, better, cheaper’ model we see everywhere at the moment – I don’t think this can work as a long-term business model. Delivering goods quickly only fulfils one need; it doesn’t provide hope, motivation, wellness, fun or any of the other qualities that enrich our lives.