AI can change the world by helping remove biases. More on that in a bit. But it can also help you interact with the Amitabh Bachchan video bot launched by Jio, an example of how interactive video will be used for self-driven learning, though in this case the learning is only about his new movies.
I read a lot of Mills & Boon novels – I also read Georgette Heyer, in my defence – while at engineering college. There wasn’t much else to do on hot summer weekends in Trichy. These books often had a theme of a powerful man falling in love with a woman less endowed with wealth and power. Mills & Boon has a whole series devoted to “Doctor – Nurse” romances, which sound horribly inappropriate in retrospect, but which I read completely untroubled in my 20s. Directly or indirectly, what we read, watch, and hear influences our values and our understanding of right and wrong. The #MeToo movement exposes how and why bad behaviour has been condoned and tolerated for nearly 50 years, and also why there is a new wave of awareness and protest. Just as books with casual racism were taken off shelves, we need to relook at content – books, videos, games, ads – that trivialises or normalises the marginalisation of women or any minority group.
Now imagine that your shiny new AI algorithms are trained on cultural assets such as these books: they would naturally assume that nurses are women (and pretty) and doctors are men (dark, rugged, handsome). In fact, when IBM used its Watson APIs and AI algorithms to analyse Bollywood movies and Man Booker-shortlisted novels, it found that men are described with words like hardworking, brilliant, honest and clever, whereas women are often pretty, young and helpless. AI trained on this biased information would naturally make decisions based on that prejudice, and become a tool that perpetuates the bias. Worse, unlike humans, the algorithms will not make exceptions. But trained properly, AI gives us a chance to break free from the past and guide humans away from their ingrained biases.
As AI starts getting used in areas like evaluation and recruitment, it is really important that business leaders get deeply involved in understanding the algorithms they own and operate, and the underlying data that was used to train the AI. What are the protected attributes for your organisation? Will you discriminate on the basis of gender, location, age, surname, operating system, education, race, profession, spend value or income? Have you established what is acceptable and what is not? Shouldn’t every organisation have an AI policy that lays out what it considers alright to use as an attribute for discrimination or differentiation?
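One simple, widely used check that leaders can ask their teams for is the "four-fifths rule" of disparate impact: the selection rate for the least-favoured group of a protected attribute should be at least 80% of the rate for the most-favoured group. Below is a minimal, hypothetical sketch of that check in Python; the data, field names (`gender`, `selected`) and threshold are illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch of a disparate-impact ("four-fifths rule") check
# on recruitment outcomes. All names and data here are illustrative.

def selection_rates(records, attribute, outcome="selected"):
    """Fraction of positive outcomes for each group of a protected attribute."""
    totals, positives = {}, {}
    for r in records:
        group = r[attribute]
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if r[outcome] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, attribute):
    """Lowest group rate divided by highest; below 0.8 flags possible bias."""
    rates = selection_rates(records, attribute)
    return min(rates.values()) / max(rates.values())

# Toy applicant pool: women selected at half the rate of men.
applicants = [
    {"gender": "F", "selected": True},
    {"gender": "F", "selected": False},
    {"gender": "F", "selected": False},
    {"gender": "M", "selected": True},
    {"gender": "M", "selected": True},
    {"gender": "M", "selected": False},
]

ratio = disparate_impact_ratio(applicants, "gender")
print(ratio)  # 0.5 here: well below the 0.8 threshold, so worth investigating
```

A check like this does not prove or disprove discrimination on its own, but it gives leadership a concrete number to demand from any team deploying an evaluation or recruitment model.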
As a customer, I already feel that some algorithms are exploiting their knowledge of me, my predictability and my brand loyalty. I am looking for an incognito equivalent for apps, where I do not have to reveal myself until the transaction is complete. In some situations I have also wished I was dealing with a smart bot rather than customer service executives (as with a leading travel portal whose employees assured me that my booking had disappeared, despite my showing them confirmation emails and app snapshots). This is going to be a period of extreme change in how we assess customers and business prospects, and it is important that we get it right.