Businesses run the risk of embedding new AI-driven processes that exacerbate bad habits and entrench prejudices rather than streamlining operations. This was a key discussion point at a panel event hosted by DPL in London recently.
On the panel were Dr Dimitris Vlitas, Visiting Lecturer at the University of Toronto and Head of AI at DPL; Emma Wright, Commercial Technology Partner at law firm Kemp Little LLP; and Richard Robinson, Chief Commercial Officer at DPL. The panel was chaired by Russ Shaw, founder of Tech London Advocates and angel investor. Russ has held executive positions at Skype, Telefónica and O2.
The discussion focused on the challenges of regulation and how institutions and businesses can work together to develop a deeper level of trust in AI amongst the wider population and dispel any common misconceptions.
The first section of the discussion focused on the risk of bias in AI.
Dr Vlitas set out his reading of the complexity of the situation. The distortion of AI, he said, can come from many sources. It might be the data sets that algorithms are trained on, or it might be the intrinsic bias of the trainers. For instance, he explained, the cohort building and training the algorithms might tend to be of a certain gender, ethnicity or socio-economic background. He added that awareness of bias itself can be skewed. Let's face it: AI is simply a microcosm of society, and unconscious bias can easily be built into the data. Minorities, for instance, are likely to spot biases and issues that others may not be aware of or appreciate. The panel agreed that it is vital that diverse groups get into AI and data at the grassroots level, choosing to study STEM subjects.

Dr Vlitas also pointed out the intricacy and scale of the challenge. In Natural Language Processing (NLP) models trained on Wikipedia data, for example, algorithms might have linked the pronoun "he" with "friend" and the pronoun "she" with "lover". Similarly, they might have linked "he" with "scientist" and "she" with "secretary".
Dr Vlitas concluded that it is crucial to remember that AI models are, at the end of the day, just mathematical models. They are not judicious; they are process-driven. A vital step in making AI ethical and unbiased is to turn the focus on society itself and examine its general biases.
The second section of the discussion focused on AI as a force for good.
Emma Wright pointed out that AI is already being used to make great headway in the fight against climate change and in healthcare; nevertheless, we need to ensure there are appropriate checks and balances on the practice and employment of AI. It's vital, she added, that we take a principled approach to AI ethics in order to keep up with the pace of technological evolution. It is also very important to build in a layer of human oversight with true regulatory power. Richard Robinson added that AI is very powerful, but with great power comes great responsibility.
The third section focused on AI and Regulation.
Emma Wright pointed out that there is a need for regulation on both the input and output sides of AI algorithms. But, critically, she asked, who is going to enforce such regulations? Regulations such as GDPR can be particularly difficult for companies to interpret and implement, not to mention expensive. Let's be clear, she added: GDPR was not a new thing; it was the result of an evolutionary process, and most of it was already legally codified. People only took it seriously because of the massive fines attached to it. Regulation is only as powerful as the power vested in it. Richard Robinson said that some companies are still struggling to interpret and implement GDPR, despite the regulation having been in place for over a year. Inconsistent international standards were also problematic. Regulation was important, but in a world in which technology makes borders porous, different levels of regulation had commercial consequences based on geography.
Embedding Good AI Practice
Dr Vlitas and Richard Robinson had the following advice for startups: good AI practice and principles need to be embedded into the very DNA of your business as you're building it. We say data is the 'new oil', but we don't regulate the oil itself; we regulate its outputs, for example emissions from the fuel used in cars. That said, we have seen the misuse of standards certification in the auto industry. AI tools should show they are ethically built, but there is an issue that some algorithms are black boxes, so you can't even open them up to look inside and interrogate them.
Finally, the panel discussed whether we are in the midst of an AI hype cycle and whether it is capable of delivering what people profess it can.
The collective conclusion was yes to both. Yes, we are in a hype cycle, but there is also some fantastic, revolutionary work being done, around cancer detection for example. On a more cautionary note, it was pointed out that when amazing things are built, it takes a while before there is widespread adoption. Furthermore, some people do massively overhype the capacity, capability and accessibility of AI; they over-promise and under-deliver. Already, though, where implementation is optimal, remarkable things have been delivered with AI at their heart.
Dr Vlitas stressed: It is really important to be careful about your source of information. Stick to reputable and appropriately nuanced sources.
All were of the view that we should be wary of sweeping predictions. It is always important to interrogate such predictions and probe the metrics, as well as the rationale, behind them.
The trough of disillusionment, a structural element of any technology hype cycle, will inevitably come. The potential of AI is real; it is with us now, it will get much, much better and at times considerably worse. Businesses need to embrace it or lose competitive advantage, but caveat emptor: there are many snake oil salesmen out there.