Katie King • July 4, 2023
It has been a busy few months, with Keynotes and training opportunities taking me all over the world. I have had the honour of heading to the Netherlands, South Africa, Switzerland, the Faroe Islands, Monaco, Jordan, and beyond. On these trips, I have engaged in amazing and insightful conversations with professionals from various industries and job functions about their challenges and opportunities. But the one topic that always seems to come up is ethics and regulation.
At present, AI is still technically unregulated. The EU is close to enacting its AI Act, which would become the world's first official AI law. This will likely spur other nations to follow suit and serve as the blueprint for subsequent AI legislation. In the meantime, various countries, including the Vatican, have published their own guidance and best practices to help provide guardrails. Industry and trade bodies have done the same.
But even so, many still fear AI and worry about its potential to cause harm. Is it possible for AI to be ethical, and how can that be achieved without proper regulation?
Main Ethical AI Concerns
Ethics are a major concern for any new technological advancement, not just AI. Any new, unexplored territory requires consideration and some governance so that it does not turn sour or cause harm.
Ethical AI is any use of AI that does not cause harm and does not contribute to harmful societal structures. Technology should make our lives better, not oppress us or widen existing gaps within our society. In the case of AI, the two primary concerns are privacy and bias, along with the impacts that improper use of technology and poorly trained algorithms can have on society at large.
AI is purely data-driven. Unless we feed it information, it cannot do its job. Ethics come into play when we start to consider where this information comes from, how it is sourced, how it is used, and how it is managed. Are data collection methods transparent and non-invasive? Are customers aware that their data is being collected and used? These days there is often an unspoken agreement between businesses and their customers: the customer lets the business collect and use their data in exchange for better, more tailored experiences. Alongside this, there is an expectation that their information will be used responsibly and safely.
We have seen the fallout from various data breaches over the years. When these breaches occur, businesses bear the consequences both operationally and reputationally. While there are no solidified AI laws yet, there are data protection regulations in place, such as the GDPR. These do offer some protection, but many fear there are too many gaps and loopholes when it comes to AI.
Potentially more dangerous is the impact of bias on AI use. While AI is not inherently biased, it is a product of the data it is trained on: if you put rubbish in, you get rubbish out. Therefore, we need to be mindful of what we feed these algorithms and how that might shape the results they produce. For example, we have previously seen issues with the use of facial recognition in law enforcement, where it resulted in racial profiling. The algorithms behind these tools were trained on a biased dataset that taught the AI that individuals of certain racial groups were more likely to be offenders, leading the technology to falsely and unfairly categorise innocent people as threats. These technologies may be well intentioned, but if trained improperly or not given proper oversight, they can become harmful.
Bias can be extremely harmful in people-focused business functions such as HR and marketing. Say, for example, you're looking to use AI for hiring and recruitment. You would train the algorithm on data about your current staff, and likely data from past successful employees. That information teaches the algorithm what a 'successful' candidate for your organisation looks like, which can be great for finding talent that may be a good culture fit. But consider this: if your organisation is predominantly male and white, you've just taught the algorithm to find more white male candidates. You may be overlooking candidates who would be perfect for the role simply because they do not match the criteria set by your data. That means fewer opportunities for groups who may already be underrepresented.
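To make the mechanism concrete, here is a minimal sketch in Python. It uses synthetic data and scikit-learn, and the features and numbers are illustrative assumptions of mine rather than any real hiring system: a screening model trained on skewed historical hires ends up weighting a demographic flag even though it carries no job-relevant signal.

```python
# Minimal, hypothetical sketch of bias leaking into a hiring model.
# Synthetic data only; feature names and weights are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)   # 1 = historically over-represented group
skill = rng.normal(0, 1, n)     # the genuinely job-relevant signal

# Biased labels: past 'successful hires' were drawn mostly from one group,
# so group membership leaks into the outcome independently of skill.
hired = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.2).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)
print("learned weights [group, skill]:", model.coef_[0])

# Two equally skilled candidates now get different scores purely
# because of the demographic flag.
for g in (0, 1):
    p = model.predict_proba([[g, 1.0]])[0, 1]
    print(f"group={g}, skill=1.0 -> P(recommend)={p:.2f}")
```

Note that simply deleting the demographic column is rarely enough in practice: other features correlated with it, such as postcode or schooling, can reintroduce the same bias.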
In marketing, an AI system trained on data skewed towards one race, gender, or socioeconomic group may not forecast or make judgements effectively for other groups, which could unwittingly result in prejudice. The products customers are recommended, the messaging they receive, and the experiences they are given could all be affected by bias. Should this happen, it may damage the brand's reputation, generate unfavourable press, and result in diminished sales and clientele.
This is a real problem right now, as AI is still widely unregulated, though government and trade bodies are working on that. But it is also a concern for the future, because if we don't start off on the right foot now, we will only make the problem worse down the road.
Ethical AI Behaviours
So, what can be done? If there is no legal guidance for AI, how can we ensure it does not cause harm to ourselves, our businesses, our stakeholders, and our society at large?
We as individuals have no say in how the AI tools themselves are created, but we do have power over how we use them. We also have a distinct advantage over technology: the ability to determine right from wrong. AI cannot make moral judgements the way that we can. It lacks the context and rationality that we as humans possess.
We saw a prime example of this play out recently with KFC in Germany. The company's marketing team trained an AI algorithm to monitor a calendar of events and holidays and send customers push offers tied to each occasion. No one considered that Kristallnacht, an event widely regarded as the start of the Holocaust, was included on that calendar. As a result, the bot sent customers a message telling them to celebrate Kristallnacht with cheesy chicken. This, of course, sparked outrage, and the brand was forced to apologise.
While it is easy to point the finger here and say that AI was in the wrong, that is not actually the case. AI is a specialised technology trained to complete the specific tasks it is created for. In the KFC incident, the AI performed its job exactly as intended: it followed the calendar it was trained on and sent the offer. This is not a failure of technology. It is a failure of human oversight.
That is the key to using AI ethically in an unregulated world. Humans need to remain in the loop and work in partnership with AI rather than leaving it to its own devices. AI is not perfect. It is not all-knowing and all-capable. Humans still have a part to play. It is on us to ensure that the decisions we make using technology are not harmful, and to be aware of the potential risks.
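To show what 'remaining in the loop' can look like in practice, here is a minimal sketch in Python. The pipeline, function names, and blocklist are hypothetical assumptions of mine, not KFC's actual system: automated drafts pass through a human gate that hard-blocks sensitive dates and queues everything else for sign-off before any customer sees it.

```python
# Minimal human-in-the-loop sketch for automated marketing sends.
# Hypothetical pipeline; event names and functions are illustrative.

# Dates an automated promotion must never reference, curated by people.
# A real blocklist would be far longer and regularly reviewed.
SENSITIVE_EVENTS = {"Kristallnacht"}

def propose_campaigns(calendar: list[str]) -> list[dict]:
    """The automated step: draft an offer for every calendar event."""
    return [{"event": e, "message": f"Celebrate {e} with our new offer!"}
            for e in calendar]

def human_gate(drafts: list[dict]) -> list[dict]:
    """The oversight step: drop blocked events, queue the rest for sign-off."""
    queued = []
    for draft in drafts:
        if draft["event"] in SENSITIVE_EVENTS:
            print(f"BLOCKED, never auto-send: {draft['event']}")
        else:
            # In a live system this would wait for a person to approve.
            print(f"Queued for human review: {draft['event']}")
            queued.append(draft)
    return queued

calendar = ["National Fried Chicken Day", "Kristallnacht"]
for draft in human_gate(propose_campaigns(calendar)):
    print("Ready to send after sign-off:", draft["message"])
```

The point is not the specific code but the shape of the process: the algorithm drafts, and a human decides what actually goes out.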
I have worked with many businesses to create their own ethical frameworks for their teams, and have trained many professionals on what to look out for. If you are interested in a training session, consulting, or booking a Keynote, get in touch with our team.