The Ethical AI Imperative for Corporations

Artificial Intelligence (AI) is changing lives. It is transforming diagnosis and treatment throughout healthcare, improving patient outcomes. It is accelerating drug discovery, has the potential to drastically improve road safety and, through robotics, is unleashing new manufacturing productivity and quality. But the speed with which emerging technologies such as ChatGPT have been adopted by individuals is raising the ethical and political implications of AI adoption to the very top of the agenda.

For all of the benefits that AI can undoubtedly offer, if algorithms do not adhere to ethical guidelines, is it safe to rely on and use the outputs? If the results are not ethical, or if a business has no way to ascertain whether they are, where is the trust? Where is the value? And how big is the risk?

Ethical AI is ‘artificial intelligence that adheres to well-defined ethical guidelines regarding fundamental values, including individual rights, privacy, non-discrimination, and non-manipulation.’ With organisations poised on the cusp of an enormous step change, Peter Ruffley, CEO at Zizo, explores the ethical issues affecting the corporate adoption of AI, the importance of trust and the need for robust data sets that support robust bias checking.

Pandora’s Box

Calls from technology leaders for the industry to hold fire on the development of AI are too late. Pandora’s Box is wide open and, with the arrival of ChatGPT, anyone and everyone is now playing with AI – and individual employee adoption is outstripping businesses’ ability to respond. Today, managers often have no idea whether employees are using AI, and no way to tell if work has been carried out by an individual or by technology. And with some employees now claiming to use these tools to hold down multiple full-time jobs, because tasks such as content creation and coding can be completed in half the time, companies need to get a handle on AI policies fast.

Setting aside for now the ethical issues raised by individuals potentially defrauding their employer by failing to dedicate their time to the full-time role, the current ChatGPT output may not pose a huge risk. Chatbot-created emails and marketing copy should still be subject to the same levels of rigour and approval as manually produced content.

But this is the tip of a fast-expanding iceberg. These tools are developing at a phenomenal pace, creating new, unconsidered risks every day. It is possible to get a chatbot to write Excel formulas, for example, but with no way to demonstrate what rules have been applied or which data has been changed, can that data be trusted? With employees tending to hide their use of AI from employers, corporations are completely blind to this fast-evolving business risk. And this is just the start. What happens when an engineer asks ChatGPT to compile a list of safety tasks? Or a lawyer uses the tool to check case law before providing a client opinion? The potential for disaster is unlimited.

Recognise Risk

ChatGPT is just one side of the corporate AI story. Businesses are also rapidly embracing the power of AI and Machine Learning (ML) to accelerate automation in areas such as health and insurance. As a result, 22.7% of UK businesses are expected to have adopted at least one AI technology by 2025, rising to around a third by 2040, according to research commissioned by the Department for Digital, Culture, Media and Sport (DCMS).

These technologies are hugely exciting. From healthcare to education, fraud prevention to autonomous vehicles, AI and deep learning solutions have demonstrated outstanding recognition and prediction performance, especially for visual recognition and sequential data analysis tasks.

But – and it is a huge but – can businesses trust these decisions when there is no way to understand how the AI drew its conclusions? Where are the rigorous checks for accuracy, bias, privacy and reliability? If AI is to realise its potential, tools must be robust, safe, resilient to attack and, critically, provide some form of audit trail to demonstrate how conclusions have been reached and decisions made.

Trust Requires Proof

Without this ability to ‘show your workings’, companies face a legal and corporate social responsibility (CSR) nightmare. What happens if the algorithms are shown to operate counter to the organisation’s diversity, equality and inclusivity (DEI) strategy, with bias and discrimination embedded in decision-making as a result?

The Cambridge Analytica scandal highlighted the urgent need for AI-related regulation, yet the power of AI has since continued its frenetic evolution without any robust regulatory or governance steps being put in place.

Rather than calling for an unachievable slow-down in AI development, it is now imperative that data experts come together to mitigate the risks, enable the effective, trusted use of these technologies, and develop the technology that supports the safe and ethical operational use of AI. This can only be achieved if both the data being used and the output of the AI and ML activity are supported by appropriate data governance and data quality procedures, including the use of accurate, accessible data sets to check AI output for bias.
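By way of illustration only (a minimal, hypothetical sketch rather than any specific product or standard), such a bias check might compare an AI tool’s decisions across groups in a trusted reference data set and raise an alert when the disparity exceeds an agreed threshold:

```python
from collections import defaultdict

def approval_rates_by_group(records, group_key="group", decision_key="approved"):
    """Compute the share of positive decisions per group in a reference data set."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[decision_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def bias_alert(rates, max_gap=0.1):
    """Flag the output if the gap between the best- and worst-treated group exceeds a threshold."""
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "alert": gap > max_gap}

# Example: decisions produced by an AI tool, joined to known group labels
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]
print(bias_alert(approval_rates_by_group(decisions)))
```

A real governance framework would of course use far richer fairness measures, but even a check this simple makes bias visible rather than buried in the model.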

Collaborative Approach

In practice this requires the development of trustable components throughout the entire AI production pipeline, providing the essential transparency that enables a business to understand how the AI reached its conclusions, what sources were used and why. Clearly, such ‘AI checking’ technology must also be inherently usable: a simple data governance and risk monitoring framework that both raises alerts when bias, discrimination or the use of questionable source data is exposed, and allows the AI’s entire process to be reviewed if required.
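As a purely hypothetical sketch of what such an audit trail could look like (the field names and file format below are illustrative assumptions, not an established standard), each AI decision might be logged alongside the model, sources and inputs it was based on, so the process can be reviewed later:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(log_file, model_id, input_record, output, sources, flags=None):
    """Append one auditable entry per AI decision: what was asked, of which model,
    using which sources, and what came back."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        # Hash the input so the exact request can be matched later without storing sensitive data
        "input_hash": hashlib.sha256(json.dumps(input_record, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "sources": sources,    # data sets / documents the model was allowed to draw on
        "flags": flags or [],  # e.g. ["bias_check_failed", "unverified_source"]
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record a credit-style decision so it can be reviewed if challenged
log_ai_decision(
    "ai_decisions.jsonl",
    model_id="risk-model-v3",
    input_record={"customer_id": 42, "income": 38000},
    output={"decision": "declined", "score": 0.31},
    sources=["applications_2024.csv", "credit_bureau_extract"],
)
```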

The creation of a simple tool that can bridge the gap between a company’s domain experts and its AI experts will make it easier to understand and trust the AI system, giving companies the confidence to embrace AI and trust its output.

Furthermore, there is a global need for data collaboration and data sharing – both within and between organisations – to expand the data available and add more context and accuracy to the morass of Internet-only information. This collaboration will be a vital part of the process of countering AI-generated bias and discrimination which, together with AI ‘explainability’, will create a trusted view of the world in which AI can deliver the tangible business value that organisations currently seek.

Conclusion

These changes must, of course, take place while AI continues its extraordinary pace of innovation. Therefore, while collaboration and technology that delivers AI trust are on the agenda, the next few years will not be without risk. Potentially large-scale corporate failure, caused by the mismanagement of AI usage at both individual employee and corporate level, is almost inevitable.

As such, it is now imperative that organisations escalate the creation of robust strategies to safely manage AI adoption and usage, with a strong focus on the implications for CSR and corporate risk. While some organisations will, therefore, not progress as fast as others that rush headlong towards AI and ML, by taking an ethical approach to AI they will be safeguarding their stakeholders and, quite possibly, protecting the future of the business in the process.
