The Evolution of Artificial Intelligence within Corporate Governance – An Overview

Author: Shaun Johnson

Historically, corporate governance (the system of rules and practices that direct and control a company) has been a challenge for multinational corporations seeking to expand whilst maintaining durable controls over commercial activity. The pursuit of shareholder value has at times led boards to fail in their oversight of unethical practices, with consequent bankruptcy and liquidation in cases such as Enron (link) and Lehman Brothers (link).

Against this backdrop, the rapid introduction of Artificial Intelligence (“AI”) into the global economy has renewed focus on the corporate governance of large companies seeking to utilise its benefits. AI’s role in governance includes exciting opportunities for potential use as a non-voting board observer or adviser as regulatory frameworks evolve. It also introduces unprecedented degrees of efficiency in the compliance functions of global business.

However, the relatively unregulated field of AI poses risk for many commercial entities, with new legislation being introduced across many jurisdictions to prevent misuse. AI integration will continue to be a high-risk area, with the Legal Tech Society (“LTS”) at King’s College London seeking to help students understand this evolving issue for companies across the world.

AI as a board member or observer

The growing synthesis of predictive and generative AI has unlocked new opportunities for AI to be used as part of board decision-making. In many jurisdictions, a company director must be a natural person (a human), with some also permitting a legal person (a commercial entity) to serve. AI falls into neither category – however, recent developments mean that AI is now actively involved in boards in a different capacity.

For example, the Real Estate Institute of New South Wales (“REINSW”) appointed an AI chatbot, named ‘Alice Ing’, to bolster its board. This was the first appointment of AI as a board adviser in Australia, marking a notable shift towards the use of technology in the jurisdiction. REINSW members have claimed that Alice Ing possesses an IQ of 155, alongside detailed knowledge of the real estate market to help the board of REINSW make informed decisions for its investors (link).

This use of AI is far from insignificant when considering the footprint of companies actively using AI as a board observer. International Holding Company (“IHC”) is one of the largest holding companies in the world, listed on the Abu Dhabi Securities Exchange and possessing a diverse portfolio of investments. Like REINSW, IHC has recently ventured into using AI as a board observer through Aiden Insight. The role of Aiden Insight, like that of Alice Ing, includes real-time analysis of both internal and external data to produce strategic recommendations for IHC’s board members (link).

Since the introduction of Aiden Insight, IHC has reported strong financial performance and has outpaced its regional peers in the Middle East. IHC is reported to have achieved AED 54.7 billion in revenue through the first two economic quarters of 2025 (link), following a reported market capitalisation of AED 899.4 billion in November 2024 (link). Chief Executive Officer of IHC, Syed Basar Shueb, highlighted the dynamic capabilities of the AI in providing insightful recommendations for IHC (link).

IHC has sought to invest in the development of Aiden Insight as part of commercial strategy, with the second version of the technology launched in May 2025. This successful integration of Aiden Insight has been a notable sign of AI’s credibility in this capacity. Specifically, the second version now introduces a voice interaction feature alongside enhanced portfolio analytics capabilities through a live newsroom (link).

AI in compliance monitoring

Another area where AI has been transformational is monitoring employee compliance with the complex regulatory frameworks which multinational corporations must navigate. Historic failures in corporate governance have often been due to inadequate controls.

For example, Wells Fargo faced a damaging scandal in which employees created unauthorised accounts to meet internal sales targets. When uncovered, the corporate governance failures were extensive – an internal report produced by Wells Fargo alongside Shearman & Sterling revealed that departments including Legal, Audit and HR failed to actively tackle the issue despite internal warnings (link). The board remained unaware of the fraudulent practices (which commenced in 2002) until 2014, when external news stories revealed them.

Compliance monitoring is an area where AI could help address these issues and prevent board failures of oversight across different business departments. For example, AI could be used to monitor account creation and transactions to flag unusual patterns, strengthening anti-money laundering and counter-terrorist financing frameworks. This remains a key function for all businesses and is especially relevant for sectors exposed to heightened fraud risk.
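To illustrate the idea, the sketch below shows a minimal rule-based transaction monitor in Python. It is purely hypothetical: the thresholds, the list of high-risk jurisdictions, and the red-flag rules are illustrative assumptions, not drawn from any real AML system or regulatory guidance. Production systems would combine rules like these with statistical or machine-learning models trained on historic activity.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str  # ISO-style country code of the counterparty

# Hypothetical values, for illustration only
HIGH_RISK_COUNTRIES = {"XX", "YY"}
REPORTING_THRESHOLD = 10_000.0

def flag_unusual(transactions):
    """Return (transaction, reasons) pairs matching simple AML-style red flags."""
    flagged = []
    for tx in transactions:
        reasons = []
        if tx.amount >= REPORTING_THRESHOLD:
            reasons.append("large amount")
        if tx.country in HIGH_RISK_COUNTRIES:
            reasons.append("high-risk jurisdiction")
        # 'Structuring': payments kept just below the reporting threshold
        if 0.9 * REPORTING_THRESHOLD <= tx.amount < REPORTING_THRESHOLD:
            reasons.append("possible structuring")
        if reasons:
            flagged.append((tx, reasons))
    return flagged
```

The point of the sketch is governance, not sophistication: each flag carries a human-readable reason, so the output can feed a compliance officer's review queue rather than triggering automated action.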

Many financial institutions have sought to develop AI for maximising efficiency in compliance monitoring for fraud – JPMorgan's use of COiN serves as an example of a market-leading investment bank using AI to improve this function. The software analyses key clauses in financial contracts such as credit agreements and derivatives, with the JPMorgan tech blog testifying that the AI can review 12,000 documents in seconds whilst manual review would take weeks (link). Such efficiency is a key area for reducing operational costs in an increasingly competitive sector of the economy.

Litigation risks for AI use

Despite the strategic use of AI producing impressive results for many commercial companies, there have been cases of AI hallucinations leading to costly litigation. Risk management in this area is key for strong corporate governance controls, as the breadth of AI use requires meticulous planning from boards to ensure that risk is appropriately dealt with.

For example, AI-generated reports may contain hallucinations which are subsequently relied on by boards for decisions on commercial strategy. It is therefore crucial to have corporate governance frameworks which emphasise a ‘human-in-the-loop’ approach, so that litigation risk is managed by trusted and experienced professionals.
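A human-in-the-loop control can be made concrete as a simple gate: no downstream system may act on an AI output until a named person has signed it off, and the sign-off itself is recorded. The Python sketch below is an illustrative assumption about how such a gate might look; the class and function names are invented for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    """An AI-generated suggestion awaiting human review."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

def sign_off(rec: Recommendation, reviewer: str, accept: bool) -> Recommendation:
    # Record who reviewed the output and when, creating an audit trail
    rec.approved = accept
    rec.reviewer = reviewer
    rec.reviewed_at = datetime.now(timezone.utc)
    return rec

def act_on(rec: Recommendation) -> str:
    # Downstream systems refuse to act without explicit human approval
    if not rec.approved:
        raise PermissionError(f"No human sign-off for: {rec.text!r}")
    return f"Actioned: {rec.text} (approved by {rec.reviewer})"
```

The design choice worth noting is that the audit trail (reviewer and timestamp) is captured at the moment of approval, which is precisely the evidence a board would need if an AI-assisted decision were later litigated.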

Litigation risks span multiple areas, including misrepresentation, data protection breaches, and defamation, all of which can be amplified by AI-generated outputs such as hallucinations. These risks could manifest through an increased use of class actions, particularly in the United States, where mass tort claims remain a dominant form of consumer redress. Therefore, corporations may seek to address this risk as part of drafting an effective corporate governance framework.

Concluding note

LTS is excited to engage students on this particular area of risk management, as corporate governance is continuously evolving in response to major economic developments and crises. The global economic crisis of 2008 prompted a revolutionary shift in global standards for corporate governance. The increased use of AI may lead to a similar revolution in the way in which governments address standards for corporate governance. This is particularly acute in the context of listed companies whose directors owe stringent fiduciary duties, the breach of which may impact millions of retail shareholders.

Importantly, LTS values the engagement of aspiring lawyers due to their role in preventing the misuse of AI. This is pertinent since law firms have previously been pursued for poor controls over AI use. Since students are the future legal advisers to many corporations, LTS hopes to encourage them to continue learning about the appropriate use of AI tools so that they can maximise their success in their future legal careers.
