Lander & Rogers

AI Governance: what directors need to know

The rise of generative AI systems, such as ChatGPT and Copilot, has accelerated the exploration and use of AI systems by businesses, and many will continue to explore AI use cases to transform work practices. Sound governance oversight of an organisation's development and adoption of AI systems is critical to successful implementation, and directors play a key oversight role in driving their organisation's responsible, trustworthy and safe use of AI systems.

Here are five key issues directors need to know.

The board of directors must set AI strategy

The board of directors must set their organisation's AI strategy and ensure it is compatible with the organisation's overall strategic direction. An AI strategy acts as a roadmap, providing the direction staff need to confidently trial, test and implement AI systems. A lack of direction leads to wasted time trialling new AI systems, poor adoption and limited return on investment.

AI expertise is a must, not a nice-to-have

AI systems are different to non-AI systems. Consequently, organisations must have dedicated AI specialists at all levels to assess, develop and deploy AI systems responsibly and safely. Directors must have a fundamental understanding of AI to properly oversee their organisation's deployment of AI systems and AI-related risks, and to discharge their directors' duties. Read more on AI and directors' duties.

Enterprise risk management must consider AI-specific risks

Directors have a duty to oversee the management and mitigation of organisational risk, including setting their organisation's AI risk tolerance. An organisation's risk management system must consider AI-specific risks. The board of directors must ensure their organisation has a culture of risk management and systems that properly identify and manage AI risks.

The AI regulatory landscape must be continuously monitored

Currently, Australia does not have AI-specific laws in place; however, the regulatory landscape continues to evolve. To date, the Australian Government has signalled an intention to regulate only where necessary and to take a "regulation lite" approach. Nevertheless, regulators are closely monitoring AI use in their respective areas of regulatory responsibility. For now, we expect AI development and use to be regulated under existing laws and enforced by regulators within their sphere of responsibility.

Employees are an organisation's most valuable asset

AI technology has transformational capability yet can struggle with basic tasks. Current AI system capabilities can support staff with existing tasks rather than replace staff completely. Consequently, the importance of an organisation's employees cannot be overstated. Humans continue to play an essential role in ensuring the safe and accountable use of AI systems, and human review of AI outputs should be embedded at key points in any workflow that uses a generative AI system or tool.

All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.