The new digital giants


Only a few years ago, the former Australian Competition and Consumer Commission (ACCC) Chair, Rod Sims, released the ground-breaking Digital Platforms Inquiry Report (2019). Two years later, one of the report's major recommendations, the News Media Bargaining Code, was enacted. Having identified anti-competitive practices by dominant digital players in control of search and online advertising, the ACCC designed Australia's world-first code to redistribute revenue earned by Google and Facebook to media outlets, primarily as a means of funding local journalism and preserving the media's role as the Fourth Estate in our democracy.

A mere two years on from the code's enactment, technology's tectonic plates have shifted again, this time driven by the developers of artificial intelligence (AI) and prompting worldwide calls for AI-specific regulation.

The rise of artificial intelligence

OpenAI's GPT-4-powered ChatGPT, backed by Microsoft and now being integrated across Microsoft's products, together with Bard from Alphabet Inc (Google's parent company), have transformed 'search', with large language model (LLM) artificial intelligence trained on masses of data and capable of producing 'results' with human-like synthesis. Alphabet's CEO, Sundar Pichai, recently described AI as "the most profound technology humanity is working on today", predicting that "it will touch every industry and aspect of life".

So, have the digital giants targeted in the Digital Platforms Inquiry Report been superseded in the AI era? If so, is this a new form of domination in need of new controls, is it a dominance for good, or is it a dominance that will again divert revenue away from domestic markets and towards more powerful super-jurisdictions, eroding our own tax base? If we have entered a new age of digital goliaths, what is the conduct to be cured and what sort of regulation can we expect to see?

The late Stephen Hawking said of artificial intelligence in 2014: "The development of full artificial intelligence could spell the end of the human race … It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." A contrarian view is that we are entering an age in which we can fast-track menial and meaningless tasks and get on with curing everything from cancer to climate change, thus prolonging our human existence.

Impact on the legal sector

In the legal sector, many leading law firms have said "either adopt AI or be left behind", with various legal partnerships using Harvey (in which OpenAI and Alphabet have invested) and CoCounsel (powered by OpenAI), making junior lawyers and law students alike nervous about the future of the legal profession.

However, before the full consequences of AI can be appreciated and effective regulation formulated, it is important to define the key terms of reference. Many of us either assume we can grasp technical matters in a few sentences, or we are simply too embarrassed to admit ignorance and so go along with the discourse, never stopping to truly understand. Boards, CEOs and other decision-makers considering the adoption of AI, however, can no longer afford to skate over the detail, given the wide-reaching embrace of AI in business, legal practice and everyday life.

Terminology

So, what is artificial intelligence (AI)?

AI is a branch of computer science. It is not a branch of the arts, humanities, medicine or mathematics. It is a science in which systems continuously learn in order to produce 'outputs' and act autonomously in accordance with certain rules.

There are three commonly described categories of AI, only one of which is in operation today: Narrow AI, General AI and Super AI.

  • Narrow AI (NAI) refers to systems that currently exist and that have been trained to carry out specific tasks.
  • General AI (GAI) is, for now, theoretical: a system that would mimic complex human thought processes and teach itself from its operating environment.
  • Super AI (SAI) is a hypothetical system that would be self-aware, surpass human intelligence and out-perform humans in everyday tasks. It would also excel in all areas of science, medicine and law.

Many experts are apprehensive about the development of GAI and SAI for their potential threat to humanity, but recently even NAI has come to be seen as extremely dangerous in the wrong hands, prompting calls to pause the wider deployment of AI.

In some ways, we seem to be in familiar territory characterised by the adage 'it's not guns that kill people, it's people that kill people'. Put another way, 'it's not AI that kills people, it's people who carry out AI's instructions that kill people'. The critical question seems to be: should those 'instructions' or should that 'knowledge' be so readily available to anyone, including bad actors?

Current AI applications

ChatGPT

ChatGPT is a large language model AI designed by US company OpenAI, now in partnership with Microsoft. It is trained on large amounts of data up to a cut-off date. It is an example of NAI, designed to provide a response, or output, to an input.

It works by learning patterns from masses of data to identify relationships between words and phrases. ChatGPT processes the input and, through the design of its algorithm, responds with words that are likely to be associated with the inputted words. In other words, it essentially formulates answers based on word-association statistics, or language probabilities, rather than intellect, reasoning or understanding. It is predictive text on a large scale.
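
To make the 'predictive text on a large scale' idea concrete, the toy sketch below (in Python, and purely illustrative; it is nothing like ChatGPT's actual scale or architecture) predicts the next word using nothing more than word-pair statistics gathered from a small sample of text:

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a sample text,
    # then suggest the statistically most likely continuation.
    sample = "the court found the claim failed because the claim was out of time"
    words = sample.split()

    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the word most often observed after `word`, if any."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))    # 'claim' (seen twice after 'the'; 'court' only once)
    print(predict_next("court"))  # 'found'

Scaled up by many orders of magnitude, and with far more sophisticated statistics, this is the sense in which an LLM 'predicts' its answers rather than understands them.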

The GPT acronym in ChatGPT refers to Generative Pre-trained Transformer.

  • Pre-training is generally broken into two components: supervised and unsupervised. 'Supervised' means the model is trained on a labelled dataset, with each input associated with a corresponding output. 'Unsupervised' means the model is trained on data where there is no specific output associated with each input. Through unsupervised training, the model learns the syntax and semantics of natural language, which gives the system its appearance of near-limitless knowledge.
  • Transformer refers to the architecture used to process the data, designed to be similar to the way our brains process language. It processes sequences of words and weighs the importance of different words when making predictions; the transformer looks at the words in the sequence to compute context and the relationships between them (a minimal sketch of this 'weighing', known as attention, follows this list).
  • Generative simply means that it generates results (rather than original or creative content) from the data on which it has been trained, using the transformer architecture to weigh the importance of different parts of the text in order to compute the 'most likely' next words.
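
The 'weighing' step the transformer performs is commonly implemented as what researchers call scaled dot-product attention. The sketch below, in Python with NumPy, is a stripped-down illustration only; real models stack many such layers with billions of learned parameters, all of which are omitted here:

    import numpy as np

    def attention(Q, K, V):
        """Minimal scaled dot-product attention.

        Q, K and V are (sequence_length, dimension) matrices. Each output row is a
        weighted blend of the rows of V, where the weights reflect how strongly each
        position in the sequence 'attends to' every other position.
        """
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)   # similarity of every word with every other word
        scores = scores - scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
        return weights @ V                # mix the values according to those weights

    # Three 'words', each represented by a 4-dimensional vector (random, for illustration only).
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    print(attention(x, x, x))

In a real transformer, these learned weights are what allow the model to work out, for example, which earlier word a pronoun refers to before predicting the next word.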

Bard

Bard is Alphabet Inc's version of ChatGPT. It is also an LLM but differs from ChatGPT in that it continuously pulls information from the internet. Rather than relying solely on a fixed pre-training dataset, Bard can draw on the internet to respond to a prompt, in a more conversational way than typical 'search' results.

Bard is considered a far more powerful language model than ChatGPT because its dataset does not have a cut-off date. However, this does not make Bard more accurate. It simply means Bard's information is more recent; its training data consists of Common Crawl data, Wikipedia, Q&A websites, tutorials, English and non-English web documents and dialogue from public forums.

Use cases

Businesses are already using AI systems like ChatGPT to, for example, write letters, handle basic customer complaints, draft marketing copy or posts for social media, detect fraud, summarise long documents, and identify common clauses in multiple contracts.

ChatGPT allows users to input an email or an article they have written, with a prompt to re-write the content. Users can even request ChatGPT to change the tone of the writing to suit a particular audience.
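
For businesses that want to build this kind of re-writing into their own systems rather than using the chat interface, the same request can be made through OpenAI's API. The snippet below is an illustrative sketch only: the model name, prompt wording and email content are placeholders, and current details should always be checked against OpenAI's documentation:

    from openai import OpenAI  # official OpenAI Python library

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    draft_email = "Hi, the contract you sent is wrong and we won't be signing it."

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model your account has access to
        messages=[
            {"role": "system",
             "content": "Rewrite the user's email in a courteous, professional tone "
                        "suitable for sending to a commercial counterparty."},
            {"role": "user", "content": draft_email},
        ],
    )

    print(response.choices[0].message.content)

The privacy and confidentiality cautions discussed below apply with equal force when content is sent to these services programmatically.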

Concerns

ChatGPT and Bard are proficient at constructing sentences that mostly make sense. However, their accuracy can be questionable. Both systems have been criticised for their "hallucinations", or "hallu-citations", where they produce a sequence of words that appears to make sense but is completely false and misleading, as the US lawyer who in 2023 filed court submissions citing fake case law discovered.

This propensity for falsity and an inability to decipher right from wrong is concerning (albeit not dissimilar to some humans). It is also among the reasons thought leaders are calling for a pause on AI's development, given its unknown potential risks, including existential risks for humanity. The more obvious risks in the here and now, however, include copyright, privacy and a range of ethical concerns that are particularly relevant to the legal profession.

Copyright

The large datasets on which chatbots are trained give rise to legitimate concerns about AI's potential to infringe people's copyright, particularly given the lack of transparency about the sources used to train the AI and generate responses. It may be that copyright material is being used without a licence, without a royalty payment and without attribution. OpenAI has openly stated that if it were forced to pay for the copyright material on which it trains its systems, it would go out of business.

Using chatbots at work may also prevent businesses from generating their own copyright material if there is insufficient human involvement in its creation. For businesses that rely on revenue from IP ownership, adopting AI to create so-called IP might sound the death knell for IP-derived royalties.

Privacy

Due to the sudden rise in popularity of various AI systems, users are sharing vast amounts of personal data without properly understanding the associated risks. OpenAI stores users' data and, in its terms and conditions, states that the data will be used to conduct research and improve its services. Users are understood to be providing these systems with personal information, such as job history and medical details, to help write resumes or obtain general medical advice, without fully understanding how that information may be used. Australia's privacy laws purposefully limit how much personal information can legitimately be collected, and proposed new principles around fair and reasonable collection and use may tighten those restrictions, irrespective of consent.

Ethics

Prominent AI pioneer Geoffrey Hinton left Google over concerns about the potential dangers of AI development and the risk of bad actors misusing AI chatbots against humans. His resignation has many experts concerned about the rapid growth of AI technologies and calling for government intervention to provide adequate guardrails on its development.

Many other leaders in the AI space, including Elon Musk, have also urged a pause on its development, or at least its deployment, for fear that it may cause more harm than good. Experts have said that, without proper governance, we are at risk of "corporate irresponsibility", particularly in the spreading of misinformation with speed and efficiency.

A challenging proposition for the legal profession is the possibility that AI is the perfect solution for a post-truth world. For the rule of law, and for a legal profession that has the concept of truth at its heart, the implications are indeed existential. In this sense, a critical threshold question is whether AI (in its current form) is compatible with the legal profession and our legal system.

The legal system has given courts, judges and juries the power and the methods to be the arbiters of truth: to determine on the balance of probabilities whether someone is liable in negligence, or whether it is beyond reasonable doubt that someone is guilty of murder. At what point do lawyers and lawmakers, in practice, abdicate our responsibility and our capability to make decisions for the sake of efficiency? These are fundamental questions for lawyers in the current age of AI adoption. Other examples that lawyers must grapple with are found in the professional rules of conduct: competence, confidentiality and independence.

  • Competence is addressed in rule 4 of the Legal Profession Uniform Law Australian Solicitors' Conduct Rules, which stipulates that a solicitor must deliver legal services competently, diligently and as promptly as reasonably possible. There is an argument that lawyers have a duty to be competent not only in the law but also in the technology used to deliver legal advice. However, if the underlying technology produces information that is erroneous or false, and no process is in place to verify the output, competence very quickly falls away, both in terms of the law and in the use of technology to diligently and promptly provide legal services.
  • Confidentiality, enshrined in rule 9, prohibits lawyers from disclosing information that is confidential to a client and acquired by the solicitor during the client's engagement. As technology has become more widespread, through mobile phones, cloud storage and public wi-fi, lawyers have had to be far more vigilant about preserving client confidentiality. So too with AI: for a system to draft a letter of advice or review material, it needs access to the underlying information, which creates several potential risks to clients' confidentiality. Once confidentiality is lost, it cannot be regained. The same can be said of trust, on which lawyers as 'trusted legal advisers' depend for their existence.
  • Independence is dealt with in rule 4.1.4, which requires lawyers' judgment to be free from external pressure. Typically, this is viewed in the context of other lawyers. However, in the context of AI, lawyers who outsource the majority of their work to, and are over-reliant on, AI programs may not be exercising independent professional judgement.

While the transformative potential of AI remains exciting and exhilarating, even for an ancient legal profession (no lawyer ever wants to be Dennis Denuto from Australian cinema classic The Castle and type their own dictation or draft their own contribution-fault clause completely from scratch), it is clear that governance is essential. This is why there has been an acceleration of government action and initiatives worldwide on AI.

In June 2023, the Australian Government released its discussion paper on Safe and Responsible AI in Australia and the European Parliament adopted its negotiating position on the proposed AI Act, while the White House had earlier published its Blueprint for an AI Bill of Rights.

Given the ACCC's history of taking on the digital giants, it is not beyond the realms of possibility for Australia to be on the cusp of another world-first, perhaps in the form of a new AI bargaining code. Such a code might not only see the new digital goliaths pay for the content on which their AI is trained but ensure that accountability for output lies with those empowered to control it.

While the final regulatory evolution is still unfolding, if there is one certainty in Australia, it is that the current ACCC Chair, Gina Cass-Gottlieb, has one very tough act to follow.

This article has been adapted from a paper presented by Lisa Fitzgerald at ChilliIQ events in Melbourne and Sydney in 2023, entitled 2030: The Future of Technology and the Legal Profession.
