Cage against the machine – The law and AI
In the latest of our TechQuake series, we explore how emerging regulation will tackle ethical dilemmas as artificial intelligence spreads across borders and industries
Hollywood has colourful notions about artificial intelligence (AI). The popular image is a future where robot armies spontaneously turn to malevolence, pitching humanity into a battle against extinction. Among scholars and technologists focusing on the sector, such visions are derisively dubbed 'the Terminator scenario'. In reality, the risks posed by AI today are more insidious and harder to unpick, often a by-product of the technology's seemingly endless application in modern society rather than the possibility that computers will one day seize control of our lives. But given the scope of its potential, it is unsurprising that AI generates so much debate, not least over how to build regulatory safeguards that ensure we master the technology rather than surrender control to systems and erode the accountability on which society rests.
At the same time, the AI landscape is shifting rapidly thanks to technological advancements. The most recent development arresting attention is generative AI. Capable of creating new content rather than just delivering insights into existing data, it is widely expected to transform how business is done. Meta has announced its Make-A-Video tool, which turns textual prompts into short video clips, while OpenAI's DALL-E 2, Google's Imagen and start-up Midjourney can all generate original images, with headline-grabbing results. And with tech giant Microsoft announcing this year that it will make a further investment in ChatGPT-maker OpenAI, in a multiyear deal reported to be worth up to $10 billion, the race to secure stakes in potent AI platforms is certainly heating up.
ETHICAL DILEMMAS
As the application of AI-style tools spreads rapidly across industries, concerns have inevitably been raised about such systems' ability to detrimentally – and unpredictably – affect someone's fortunes. "There's an increasing appreciation among regulators and businesses about the potential human rights impacts of AI and related technologies," notes Hong Kong-based Herbert Smith Freehills (HSF) partner Antony Crockett, who specialises in complex disputes and the interplay between business and human rights. "It's a growing awareness that is helping identify the risks, but we haven't yet moved into a period where there's consensus about what to do about those risks." These risks are wide-ranging and hard to foresee. Often, the same tools used for benign purposes can be deployed with malign intent. Facial recognition may be the most illustrative example: the same technology that applies humorous filters on Snapchat or Instagram can be used by oppressive regimes to restrict citizens' rights, and result in individuals being imprisoned for crimes they did not commit. In short, the risks stem not only from the technology, but from its application.
A prominent example is the Australian Government's Robodebt scheme, which used an unsophisticated AI algorithm that automatically, and in a significant number of cases erroneously, sent debt notices to welfare recipients who it determined had received overpayments. Intended as a cost-saving exercise, the scheme's persistent attempts to recover debts that were not owed, or were incorrectly calculated, led many to raise concerns over its impact on the physical and mental health of debt notice recipients. Far from achieving its goal, the programme was scrapped in 2020, with the government since paying billions to refund wrongly issued debts and settle class-action lawsuits. The government now faces a fresh public inquiry into the scheme. But the most significant impact may be the potential human cost. In the aftermath, global media asserted the programme was linked to a number of suicides, a point strenuously denied by government representatives. Further complicating matters is that AI systems tend to be 'black box' by design, making it harder to diagnose the root cause of issues. Models might conceal how input leads to output, or infer patterns that are unclear or incomprehensible to humans. This can make it very difficult to explain to individuals how or why an AI tool led to a certain outcome, let alone in terms that can be readily understood. Without this transparency, the ability to identify and challenge outcomes is diminished, and any route to redress effectively withdrawn.
FILLING THE GAP
An additional complication is that in many jurisdictions such risks are not addressed by a single AI-related law or regulation. They are instead subject to a patchwork of existing laws covering areas like employment, human rights, discrimination, data security and data privacy. While none of these specifically target AI, they can still be used to address its risks in the short to medium term. Take the European Union's (EU's) General Data Protection Regulation (GDPR). "GDPR contains provisions that are used to target AI, including Articles 13 to 15: the right to meaningful information on the logic involved in certain types of automated processing," says Fountain Court barrister Jacob Turner. "This has been interpreted as the right to an explanation of what AI does. Article 22 also covers the right not to be subject to automated decision making for decisions which have a legal or similarly significant effect. All of these provisions have been invoked in the last couple of years by parties unhappy with algorithmic decision making." But these laws are not, by themselves, enough. "There are a number of risks that fall outside of existing law and regulation," comments Anna Jaffe, Atlassian's Director of Regulatory Affairs and Ethics. "Ethics steps in to fill these gaps. While lawmakers wrestle with the far-reaching ramifications of AI, other groups such as industry bodies and regulators are driving the adoption of guidance, standards and frameworks. Some of these might become standard industry practice even without the enforcement of law."
One illustration is the US' National Institute of Standards and Technology's (NIST's) AI risk management framework. According to NIST, the framework is intended "for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems." And while this framework is at second draft stage, the International Organization for Standardization's joint technical committee for AI is similarly working to add another 25 standards to the 16 non-binding standards it has already published. The current focus of many of these initiatives surrounding the ethical use of AI is squarely on fairness. Bias is one particularly important element. The algorithms at the centre of AI decision making may not be human, but they can still absorb the prejudices that colour human judgement. As Sydney-based HSF technology solicitor Raymond Sun notes: "Take an AI tool that decides whether an individual is approved for a bank loan. If that tool is trained using data about past approvals carried out by humans whose decisions had been biased against a particular demographic, whether or not intentionally, then those biases will be imported into the training data. That, in turn, will result in the tool delivering unfair outcomes for individuals within that demographic, for example by unjustly rejecting their application for a loan." Policymakers in the EU are alive to this risk. The bloc's draft EU Artificial Intelligence Act (see box) addresses a range of issues around algorithmic bias, arguing technology should be developed to avoid repeating "historical patterns of discrimination" against minority groups, particularly in contexts such as recruitment and finance. It is expected many other jurisdictions will look to tackle this issue head-on in future AI laws, even if views on how to balance regulation and innovation differ widely.
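To make the mechanism Sun describes concrete, the short sketch below (in Python, assuming scikit-learn and NumPy are installed) trains a toy approval model on synthetic 'historical' decisions that penalised one demographic group. The feature names, figures and penalty are entirely hypothetical; the point is simply that a model fitted to biased decisions reproduces the bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)   # hypothetical applicant income ('000s)
group = rng.integers(0, 2, n)    # hypothetical demographic attribute (0 or 1)

# Synthetic historical approvals: largely income-driven, but past decisions
# also penalised group 1 regardless of income - the bias we want to expose.
approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two otherwise identical applicants, differing only in group membership:
applicants = np.array([[50.0, 0], [50.0, 1]])
print(model.predict_proba(applicants)[:, 1])  # approval probability is lower for group 1
```

Auditing for disparities of this kind, and excluding or mitigating proxy variables, is exactly the sort of diligence the frameworks discussed above are intended to encourage.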
OUT OF THE BOX
This dilemma goes to the heart of machine learning, algorithmic decision making, and almost all forms of advanced automation. These tools could end up enhancing and shaping much of our commercial future, so it is vital they move in step with values of fairness and strive to avoid entrenching bias. Jaffe summarises the moment: "It's an emerging space. Our current regulatory and legal frameworks for tackling AI are likely to change significantly in light of what's on the horizon. What we're seeing, though, is a patchwork of solutions that include laws and regulations in various jurisdictions, as well as guidance on how to responsibly implement new technologies that doesn't have the force of law. It's all fluid and changing quickly." In this third instalment of our TechQuake series, we look at how growing ethical concerns are driving the push towards greater regulation and ask what the future holds for a technology that is set to refashion the world.
The regulatory landscape is in flux as governments and industry bodies compete for leadership and ethical concerns grow. The EU has bolstered its reputation as a pioneer, but China and the US will have significant interventions to make as key legislation continues to come into force globally. The next five years will likely mark an implementation phase in which soft guidance morphs into hard law, potentially building on progress already made through the Organisation for Economic Cooperation and Development's AI principles and UNESCO Recommendation on the Ethics of AI. But many observers expect it to be much longer before something like a comprehensive global framework for AI emerges. "Many countries will still be in the experimental phase over the next five years and in that time period, there could be events which accelerate regulation," reflects Sun. "This is true of tech generally, as seen with recent crypto events which have brought it into sharp focus for regulators worldwide. Overall, I expect the next five years to be more experimental, and within 10 years we will start to see regulation that can stand the test of time."
ALREADY HERE
Given the dependency AI has on data, it comes as no surprise that the industries making the most use of the technology today are those already processing data in vast quantities. "These include finance, healthcare, retail, social media and marketing," says Sun. "Businesses in these industries typically collect huge volumes of customer or user data. AI allows them to analyse that data, generate actionable insights, and even make otherwise impossible connections and predictions more quickly and cheaply than before." With the amount of data generated in the world growing exponentially, it is only a matter of time before other industries follow suit. The Covid-19 pandemic accelerated adoption of AI tools and automation among businesses, with wild fluctuations in demand and chronic labour shortages driving up use in the manufacturing, retail and hospitality industries, in particular. This growth is not limited to the private sector, either. Governments are increasingly using these tools to address challenges in delivering public and other services, from identifying tax evasion to prioritising the allocation of welfare assistance.
There is no single definition of AI. The term is often used to describe technologies capable of carrying out a broad range of data-driven tasks, from relatively simple analysis tools to complex neural networks that mimic the human brain. However, despite the tendency to label every computer programme processing data as AI, some argue the definition only extends to machine learning systems that are trained to learn from data and make predictions without being explicitly programmed to do so. Though humans excel at making sense of information across multiple domains, AI is already incomparably better when it comes to narrower applications, and its capability is growing at a startling pace. This has driven a marked increase in the number of businesses using the technology, but in turn results in a paradox. As Jaffe notes: "AI is simultaneously more pervasive than people believe, yet frequently misattributed to tools that are, in fact, not AI in the technical sense."
As much as some in the industry will chafe at intrusive oversight from policymakers, as individuals' appreciation of the ethical implications of the technology expands alongside its application, it is hard to see how businesses can retain public confidence without robust and considered AI regulation in place. In the meantime, discrimination and bias will continue to command attention as the most immediate demonstrations of the risks of this technology being applied not only with ill intent, but with a simple lack of diligence around unintended consequences. But such factors are ultimately just pieces of a much larger puzzle. Industry, regulators and professional advisers face years of piecing together the full picture of legal and ethical risks if we want to remain the masters of this technology, and not the other way around.
THE RACE TO REGULATE
"There is no one right balance," says Sun. "It will depend on each country's economy, industry and values, among other factors." The activist approach taken by the EU to AI regulation, however, has led many to the view that they are leading the pack by some distance. In 2021, the bloc set out detailed plans that included banning uses of AI that violate individual’s fundamental rights and imposing strict rules on tools posing a “high risk" of harm to the rights or wellbeing of humans. The European Commission further supplemented the draft Act in September 2022, with proposals for an AI Liability Directive and revised Product Liability Directive that would streamline compensation claims where individuals suffer AI-related damage, including discrimination. "The EU is putting the rights of its citizens at its centre," says Turner. "It's philosophically different from the laissez-faire approach to technology and regulation that is more typically adopted in the US." 'However, while the EU may have staked out the clearest ground among policymakers, others are jostling for position. "China is competing for global leadership in AI regulation," adds Turner. "Since releasing its AI Development Plan in 2017, the government has talked about international discourse power – the Chinese equivalent of soft power. They aim to be the world leader in AI regulation by about 2025, having stolen a march with their algorithmic recommendation regulations." Responding to the emergence of generative AI systems, China passed a regulation in December 2022 to ban AI-generated images unless they include a watermark or other label clearly identifying their origins. Interestingly, there is some common ground between Chinese and EU legislators on how AI should be policed, with both targeting prescriptive regimes. But while the EU's approach anchors on a key central AI regulation, China is moving towards a network of regulations that each cover certain applications and sectors. Meanwhile, the UK is hoping to strike a deft balance between this more prescriptive stance and the lighter touch approach proposed by the US.
Yet a fraught period for policymaking in the post-Brexit era – with charged debates pitching deregulation against interventionist initiatives – is seen by some as making it hard to stake out such ground. "It's unclear where the UK wants to go," says Turner. "The desire to be a global leader in AI regulation hasn't really come through, partly due to the inherent tension between deregulating following Brexit and bringing other countries along with us by creating UK regulations." There are, however, some signs of the UK seeking global leadership. The Information Commissioner's Office (ICO) recently fined software business Clearview AI £7.5 million and issued an enforcement notice demanding that it comply with UK data protection law after the company scraped online images of individuals into a global database for its controversial facial recognition tool. Clearview has since launched an appeal, but the ICO's action sends a clear message: UK regulators are willing to use existing laws and frameworks to address the risks of AI where necessary. The case also underlines watchdogs' increasing focus on the use of even publicly available biometric data. "Whether or not using AI, discrimination is still illegal; breaching privacy is still illegal," notes Jaffe. "The issue may be less about a lack of applicable law, and more about a lack of enforcement action, driven in part by a knowledge gap when considering how those laws might apply in the context of AI." This adds to the challenge for those who wish to enshrine fairness as a guiding principle of our increasingly tech-reliant society. With the capability of AI technology expanding dramatically, policymakers will need to find a way to keep pace.
TYPES
ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is an umbrella term that describes a class of machines (especially computer systems) capable of performing tasks commonly thought to require human intelligence. There is no common definition of AI, but it is generally divided into two categories:
Narrow AI, which covers applications that perform a specific task and cannot perform other tasks beyond the purpose for which they are built. All AI applications today fall into this category, no matter how intelligent or powerful they seem.
General AI (also known as Artificial General Intelligence) refers to AI applications that can think and perform any task as well as a human. There are currently no general AI systems, but a large amount of ongoing research is focused on this field.
COMPONENTS
MACHINE LEARNING
Machine learning (ML) is a branch of AI in which systems detect patterns (ie, learn) directly from data and then apply this 'learning' to produce an output, without being explicitly programmed to do so. ML allows machines to continuously learn from data they ingest and self-improve the quality of their outputs. It can be implemented using various methods, including:
Supervised learning, where systems learn to map relationships between labelled data.
Unsupervised learning, where systems learn to detect relationships between unlabelled data.
Deep learning, where systems learn to recognise hidden patterns through 'neural networks' that mimic how the human brain operates.
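As a purely illustrative sketch of the supervised learning approach described above (assuming scikit-learn is installed), the snippet below fits a model to labelled synthetic data and then applies what it has 'learned' to unseen examples; no task-specific rules are written by hand.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labelled data: 500 examples, 6 numeric features, binary labels.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from labelled data
print("held-out accuracy:", model.score(X_test, y_test))         # apply the learning to new data
```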
DATA
The 'fuel'. Data is a collection of discrete values that convey information, such as facts and statistics. All types of AI system process data to find patterns or rules, and generate an output based on them. The type, format and volume of data needed depends on the purpose and capability of the AI system. Data exists in two forms:
Structured, which is data that exists in an organised way, for example in rows and columns, or that has been labelled.
Unstructured, which is data that exists in a disorganised and unsystematic way, for example the content of videos, audio and emails.
ALGORITHM
The 'engine'. Algorithms are a series of steps and instructions that a machine follows to complete a task. Like any machine or computer application, AI systems use a collection of algorithms to:
Interpret input data.
Find patterns or rules within that input data.
Generate an output based on those patterns or rules.
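By way of contrast with the machine learning sketches elsewhere in this piece, the example below is an explicitly programmed algorithm: the rule is written by hand rather than learned from data, but the same three stages, interpret the input, apply a pattern or rule, generate an output, are visible. The transaction figures and threshold are made up for illustration.

```python
transactions = [120.0, 45.5, 9_800.0, 60.0, 15_250.0]  # 1. input data (illustrative)

FLAG_THRESHOLD = 10_000.0  # 2. the hand-written rule the algorithm applies

def flag_large_transactions(amounts, threshold=FLAG_THRESHOLD):
    # 3. generate an output based on the rule
    return [amount for amount in amounts if amount > threshold]

print(flag_large_transactions(transactions))  # -> [15250.0]
```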
OUTPUT
The end product. The output of an AI system will depend on how it processes the input data. Types of output include:
Predictions, such as forecasting future stock prices based on historical examples.
Classifications, such as identifying objects in an image.
Content, such as AI-generated artwork.
DEEP LEARNING
Deep learning is a subset of machine learning that uses algorithms called artificial neural networks – webs of nodes that work together to find hidden patterns between variables and produce an output in a way that mimics the operation of the human brain. Deep learning networks can be designed to consider many variables, making them useful for dealing with large quantities of data, though precisely how a given network functions is rarely known. Networks include:
Convolutional neural networks, which are used in computer vision and for other visually intensive outputs.
Recurrent neural networks, which are used in speech recognition and stock market predictions.
Transformers, which are used in natural language processing.
Generative adversarial networks, which are used to create art.
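A minimal sketch of such a network, assuming the PyTorch library is available; the layer sizes, training data and labels are illustrative only. Note that once trained, the weights encode the learned pattern, but inspecting them does not readily explain why a given input produced a given output – the 'black box' issue discussed elsewhere in this piece.

```python
import torch
import torch.nn as nn

# Toy data: 100 samples with 4 input features, binary labels.
X = torch.randn(100, 4)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# A small feed-forward network: layers of nodes joined by non-linear activations.
model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(200):                 # repeatedly adjust the weights to fit the data
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()

print(f"final training loss: {loss.item():.3f}")
```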
APPLICATIONS
FINANCE
Financial institutions are using AI for:
Financial products, including facilitating credit scoring and loan approvals.
Customer service, including by deploying chatbot assistants.
Regulatory compliance, including automating know-your-client checks.
Crime prevention, including detecting fraudulent transactions.
CONSUMER
Retail businesses (online and brick-and-mortar) are using AI for:
Personalisation, including in the context of offers, recommendations and communications.
Customer experience, including self-service and streamlining checkout procedures.
Customer service, including chatbots for processing queries, complaints and returns.
Stock management, including optimising site layout, inventory and fulfilment.
SOCIAL MEDIA
Social media companies (and marketing businesses) are using AI for:
Content creation, including facilitating original image, video and other content generation.
Personalisation, including targeting advertising and other content to specific users.
Content moderation, including automatically identifying and removing non-compliant content.
Task automation, including content scheduling and posting.
HEALTHCARE
Healthcare companies are using AI for:
Data management, including collecting, classifying and tracing medical data.
Medical diagnosis, including aiding screening tests for cancer.
Predictive analytics, including predicting when and where care will be needed.
Home labs, including enabling instant blood tests and at-home rapid testing.
MANUFACTURING
Manufacturing businesses are using AI for:
Predictive analytics, including predicting wear and scheduling maintenance.
Task automation, including automating production lines.
Issue detection, including identifying machine and kitting defects.
Demand forecasting, including predicting and staffing based on future demand for products.
TRANSPORT
Transport businesses are using AI for:
Autonomous vehicles, including to support partial and fully automated driving functionality.
Journey routing, including detection and provision of real-time traffic information.
Issue detection, including monitoring and reporting on the condition of roads.
Predictive analytics, including predicting and scheduling around delays in aviation.
The EU's AI Act – Setting the standard
Global regulators are jostling to become the predominant rule-maker for AI but Brussels remains ahead of the competition
The EU has long been lauded as a global trendsetter on tech regulation. GDPR, the Digital Markets Act and the Digital Services Act all serve as a reminder that the bloc is often an early mover with oversight of high-impact technology.
"The EU is typically at the front of the pack when it comes to comprehensive regulation," says Atlassian Director of Regulatory Affairs and Ethics, Anna Jaffe. "They have broad-based AI regulations in the pipeline. Although the EU law and policy-making processes can take time, it's also fairly straightforward to follow these proposals as they progress." Unsurprising, then, that the bloc was quick off the mark with its AI regulatory proposals, tabled in spring 2021. AI is broadly defined in the draft, and the assessment of whether a piece of software is covered will be based on characteristics such as its ability to generate outputs in response to a set of given human defined objectives which influence the physical or digital environments which the system interacts with. Moreover, the EU is casting a wide net, with the regulation set to apply to:
The regulation itself rests upon a risk-based analysis, with the EU grouping AI systems into four categories:
Unacceptable risk: systems that are considered a clear threat to the safety, livelihoods and rights of people, for example those that manipulate human behaviour through subliminal techniques or subvert the free will of the user. These receive an outright ban. Contravening this rule may result in a fine of up to €30 million or 6% of total worldwide annual turnover, whichever is higher (see the worked example after this list).
High risk: systems that could significantly impact citizens' rights or wellbeing, such as those used in critical infrastructure or product safety components, to give access to education or essential services, or for law and justice. These must satisfy strict obligations before being put on the market, including obtaining adequate risk assessments, implementing mitigation systems, and using high quality datasets to minimise risks and discriminatory outcomes.
Low risk: systems with specific transparency obligations, such as chatbots. In this case, when users are interacting with chatbots, the user should be made aware that the chatbot is powered by an AI algorithm so they can make an informed decision whether to continue.
Minimal risk: systems in this category will have no restrictions. These include AI-enabled video games or spam filters, for example. The vast majority of AI systems currently used in the EU would fall into this category.
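As a simple worked example of the penalty cap described in the 'unacceptable risk' category above (up to €30 million or 6% of total worldwide annual turnover, whichever is higher), the illustrative figures below show how the cap scales with company size.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    # The cap is the greater of EUR 30m and 6% of worldwide annual turnover.
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

print(max_fine_eur(200_000_000))    # smaller company: cap stays at EUR 30,000,000
print(max_fine_eur(2_000_000_000))  # larger company: 6% of turnover = EUR 120,000,000
```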
The Act is currently going through the EU's legislative process and could come into force in 2023 or 2024, with a transitional period to follow. Given the way EU measures like GDPR have influenced regulation abroad, the Act will likely mark the first major step towards a global regulatory standard on AI. The race to regulate is on, and the EU has stolen a march.
In the absence of tailored AI regulation, the complex legal issues thrown up are currently tackled via a web of national laws spanning a range of activity. We break down the key areas and risks
Liability
Given the rapid development of AI and its growing role in decision making, it is inevitable that some AI systems will fail to perform properly, resulting in loss or damage. This is amplified by the growing combination of software-based AI systems with hardware and robotics, such as drones and driverless cars, where failure could result in death or serious injury to persons or property. Additionally, AI systems may not fail but may have unintended consequences even when used as directed. A central challenge in the case of loss or damage arising from use of an AI system is establishing who is ultimately liable, and to what extent that liability is shared between parties. It is likely that multiple parties will be involved in developing and operating an AI system, from designers and programmers to data providers and the owner – as well as the end user themselves, who may not have followed instructions, or may have deployed the AI system beyond its intended use. Jurisdictions will have different rules for addressing liability, depending on the nature of the liability and damage. Outside of specific legislation, redress for victims who have suffered due to the failure of an AI system may be established through a potentially complex web of contractual terms between the various parties involved, requiring the unpicking of warranties and indemnities, as well as limitations on liability. Certain jurisdictions, like the UK, may offer further redress in the form of the tort of negligence. However, this could be hampered where no duty of care can be established, or if the resulting damage was wholly unforeseeable. Additionally, there could be challenges with the application of relevant laws in relation to excluding liability, for example, for personal injury or death.
Transparency & Explainability
Discrimination
The potential for bias to affect human decision making is well documented. Machines are often assumed to reach more objective and 'fairer' decisions, but the reality is that AI is not immune from biased decisions. Machine-based bias can arise from various factors but ultimately boils down to algorithms only being as impartial as the humans programming them and the datasets on which they are trained. The bias may present itself in a number of ways. For example, a system may be inadvertently programmed to treat individuals differently because of characteristics they have. Alternatively, algorithms may fail to understand the nuances sitting behind different fact patterns. Finally, the data on which algorithms are trained can impact the outcome by perpetuating historical biases or due to overrepresentation of certain characteristics or groups in the data. In many legal systems, particularly throughout Europe, certain characteristics, for example, age, disability, race, religion and sex, are protected by law. As a general proposition, the different legal systems make it unlawful to treat an individual detrimentally due to them possessing one or more of the relevant protected characteristics. Much like human judgements, if decisions made because of the application of AI unlawfully discriminate against any individual or group, it will rarely be the case that the user of the AI system can exempt themselves from liability as a result of relying on the AI to reach their decision.
Transparency & Explainability
Human decisions may not always be transparent but typically some explanation can be offered by the decision maker – even if their reasoning may be considered by some to be unsatisfactory. But some AI systems are so complex that the way they reach a decision cannot readily be articulated. Such 'black box' systems include deep learning models, which use neural networks to recognise hidden patterns between variables by mimicking the operation of the human brain but – by design – do not reveal how input leads to output. This can limit our ability to understand and predict the behaviour of these AI systems. It can also make it more difficult to find and fix problems or explain to impacted individuals why an outcome has arisen – for example, where an AI system has determined an individual is ineligible for a loan and the individual wishes to understand and contest this decision. It may be more difficult still for an organisation to explain a decision where the developer or seller of the AI system considers information about the algorithm, or the way in which a decision is reached, to be proprietary. In some jurisdictions, a lack of transparency may infringe legal rights. For example, in the European Union, the General Data Protection Regulation (GDPR) makes it a requirement to provide affected individuals with "meaningful information about the logic involved" in AI systems, including how any decision making operates, so that the individual can challenge any output or decision.
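There are established techniques for extracting at least some of this 'meaningful information about the logic involved'. The sketch below (assuming scikit-learn is installed, with hypothetical feature names and synthetic data) uses permutation importance to estimate how strongly each input drives a model's decisions – one common, if partial, answer to the black box problem rather than a legal compliance solution in itself.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "loan_amount", "age", "postcode_score"]  # illustrative only

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much performance degrades:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```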
Data Protection & Privacy
Beyond transparency, data protection law provides a broader set of rules that must be observed when implementing AI systems that process personal data. These rules make it a requirement to have a lawful basis for processing such data – which in some circumstances may require the express consent of affected individuals or for the processing to be necessary in the performance of a contract with that individual – and to process that data only for the purpose it was collected. There are likely additional rules if the AI system processes sensitive personal data. For example, it is unlikely the operator of an AI system for identifying the best candidates for a recruitment exercise would be allowed to train that system on historic personal data collected from social media. Laws will differ between jurisdictions but other requirements likely include limiting the personal data collected and retaining only that personal data which is strictly necessary for the original purposes. Other likely steps include putting limits on the duration for which data is kept and keeping data in a manner that allows it to be updated or deleted upon request. These requirements can be at odds with data-hungry AI systems, so consideration is needed at each stage of their design, training and use.
Intellectual property
Intellectual property (IP) issues in the field are numerous, from the use of copyright-protected material to train AI systems and questions about how to protect the valuable assets they create, to concerns about who is liable when they copy work or infringe patents. AI systems are trained on large quantities of material, such as text from articles, websites, books, computer code and artwork. Use of copyright-protected material without permission may constitute infringement and give rise to claims, which in turn may be subject to specific copyright infringement defences that differ between jurisdictions. This could be exacerbated by the 'black box' issue mentioned above, making it impossible to know what a system does with the material on which it is trained. In addition, IP rights can be infringed by outputs produced from AI systems, as alleged in a recent class action filed against Microsoft in relation to GitHub Copilot. Even if materials used to train AI are open source, they may be subject to "copyleft" licences requiring derivative works to in turn be licensed on terms no less restrictive. And training AI systems, at scale, on human-made works to produce 'original' artwork raises the ethical question of the role that is left for human expression. Beyond the risk of infringement, there is still a question of how protections apply when an AI system is used as part of the creative and inventive process, or whether it can be a creator or inventor itself. Courts in various jurisdictions have concluded that AI cannot be an "inventor" for the purpose of patent law as currently drafted but are yet to clarify whether an invention made using, or by, an AI system could be patentable. Further, while copyright would be the usual domain for protecting creative outputs like literary works or art, various courts have emphasised the essential nature of human creativity when determining the application and scope of protection.
Creative differences – Intellectual property law and AI
While far from settled, IP rights play a critical role in policing the use of artificial intelligence and protecting the rights of inventors and creators
Aaron Hayward, Senior Associate, Sydney
The enormous investment being made into artificial intelligence (AI) and machine learning (ML) means that investors expect protectable returns. Equally, inventors and creators want to be able to control how the products of their efforts are used by others. These safeguards are generally provided by the intellectual property (IP) system. However, the unique nature of AI means the legal system may need to adapt to provide the protection the key players expect. From the use of copyright-protected material to train AI systems, to concerns about who is liable when AI systems copy others’ work or infringe a patent, jurisdictions worldwide are grappling with the balance to be struck between the benefits AI can bring and the rights of inventors, content creators and consumers.
At the heart of many AI and ML platforms are large quantities of material from which the system learns – for example, text from articles, websites, books and academic papers in the case of OpenAI’s ChatGPT; computer code in the case of GitHub Copilot; or artworks in the case of DALL-E or Midjourney. However, those materials are themselves products of human intellectual effort and often protected by IP rights, such as copyright. Use of copyright-protected material without permission may constitute infringement. Whether use of that material in the AI's development is lawful – including digital reproductions used solely within the AI system – depends on copyright infringement defences that differ between jurisdictions. For example, defences such as fair use in the US, incidental temporary reproductions (Australia) or making of transient or incidental temporary copies (UK or EU) may apply. Such defences can depend on whether the ultimate use – such as the AI's output – is lawful. In the US and UK, rights owners (artists and Getty Images, respectively) have recently launched legal proceedings against Stability AI and others, alleging the use of their artworks to train the AI system constituted copyright infringement.
The UK Government recently consulted on whether exceptions to copyright (and database right) infringement for text and data mining should be introduced. The government initially concluded that it would introduce a new copyright and database exception to allow text and data mining "for any purpose", although rights holders would "still have safeguards to protect their content, including a requirement for lawful access". Such an exception was aimed at freeing legally available material for the use of AI systems, rather than a free-for-all approach, but met with criticism and has now been dropped. Identifying those liable for an AI's use of materials that are not legally available would have been key to ensuring the provisions were effective and did not allow illegitimate use by the back door. The black box nature of AI systems also presents practical hurdles for IP owners seeking to protect their rights. The fact that it is difficult, or impossible, to know what a computer system does with material it is provided for learning means it may be more challenging, for example, for a copyright owner to establish an AI system made unauthorised use of their work, or a patent owner to establish an AI implements a method claimed by their patent.
Apart from use by AI, IP rights can be infringed by outputs produced from computer systems. A recent class action filed against Microsoft in relation to GitHub Copilot alleged, in addition to copyright infringement by the use of code from GitHub to train the system, that asking Copilot to create code achieving particular functionalities would substantially reproduce code Copilot had been trained on. Moreover, even if materials used to train AI are open source, they can be subject to “copyleft” licences, which require that any derivative works are in turn licensed on terms that are no less restrictive. It remains unclear how such licence terms apply to material used to train AI that, in turn, generates a work, rather than material that is directly incorporated into a derivative work. AI systems may also perpetrate or enable patent infringement although, as noted above, the black box nature of AI may cause difficulties for IP owners seeking to enforce their rights. Questions of individual liability also arise, in that any potential liability may be shared or apportioned between developers, users and the AI system itself. Determining who should be liable for infringement enabled by a computer may be difficult, given the challenge in determining the degree of user or developer input into any output or action produced by AI.
The use of AI in the consumer space raises additional issues. Targeted advertising is a prominent tactic employed across the consumer sector that utilises such technologies, and with the advent of virtual and augmented reality and the metaverse, companies have filed patent applications relating to the collection of biometric data for similar purposes. These applications of AI raise issues under privacy and consumer laws that govern how these products are marketed to consumers, how data profiling activities can be lawfully carried out and the resulting data stored, as well as any eventual uses of that data. AI’s use of human intellect also raises ethical questions about the value placed on intellectual endeavours. Indeed, the Australian Government recently announced a review of Australia’s copyright regime, citing the importance of royalties to supporting the arts. However, if an AI system, trained on masses of art from other humans, can produce ‘original’ artworks, this raises the question: what role is left for original human expression?
IP protection encourages investment in creative and inventive outputs. It remains unclear, however, how those safeguards apply when an AI system is used as part of that process of creation, or indeed whether an AI can be considered the creator or inventor itself. Following attempts by Dr Stephen Thaler to have the AI system known as DABUS registered as the inventor of a patent, courts in the US, Australia, the UK and the European Patent Office each concluded an AI cannot be an inventor for the purposes of patent law as currently drafted. The law as it stands requires a human to be the inventor. However, both the UK and Australian courts emphasised that this was not the same question as whether an invention made using, or by, an AI system could be patentable. In doing so, the question has been raised – but not yet answered – of how the increasing use of AI might affect issues such as determining whether a patent for an invention involves an "inventive step", which is required for it to be valid. Inventiveness in patent law is judged in terms of the step not being "obvious" to the person skilled in the area of technology into which the invention falls. But what is not obvious to a human may not be the same for an AI system (if an AI is able to appreciate the nature of obviousness at all).
Where AI is used to generate creative outputs, like literary works, computer code or art, copyright would be the usual domain for legal protection. However, in Australia, the UK and Europe, the courts have emphasised the essential nature of human creativity, both as to whether copyright subsists at all and as to the scope of protection it affords. In Australia, the EU and the UK, a work must be "original" to attract copyright protection. Although there is no provision requiring the creator to be human, the requirement that a work be "original" has been understood as requiring a human's expression of their free and creative choices. This has led to calls for a specific regime to protect the creative outputs of AI systems. In the UK, there is provision that the owner of copyright in a "computer-generated work" is the person by whom the arrangements necessary for the creation of the work were undertaken. These questions are the subject of government attention worldwide and may lead to further legislative developments. The recent UK Government consultation on AI considered whether there was a need to change domestic IP laws to accommodate computer-devised inventions and creations. Based on submissions received, the government concluded there was no need to change the law at this time. In relation to the copyright protection already provided for computer-generated works, as set out above, UK law already deems these to be authored, and therefore owned, by the person who makes the arrangements necessary for the creation of the work. For computer-devised inventions, most respondents felt AI is not yet advanced enough to invent without human intervention, but could assist the human inventor and, as such, inventions developed using AI are protected by the current law. More flexibility for text and data mining was proposed, however, and the government confirmed that the area of law concerning the protection and policing of AI via IP rights will be kept under review.
In Australia, the High Court recently emphasised the importance of precise characterisation of the claimed invention in determining whether it is patentable. Proposals for a test requiring a computer-implemented invention to be an "advance in computer technology" have been met with criticism that characterises it as a restrictive approach with a chilling effect on innovation, similar to that said to have resulted from US Supreme Court jurisprudence on this issue. In the UK and the EU (and at the European Patent Office), the assessment rests on the technical advance produced by a computer programme – if one can be identified, the invention may be patentable. The approach under Chinese law is similar, in that AI systems are likely only patentable to the extent that they address a technical problem and obtain a technical effect, either in the processing of data or in improving the performance of a computer system in a specific field. As a result, there remains uncertainty in assessing the boundaries of what constitutes a patentable computer-implemented invention, and that uncertainty is likely to persist until further guidance is provided by the courts.

Copyright and trade secrets can also protect the source code and technical elements of an AI system, although both have particular requirements for demonstrating subsistence and proving infringement. Database rights can also be helpful in the EU and the UK. The considerable investment in the field will generate continuing pressure from those developing AI for suitable protection of that investment. As it has throughout the history of human advancement, IP law worldwide will likely develop to meet this demand, both through the courts and through legislation. The stakes are high for both sides. For more on the developing area of IP protection and risks for AI and ML systems, follow our blog series The IP in AI.
Given the plethora of applications across different facets of society, there is an obvious benefit in encouraging investment in new and improved AI systems. IP systems worldwide have evolved to support investment in useful inventions and creativity by providing rights to own and protect that investment. Different methods and means of protection have been introduced to support different types of creativity and invention – hence the multiple types of IP rights in existence. Governments are considering how to support this new area of economic development – for example, the UK Government's National AI Strategy sets out its aim to "secure the UK's position amongst the global AI superpowers". The EU, meanwhile, has been progressing its plans for regulation as set out in its proposal for an EU Artificial Intelligence Act, which attempts to limit the risks of AI and which some stakeholders are concerned will choke off investment in the field. However, despite the European Parliament having earlier adopted a resolution acknowledging the importance of IP rights for the development of AI technologies, the proposals for the Act do not, so far, address AI and IP.
Anna Vandervliet, Senior Associate, Sydney
Byron Turner, Solicitor, Sydney
Rachel Montagnon, Professional Support Consultant, London
Giulia Maienza, Associate (Italy), London
Peng Lei, Partner, Kewei, Mainland China
Alex Wang, Patent Attorney, Kewei, Mainland China
A recent class action filed against Microsoft in relation to GitHub Copilot alleged that asking Copilot to create code achieving particular functionalities would substantially reproduce code Copilot had been trained on.
Launching GPT-4: Believe the hype?
The latest upgrade to the much-touted generative AI model is primed to attract more headlines and investment
Susannah Wilkinson, APAC Head of Emerging Technology
On 14 March 2023, the Microsoft-backed company OpenAI released the latest version of its language models, GPT-4. The release was highly anticipated following the momentum generated by OpenAI's flagship chatbot application, ChatGPT, which became the fastest growing app to date and spurred a boom in the field of generative artificial intelligence (AI). But what is GPT? How does GPT-4 compare to its predecessors? And what are the opportunities and risks with this new technology?
Standing for Generative Pre-Trained Transformer, GPT refers to a series of large language models developed by OpenAI that can perform various natural language processing tasks (eg, proposing suggested answers to questions, summarising text, and generating lines of code). Each GPT model is trained on a large amount of data scraped from the internet (including e-books, news articles, blogs and social media posts), with each subsequent version bigger and more sophisticated than its predecessor. The predecessor to GPT-4 is GPT-3.5, which powered ChatGPT, the AI chatbot application that launched in November 2022 and quickly gained popularity due to its remarkable ability to answer questions in a conversational and human-like manner. Within two months of its launch, ChatGPT gained over 100 million users, making it the fastest-growing application in history. The launch of GPT-4 is the latest salvo in the battle to dominate the new field of generative AI: systems capable of producing text, images or other media in response to prompts or questions. GPT-3.5 was already powerful, attracting extensive media coverage and debate about the social and economic role of AI. So how much more powerful is GPT-4?
Like its predecessors, GPT-4 is a highly versatile language model that can be applied to a wide range of natural language use cases across various industries. It can be integrated into other applications and systems via APIs (application programming interfaces – software interfaces that allow two systems to communicate), giving organisations the ability to automate or streamline many text- or image-related tasks. This level of customisation has enabled organisations to use GPT-4 for tasks such as summarising documents, generating chart data and even detecting visual anomalies in real-world infrastructure.
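To make the API point concrete, the short sketch below shows one way an organisation might call a GPT-4-class model over OpenAI's publicly documented chat completions endpoint to summarise a document. It is a minimal illustration only: the helper function, prompt wording and parameter values are assumptions rather than recommendations, and any production use would need error handling, logging and appropriate data-privacy controls.

```python
# Minimal illustrative sketch (not an official integration): calling OpenAI's
# chat completions endpoint to summarise a document. Assumes the `requests`
# library and an OPENAI_API_KEY environment variable; the function name,
# prompt and parameter values are hypothetical.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"


def summarise_document(text: str) -> str:
    """Ask a GPT-4-class model to summarise text in three bullet points."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": "You summarise documents for busy readers."},
                {"role": "user", "content": f"Summarise in three bullet points:\n\n{text}"},
            ],
            "temperature": 0.2,  # lower temperature favours more consistent output
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(summarise_document("GPT-4 is the latest in OpenAI's series of large language models..."))
```

The same pattern extends to the other tasks mentioned above: the prompt and the downstream handling of the response change, but the integration surface stays the same.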
While powerful and versatile, GPT-4 remains subject to the same complex legal and ethical issues set out in our preceding article, The Law and AI. These issues might even be amplified with GPT-4, given the model's ability to generate erroneous information and sometimes provide false or fictional references to support such results. This behaviour is termed a "hallucination", and hallucinated responses may end up being relied upon inadvertently or put to malicious purposes such as generating fake news or propaganda. One possible safeguard is to develop and implement guidelines and rules for the use of GPT-4 and other AI tools to help ensure they are used responsibly.
Kwok Tang, Partner, Sydney
Julian Lincoln, Partner, Head of TMT & Digital Australia
Raymond Sun, Solicitor, Sydney
Alex Lundie, Senior Associate
Peng Lei, Partner, Kewei, Mainland China
GPT-4 vs GPT-3.5

Availability & pricing: GPT-4 is available via ChatGPT+ (for paid users), via Bing Chat or via API; GPT-3.5 is available via the ChatGPT app or via API.
Parameters: GPT-4's parameter count has not been confirmed but is rumoured to be at least 100 trillion; GPT-3.5 has 175 billion parameters.
Chat capabilities: GPT-4 has multimodal chat capabilities, including processing image inputs and inferring classifications, captions and analyses; GPT-3.5 can only accept text inputs.
Capacity: GPT-4 can read, analyse or generate up to 25,000 words (around eight times more than GPT-3.5); GPT-3.5 can process responses of up to 3,000 words.
Summary of use cases

Content creation: generate creative campaigns, slogans and even product descriptions from visual inputs.
Education: create test questions, summarise research, automatically mark student work and tailor module content to a student's learning style.
Virtual assistance: understand questions, read technical documentation and summarise solutions (which can be especially useful for visually impaired or low-vision individuals).
Customer service: provide answers to questions around customer service and technical support.
Language translation: ask a question in one language and request an answer in another, as well as language training and practice support.
Opportunities
Longer memory, thanks to a maximum token count of 32,000 compared with GPT-3.5's 4,000 (see the tokeniser sketch after this list)
Potentially safer, being reportedly 82% less likely overall to respond to "disallowed" requests than GPT-3.5
Smarter performance due to a larger quantity of parameters
Better at responding to more nuanced instructions

Limitations
Lacks knowledge of events occurring after the September 2021 cut-off of its training data
Issues with transparency due to the inability of the model to reveal its decision-making processes
It is still a language model: it cannot distinguish between fact and fiction
"Hallucinations" may generate false references
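The token counts in the first point above determine how much text a model can consider at once. As a rough illustration of how that limit might be checked before sending a prompt, the sketch below uses OpenAI's open-source tiktoken tokeniser; the encoding name is the one published for the GPT-3.5/GPT-4 family, while the model labels and limits simply mirror the approximate figures quoted above and will vary between model variants.

```python
# Illustrative sketch: estimating whether a prompt fits a model's context window.
# Uses OpenAI's open-source `tiktoken` tokeniser; the limits below are the
# approximate figures quoted above, treated here as assumptions.
import tiktoken

CONTEXT_LIMITS = {"gpt-3.5": 4_000, "gpt-4": 32_000}  # approximate token limits


def fits_in_context(prompt: str, model: str = "gpt-4") -> bool:
    encoding = tiktoken.get_encoding("cl100k_base")  # encoding published for GPT-3.5/GPT-4 models
    n_tokens = len(encoding.encode(prompt))
    print(f"{n_tokens} tokens against a limit of {CONTEXT_LIMITS[model]} for {model}")
    return n_tokens <= CONTEXT_LIMITS[model]


fits_in_context("Please review the attached licence agreement and flag any unusual indemnities.")
```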
Summing up the implications of such potent platforms, Herbert Smith Freehills APAC head of Emerging Technology Susannah Wilkinson notes: “We are witnessing a staggering pace of change in the availability, capability and interoperability of generative AI tools. While we continue to explore the art of the possible, we need to stay alert to the limitations and identify when output may not be fit for purpose for a variety of reasons.”
Our research reveals entrenched fears over artificial intelligence's impact, but its transformative potential is already stark.

Emerging technology has often met with alarmism throughout history. Futurists argue the moral panics of today echo the hysterias of yesteryear, such as those fearing that replacing slide rules with calculators would destroy our comprehension of maths. But such derision often misses a crucial point: because critics have sometimes been wrong in the past does not mean the cautious are foolish today; new technology has frequently brought profound ethical and legal dilemmas alongside huge benefits. Such dismissals also underplay the ramifications surrounding milestone advances. Not least of these is artificial intelligence (AI), which already heralds vast commercial and personal applications. For that reason, Herbert Smith Freehills (HSF) this spring conducted research among 1,000 consumers between the ages of 18 and 80 across different genders, income levels, regions and educational backgrounds. Respondents were asked what type of AI they use today, their expectations for future use and how comfortable they are with the way machines gather data and operate. The results – gathered after last year's launch of the pioneering ChatGPT app triggered huge interest and intense debate on the advances in AI – tell a clear story: despite already being considered transformative, AI has not completely won consumer trust.

While over half of respondents have some trust in AI – an encouraging figure for a nascent and widely-debated tool – only 20% have a high level of confidence and 22% are outright distrustful. The anxieties among the latter group are increasingly familiar. Firstly, 61% believe the technology is too new to be reliable, while 53% cite a perceived lack of accountability in AI systems as a major problem. Moreover, 37% worry AI could be biased against specific groups – a fear which strikes at the heart of machine learning and automated decision making. Such are the concerns that the issue increasingly arrests regulatory attention, notably among EU lawmakers, whose draft EU Artificial Intelligence Act stresses the technology should be developed to avoid repeating "historical patterns of discrimination". Moreover, a lack of recourse should systems produce harmful outcomes is a prominent concern among the total respondent pool, with 47% fearing an inability to challenge the results produced by AI. Less than half (42%) feared automation would result in job losses, though the belief was more prevalent among those aged over 55 (49%). Only 5% of the 1,000 surveyed cited no concerns about the technology, and despite younger age groups being generally more sanguine about the prospects of AI, the figure only rises to 9% for those under 35.
As HSF's Emerging Technology Group regional head Alexander Amato-Cravero concludes: "What we're looking at is the use of law and regulation to clarify consumer protections and support innovation. Right now, to understand their rights and obligations, businesses and consumers must navigate a patchwork of legally binding rules, the gaps between which may – or may not – be plugged by standards and guidance that have no force of law. The sooner global policymakers simplify this landscape and plug the gaps with legally certain AI rules and guidance, the sooner we will see a step-change in the confidence businesses and consumers have in developing, deploying and using AI systems."
"The results of the data demonstrates the UK public understands the major concerns of AI," says Singapore-based HSF lawyer Peggy Chow, whose practice focuses on cyber and data law. "While the use of AI in fraud prevention is considered positive, there are concerns around automated decision-making which impacts individuals, such as decisions about who wins or loses a legal dispute, advising on pensions or investment, credit-scoring, and health problem diagnosis." In terms of usage, the majority of respondents predominantly engaged first-generation AI tools over the last 12 months such as mapping guidance (74%), entertainment recommendations (64%), text editors/autocorrect (61%) and voice assistants (60%). But more advanced AI is rapidly gaining traction. With generative AI application ChatGPT sending waves through the technology world, 63% of respondents said they have used some form of chatbot. Unsurprisingly, uptake was most common in those aged under 35 (84%), while the figure drops to 68% for those aged 35-54 and 44% for respondents over 55. Chatbots were most commonly used for customer service (46%), virtual shopping assistance (37%), and to gather information (30%). However, despite only appearing from November 2022, nearly one third (29%) said they have already used AI applications like ChatGPT, Midjourney and Bard to generate content.
Believe the hype (and worry)
Our research reveals the complex set of attitudes consumers have about the technology, with wider consumer confidence towards AI-powered tools varying significantly depending on where the technology is applied. While 66% of respondents are relaxed about a bank using the technology to spot criminal activity and 57% are happy to receive personalised adverts, news or entertainment, only 29% would be comfortable with AI deciding a legal dispute. Unsurprisingly, in matters of health and finance, consumers are also more cautious, with only 43% having no issue with an AI diagnosing a health problem and 45% open to receiving advice on pensions and investments from a machine; but the numbers rise to 56% and 59% respectively among those under 35. The broad takeaway is clear: the more a group uses AI, the more they trust it.

That trust is important in the development of AI. The technology is like no other emerging field in its perceived potency and impact. While there has been no shortage of doubters of cryptocurrencies and the metaverse in recent times, AI scepticism is of a different hue – some believe the technology could be harmful to our lives and society but very few doubt its power and potential. It is not, as Elon Musk famously described the metaverse, just "marketing". Just 4% of respondents said AI possesses no benefits, while only 7% believed it will make no tangible difference, positive or negative, to society. This compares to 75% who believed AI will lead to at least some societal improvements and 18% who believed it would make at least some things worse. Moreover, just 1% believed the technology will have no impact on our lives over the next 12 months and only 9% believe it incapable of replacing human intelligence. A majority (51%) believe AI will help people make faster and better decisions over the next 12 months.
In response to these concerns, HSF is calling for three steps to help businesses and consumers build confidence:
1. Accelerate development of legally-binding AI rules. The sooner policymakers plug gaps in the current patchwork of rules covering AI, the sooner consumers and businesses can understand their rights and obligations.

2. Increase alignment among domestic and global policymakers. Harmonising the approach of the multiple global authorities governing AI will help address gaps in protection and avoid conflicting regimes.

3. Improve communication and educate individuals on the risks of AI. The key to long-term success is dialogue, not fanfare. Building confidence requires cutting through the noise with sharp focus on the risks as well as the opportunities.
Regional Head (UK, US & EMEA) of Emerging Technology Group Alexander Amato-Cravero (and his digital avatar) explains consumer concerns around AI.
Survey findings at a glance:
12% believe AI will make certain aspects of society worse
54% believe it will improve certain aspects of society
21% of all respondents believe AI is welcome and will improve the world
37% believe it can be biased against certain people or groups
53% believe it is not accountable to anyone
61% of those who don't trust AI believe the technology is too new
22% of respondents do not trust AI, with 10% having low levels of trust
7% believe AI will make no tangible difference to society