Even if the frequency or value of extortion payments begins to fall, it is generally accepted that the broader cost of cyber-crime will continue to rise. "Business email compromises and associated funds-redirection fraud are now commonplace and are challenging many global businesses," says Whittfield.
The numbers involved tell their own story. According to the United Nations Capital Development Fund, the total direct cost of global cybercrime in 2020 was approximately $945 billion. However, when indirect costs such as brand disparagement, intellectual property infringement and lost opportunities are considered, the figure inflates to around $4 trillion. Moreover, research group Cybersecurity Ventures estimated in 2020 that overall damages associated with cyber-attacks will reach approximately $10.5 trillion annually by 2025.
In this context, regulatory scrutiny will inevitably increase. In August 2020, the Australian Securities & Investments Commission (ASIC) alleged that financial services company RI Advice Group Pty Ltd failed to properly manage its cyber risk, resulting in various data breaches between June 2014 and May 2020. In May 2022, the Federal Court of Australia handed down its judgment, confirming settlement details and finding that RI Advice failed to manage security risks and cyber resilience, breaching its Australian Financial Services Licence obligations. Whittfield warns businesses to take note: "ASIC has demonstrated an intention to take on cyber security cases and we expect the regulator to take a particular interest in the role directors are playing in ensuring a company is cyber resilient."
Moreover, regulatory attention is being monitored by parties looking to commence class action claims. In Australia, recent high-profile data breaches have resulted in multiple class action claims with the impacted organisations also the subject of investigations by the Office of the Australian Information Commissioner.
Meanwhile, UK regulators are becoming similarly focused. The Information Commissioner's Office has issued two so-called 'mega fines' following breaches in recent years, with Marriott receiving an £18.4 million penalty in 2020 following a 2014 cyber incident and British Airways being fined £20 million following a breach disclosed in 2018.
And there's no shortage of incidents for regulators to scrutinise. In the UK, Royal Mail was targeted in January 2023 by a Russian-linked ransomware attack using the now infamous LockBit malware, severely disrupting the postal service's ability to deliver internationally. UK national newspaper The Guardian was also targeted, with an attack in December 2022 compromising the personal data of its staff. Elsewhere, fashion retailer JD Sports was hit by a cyber assault which potentially leaked the financial and personal information of 10 million customers who placed orders between November 2018 and October 2020. As global threats worsen and attack vectors multiply, the list of corporate casualties lengthens.
Cyber security and insurance – What you need to know
Forever war games
Indeed, insurance may become less available in the future, with some providers stepping back from ransom-related cover. Such is the concern that Zurich chief executive Mario Greco has warned state-sponsored cyber-attacks could become "uninsurable".
Herbert Smith Freehills (HSF) partner Cameron Whittfield, whose practice focuses on all aspects of cyber security and emerging technologies, sets out the terrain: "We should be prepared for a broad range of risks and while there remains a lot of focus on high-profile cyber extortion events, the risk landscape is materially broader. We are also seeing business impacted by increasing cyber-related fraud and nation state activity, particularly given the evolving geopolitical landscape. Regulators are also increasingly active and resourced, often looking to directly inculpate officers and directors of corporates."
In the fourth instalment of our TechQuake series on the digital tools that shape our world, we explore what is driving the rise in cyber risks and assess how boardrooms can enhance the resilience of their businesses in the face of this escalating threat.
The cyber security industry may have existed for decades, but when it comes to business – not to mention the increasingly virtual ways we live – there has never been such an era of anxiety about the safety of our digital spaces. The spheres of high finance, defence, healthcare, critical infrastructure and now our personal lives all rely on online nervous systems susceptible to attack by malevolent forces. As more devices connect to complex networks – a trend accelerated by the Covid-19 pandemic – more vectors become available to cyber criminals, many of whom have been emboldened by autocracies that have increasingly blurred the lines between crime and war.
The consequences of these attacks can range from costly to catastrophic. But, crucially for boardrooms, they have also grown more frequent and severe. A business may face a significant ransomware attack, an air-traffic control system could be compromised, or the data of thousands of customers leaked.
As if that were not enough to arrest the attention of boardrooms, businesses are also contending with an increasingly complex regulatory landscape and litigious environment. Moreover, questions are now being raised about whether insurance can effectively cover such risks.
From ransomware to electronic warfare, the digital world has grown perilous in recent years. In the latest TechQuake chapter, we explore the risks facing boardrooms
It has been a common refrain among military tacticians ever since Prussian commander Helmuth von Moltke observed it in 1880: no plan survives first contact with the enemy. The maxim holds true in matters of digital conflict and crime, but it does not change the fact that a rigorously tested and flexible plan of action is far better than improvisation once cyber hostilities commence. The most effective measures businesses can take to protect themselves are drawing up coherent plans, ensuring widespread awareness of them throughout the business, and wargaming those plans to maintain familiarity, socialise the questions the business will have to answer, and reveal unforeseen problems.
Cyber security also involves more than just tech – the human element is central, says Pitt-Payne: "The absolute basic thing is training your people and not losing sight of the human dimension alongside the technical dimension. You could patch against every possible security vulnerability out there and still have a huge incident because someone's credentials were stolen. There's an important human side."
War and terrorism exclusions have always been in cyber policies. New exclusions bring some clarity rather than the haphazard exclusions we've seen in the past."
"There are three main trends businesses are contending with," says Andrew Moir, who is global head of HSF’s cyber and data security practice. "An increasing legislative burden, a more contentious environment following cyber incidents, and an increased focus on individual liability. All companies are affected by these issues in one form or another."
The legislative angle is increasingly complex in jurisdictions across the globe. In the UK, for example, from the Computer Misuse Act in 1990 and the EU Data Protection Directive of 1995 through to the Security of Network & Information Systems Regulations in 2018, the UK's General Data Protection Regulation (GDPR) in 2021 and the upcoming EU AI Act, business is wrestling with a myriad of government interventions impacting digital security. In the US, the White House has published its National Cybersecurity Strategy, looking to improve cyber investments and risk allocation. The Australian Government is also developing the country’s cyber security strategy, foreshadowing material legislative reform.
"A lot more legislation comes through than is repealed, so you end up with an ever-increasing burden on businesses," adds Moir. "Plus, cyber security is a multinational issue, which means you have to deal with multiple regulatory regimes across the jurisdictions you operate in. And those regulatory regimes typically require different things. It becomes a minefield."
The attention of legislators is unlikely to wane, particularly as the frequency and complexity of cyber-attacks continue to escalate. High-profile cyber extortion attacks are also evolving. They typically involve the use of malware to encrypt systems, combined with the theft of data; victims are then extorted for the decryption keys to unlock their systems or to prevent the stolen data from being disclosed (a so-called double extortion). In recent times, some threat actors have relied on data theft alone.
According to specialist data and research provider Chainalysis, estimated global payments identified as received by cyber extortion attackers more than quadrupled from $174 million in 2019 to $765 million in 2020. The hike has prompted some governments to consider intervention, with the Australian Minister for Home Affairs and Cyber Security, Clare O'Neil, flagging that the Australian Government is assessing a proposal to ban ransom payments. While a ban would logically remove the commercial incentive behind ransomware attacks, the proposal remains complex, particularly where core assets or operations are impacted by ransomware.
Ensuring the technological and human pillars of cyber resilience are sufficiently resourced and trained requires complete board buy-in, according to Canbilen. "Boards need to realise that change is afoot and instead of delegating the load to the operational level, they need to take a proactive and engaged approach; better to be ahead of the curve. This means when an incident does happen – and one will happen at some point – the organisation is on the front foot and can expect a better outcome. If every board member across every board was curious and willing to make advised decisions themselves about cyber, then each nation's resilience would be much higher."
Companies have largely woken up to such imperatives, says Melbourne-based cyber security and digital law senior associate Phillip Magness: “I’ve seen an uplift in board focus over the last five years. Cyber resilience is increasingly a standing item on board agendas, and this has led to us briefing boards and conducting simulation exercises for boards and the executive. That focus has followed a recent rise in ransomware and data extortion attacks in Australia."
The ultimate reality is that cyber-crime, despite existing as a mainstream concern for 20 years, is an escalating risk factor for business. The best defence remains less about cutting-edge tech and more about the unglamorous business of sound organisational process: getting the basics right by educating staff, keeping software updated, backing up data, and maintaining sensible, actionable response policies. As much as the risks and technologies continue to morph in the cyber sphere, the basics of well-implemented but unflashy risk policies remain as relevant as ever.
DUCK AND COVER
Insurance markets typically excel at pricing risk – a reality which has seen some unusual policies taken out over the years. However, recent developments in the insurance sector are undermining confidence that the industry will step in to cover cyber risks. Last year, Lloyd's of London announced cyber policies will have an exclusion for attacks by state-backed actors. Initially, this may seem unsurprising – acts of war have long been excluded by insurance policies. However, it is an unsettling qualification at a time when it is increasingly unclear whether a given threat actor is state-sponsored.
Moreover, legal clashes over whether an attack has been supported by a country seem inevitable. For example, in 2022 pharmaceutical group Merck succeeded in a US court claim that a war exclusion should not be applied to an assessment of its $1.4 billion loss suffered from a 2017 malware attack known as NotPetya. The distinction between digital war and digital crime has never been blurrier, and insurers are hoping to provide clarity in what is still a relatively young market.
"There is a suite of exclusions issued by the Lloyd's Market Association," notes Greig Anderson, an HSF disputes partner who focuses on insurance and professional negligence. "When Lloyd's says it's mandatory to have effective nation state exclusions in policies, they mean these LMA exclusions will do the trick. War and terrorism exclusions have always been in cyber policies. In fact, new exclusions bring some clarity rather than the haphazard exclusions we've seen in the past."
For now, businesses are facing intense pressure to demonstrate to insurers they have a robust and workable strategy to defend against cyber incidents. Cyber insurance is generally only available to companies that can demonstrate an acceptable level of resilience. Cyber policies are designed to provide more comprehensive cover than other, more general insurance policies, but the evolving state of the product means that cyber cover does not come in standard form. Furthermore, the risks are driving premiums upwards, as are the significant losses incurred by cyber insurance providers early in the policy lifecycle.
TIMOTHY PITT-PAYNE KC,
11KBW
The absolute basic thing is training your people and not losing sight of the human dimension. You could patch every possible security vulnerability and still have a huge incident because someone's credentials were stolen."
We are seeing business impacted by increasing cyber-related fraud and nation state activity, given the evolving geopolitical landscape. While there remains a lot of focus on high-profile cyber extortion, the risk landscape is materially broader."
Cyber-crime and cyber-war – A question of semantics?
Moir emphasises the risk for business: "Regulators might take a particular interest in a data breach if they believe there were shortfalls in the company’s cyber security. This may be weaponised by plaintiff firms, who leverage the findings of regulators to bring claims on behalf of those affected."
While all this should be enough to concentrate the minds of boardrooms, there's another point of vulnerability: a renewed focus on individual director liability. This is primarily playing out in the US, but advisers warn it may be a sign of things to come globally. Those operating in the UK financial services sector are already subject to the Senior Managers and Certification Regime. "People will be pointing the finger at individuals who were at the helm either in dealing with the cyber incident or failing to protect the company from it," adds Moir.
Phishing, where individuals are duped into disclosing their user credentials, is still rampant. 2022 saw the highest rate of mobile phishing attacks on record, according to software firm Lookout. In the UK, government figures show 83% of the businesses that reported a cyber-attack in 2022 identified phishing attempts as the means of attack.
The fear is that these methods are becoming more effective due to developments in artificial intelligence (AI). A notable example is OpenAI's chatbot, ChatGPT, which was recently exploited by cyber criminals to generate malicious content such as phishing emails and malware. Of course, AI's potential cuts both ways: the technology can be deployed in cyber defence as well as cyber-crime, but concerns are growing.
Meanwhile, points of vulnerability can also lie outside your own organisation, says Coca-Cola European Partners General Counsel and Company Secretary Clare Wardle: "We've seen our suppliers hit by cyber-attacks. The conversation is no longer just about an attack targeting us – though that's still a huge priority and we're training everyone to be vigilant. It's now also about making sure we know what is happening with our suppliers and getting information there. It means if someone is hit in our supply network, we know about it as soon as they do."
Boardrooms may have once seen cyber security as something to be delegated, but change is underway given the nature and complexity of the risk in a shifting regulatory landscape. Worse still, the legal risks are not mutually exclusive: a cyber-attack could trigger a regulatory probe, a class action and individual director liability.
A driver of insurer exposure has been the increasingly contentious environment surrounding cyber incidents, observes 11KBW barrister Timothy Pitt-Payne KC: "There are huge problems in pricing the risk and that is partly because when you look at risks from civil claims, there's still a lot of uncertainty as to what individual claims are worth when they succeed. From an insurer point of view, you want to know your maximum exposure and if the data of a million people is lost, and they all have a claim, what is that worth per capita? It's also a question lawyers find difficult to answer at the moment."
Given these challenges, prevention remains the best solution. This means having at least the basic security protections in place (such as multi-factor authentication), ensuring IT and legal teams are properly resourced, knowing what back-up plans and procedures exist, and stress-testing those responses so practical problems are resolved before it is too late. Quickly mobilising advisers is also integral, notes HSF cyber and data security lawyer Ridvan Canbilen: "It's been unfortunate coming into situations where organisations have left it days before reaching out for help because of initially not knowing who to ask. Having a well-drilled organisation which knows the response plan and who to speak to internally and externally is vital."
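For readers curious what a basic protection such as multi-factor authentication involves under the hood: most authenticator apps implement the time-based one-time password (TOTP) algorithm standardised in RFC 6238. A minimal illustrative sketch in Python, using only the standard library – an educational example, not a substitute for a vetted authentication product:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238), HMAC-SHA1 variant."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole 30-second time-steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

The server and the authenticator app share the secret once at enrolment; thereafter each side independently derives the same short-lived code from the current time window, which is why a phished password alone is not enough to log in.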
The conversation is no longer just about an attack targeting us. It's now also about making sure we know what is happening with our suppliers."
CLARE WARDLE,
Coca-Cola European Partners General Counsel and Company Secretary
The line between criminal attack and state-backed hostility is increasingly blurred
It remains to be seen if the current global decoupling along economic and geopolitical lines will prove temporary, but it has certainly been fraught, in both the physical and cyber worlds. And business has taken note.
"Business is increasingly aware of catastrophic risk," says 11KBW barrister Timothy Pitt-Payne KC. "State agents are weaponising cyber threats. For example, North Korea steals cryptocurrency, which is an important source of funding for them. There are big risks where it is the responsibility of government to keep us safe. But there's also an increasing awareness of state-sponsored actors."
Definitions are hazy. Whether a cyber-criminal has been tolerated by a state or actively encouraged is hard to determine. However, it is clear that growing global tensions have resulted in a surge in cyber-crime, particularly ransomware.
Royal Mail is among the growing list of victims, after the UK postal service was attacked by the prolific hacker group LockBit. The attack followed a now familiar pattern: the group stole and encrypted data before demanding a ransom payment. While LockBit's creators claim the group is "apolitical", they are widely believed to be based in Russia. Whether the Kremlin actively supports the organisation or simply turns a blind eye is unclear, but some would not draw the distinction.
Meanwhile, in America there have been fears of vulnerabilities being exposed since the 2020 SolarWinds attack, which saw Russian hackers gain access to the US Commerce and Treasury departments, among other government agencies. The problem was inflamed again more recently when cyber-criminals attacked the Colonial Pipeline Company, resulting in the pipeline, which supplies almost half the fuel to America's east coast, being shut down for five days. To get it flowing again, the attackers demanded an almost $5 million ransom payment from the company, and links to Russia-based hackers have been drawn.
Unfortunately, as global tensions persist, autocratic regimes will provide sanctuary to criminals, and in some cases deliberately embolden them. And it is clear why – as computers are built into vehicles, homes and factories, the possible vectors of attack only increase. In an era where criminals can swap balaclavas for software, and war is as digital as physical, businesses need to reimagine the risks.
With insurers narrowing their coverage for cyber-incidents, HSF partner Greig Anderson discusses the market tensions and how businesses can protect themselves
GLOBAL HEAD OF CYBER & DATA SECURITY, LONDON

Why has cyber insurance become more difficult for businesses to secure and afford in recent years?

The insurance market has got tighter on including or excluding cyber in policies, whereas historically you could claim as the policy didn't say you couldn't. That's because the [UK] Prudential Regulation Authority has told the insurers: "You guys don't know what your exposure is; you need to know, given the worst-case scenario. You need to cover it or say it's excluded." In respect of pure cyber insurance, it's just been priced wrongly. This was because the insurers had not anticipated the level of losses they'd be incurring on ransomware, in particular. They weren't making money on it. The whole insurance market changed as cyber got more expensive.

What advice would you give a business struggling to secure a policy or facing prohibitively high premiums?

One is ensuring you have top levels of cyber hygiene, as underwriters are only willing to underwrite good risks. You need to understand from them what they consider to be a good risk. So, you need good IT security, but also to be able to communicate that to them. You may also want to consider whether you can get cyber attached to another policy, on crime or professional indemnity. You can speak to a broker about buying that coverage as part of another package, but it'll vary from business to business.

It is also worthwhile spending some time with your insurers before the event and making sure you know what coverage you've bought. Often this stuff gets put in a cupboard and forgotten about. I'll be with a client, and they'll say they've had a cyber-attack. I'll ask if they're covered and they say, "I don't know." Eventually they come back, two hours into the incident, and say "here's the policy, what does it cover?" What's the lesson? Do your work beforehand so you know what your coverage is and who to telephone, and then you're in a better position to realise that cover and get money out of insurers.

Presumably whether an attack has been state-sponsored will be a matter of contention?

On the question of attribution – that being whether an attack can be attributed to a state-backed actor – those LMA exclusions do deal with it. Now, they don't deal with it as well as I'd like. Their position is that it depends on what the state where the asset is held says – but often you won't have a government helping you out and telling you the answer regarding attribution. In those circumstances, insurers are allowed to draw their own reasonable inferences as to where attribution should lie. That matters because these exclusions aren't blanket on nation state attacks; they're hedged. They wouldn't exclude every attack perpetrated by a Russian-sponsored group, for example. It's important not to over-generalise these exclusions.

Will the market continue to harden?

Prices are going up, but now slowly in cyber, whereas they were rising exponentially. It's beginning to plateau. Just because you couldn't buy a policy last year doesn't mean you can't this year, and definitely doesn't mean you can't next year. Making sure you're going to a respectable broker who can answer these questions and help you is a good starting point. Now might not be the right time, but you can try again next year.
It's beginning to plateau. Just because you couldn't buy a policy last year, doesn't mean you can't this year, and definitely doesn't mean you can't next year. Going to a respectable broker who can answer these questions is a good starting point."
Creative differences – Intellectual property law and AI
While far from settled, IP rights play a critical role in policing the use of artificial intelligence and protecting the rights of inventors and creators
The black box nature of AI presents practical challenges for IP owners. It may be more difficult for a copyright owner to establish an AI made unauthorised use of their work, or a patent owner to establish an AI implements a method claimed by their patent.
The enormous investment being made into artificial intelligence (AI) and machine learning (ML) means that investors expect protectable returns. Equally, inventors and creators want to be able to control how the products of their efforts are used by others. These safeguards are generally provided by the intellectual property (IP) system. However, the unique nature of AI means the legal system may need to adapt to provide the protection the key players expect.
From the use of copyright-protected material to train AI systems, to concerns about who is liable when AI systems copy others’ work or infringe a patent, jurisdictions worldwide are grappling with the balance to be struck between the benefits AI can bring and the rights of inventors, content creators and consumers.
At the heart of many AI and ML platforms are large quantities of material from which the system learns – for example, text from articles, websites, books and academic papers in the case of OpenAI’s ChatGPT; computer code in the case of GitHub Copilot; or artworks in the case of DALL-E or Midjourney. However, those materials are themselves products of human intellectual effort and often protected by IP rights, such as copyright.
Use of copyright-protected material without permission may constitute infringement. Whether use of that material in the AI's development is lawful – including digital reproductions used solely within the AI system – depends on copyright infringement defences that differ between jurisdictions. For example, defences such as fair use in the US, incidental temporary reproductions (Australia) or making of transient or incidental temporary copies (UK or EU) may apply. Such defences can depend on whether the ultimate use – such as the AI's output – is lawful. In the US and UK, rights owners (artists and Getty Images, respectively) have recently launched legal proceedings against Stability AI and others, alleging the use of their artworks to train the AI system constituted copyright infringement.
The UK Government recently consulted on whether exceptions to copyright (and database right) infringement for text and data mining should be introduced. The government initially concluded that it would introduce a new copyright and database exception to allow text and data mining "for any purpose", although rights holders would “still have safeguards to protect their content, including a requirement for lawful access". Such an exception was aimed at freeing legally available material for the use of AI systems, rather than a free-for-all approach, but met with criticism and has now been dropped. Identifying those liable for an AI's use of materials that are not legally available would have been key to ensuring these provisions are effective and did not allow illegitimate use by the back door.
The black box nature of AI systems also presents practical hurdles for IP owners seeking to protect their rights. The fact that it is difficult, or impossible, to know what a computer system does with material it is provided for learning means it may be more challenging, for example, for a copyright owner to establish an AI system made unauthorised use of their work, or a patent owner to establish an AI implements a method claimed by their patent.
Apart from use by AI, IP rights can be infringed by outputs produced from computer systems. A recent class action filed against Microsoft in relation to GitHub Copilot alleged, in addition to copyright infringement by the use of code from GitHub to train the system, that asking Copilot to create code achieving particular functionalities would substantially reproduce code Copilot had been trained on.
Moreover, even if materials used to train AI are open source, they can be subject to “copyleft” licences, which require that any derivative works are in turn licensed on the same terms. It remains unclear how such licence terms apply to material used to train an AI that, in turn, generates a work, rather than material that is directly incorporated into a derivative work.
AI systems may also perpetrate or enable patent infringement although, as noted above, the black box nature of AI may cause difficulties for IP owners seeking to enforce their rights. Questions of individual liability also arise, in that any potential liability may be shared or apportioned between developers, users and the AI system itself. Determining who should be liable for infringement enabled by a computer may be difficult, given the challenge in determining the degree of user or developer input into any output or action produced by AI.
The use of AI in the consumer space raises additional issues. Targeted advertising is a prominent tactic employed across the consumer sector that utilises such technologies, and with the advent of virtual and augmented reality and the metaverse, companies have filed patent applications relating to the collection of biometric data for similar purposes. These applications of AI raise issues under privacy and consumer laws that govern how these products are marketed to consumers, how data profiling activities can be lawfully carried out and the resulting data stored, as well as any eventual uses of that data.
AI’s use of human intellect also raises ethical questions about the value placed on intellectual endeavours. Indeed, the Australian Government recently announced a review of Australia’s copyright regime, citing the importance of royalties to supporting the arts. However, if an AI system, trained on masses of art from other humans, can produce ‘original’ artworks, this raises the question: what role is left for original human expression?
IP protection encourages investment in creative and inventive outputs. It remains unclear, however, how those safeguards apply when an AI system is used as part of the process of creation, or indeed whether an AI can itself be considered the creator or inventor.
Following attempts by Dr Stephen Thaler to have the AI system known as DABUS registered as the inventor of a patent, courts in the US, Australia and the UK, and the European Patent Office, each concluded that an AI cannot be an inventor for the purposes of patent law as currently drafted. The law as it stands requires a human to be the inventor. However, both the UK and Australian courts emphasised that this was not the same question as whether an invention made using, or by, an AI system could be patentable. In doing so, they raised – but did not answer – the question of how the increasing use of AI might affect issues such as whether a patent for an invention involves an “inventive step”, which is required for it to be valid. Inventiveness in patent law is judged in terms of the step not being "obvious" to the person skilled in the area of technology into which the invention falls. But what is not obvious to a human may not be the same for an AI system (if an AI is able to appreciate the nature of obviousness at all).
Where AI is used to generate creative outputs, such as literary works, computer code or art, copyright would be the usual domain for legal protection. However, in Australia, the UK and Europe, the courts have emphasised the essential nature of human creativity, both as to whether copyright subsists at all and as to the scope of protection it affords.
In Australia, the EU and UK, a work must be “original” to attract copyright protection. Although there is no provision requiring the creator be human, the requirement that a work be "original" has been understood as requiring a human’s expression of their free and creative choices. This has led to calls for a specific regime to protect the creative outputs of AI systems. In the UK, there is provision that the owner of copyright in a "computer-generated work" is the person by whom the arrangements necessary for the creation of the work were undertaken.
These questions are the subject of government attention worldwide and may lead to further legislative developments. The recent UK Government consultation on AI considered whether there was a need to change domestic IP laws to accommodate computer-devised inventions and creations. Based on submissions received, the government concluded there was no need to change the law at this time. In relation to the copyright protection already provided for computer-generated works, as set out above, UK law already deems these to be authored, and therefore owned, by the person who makes the arrangements necessary for the creation of the work. For computer-devised inventions, most respondents felt AI is not yet advanced enough to invent without human intervention, but could assist the human inventor and, as such, inventions developed using AI are protected by the current law. More flexibility for text and data mining was proposed, however, and the government confirmed that the area of law concerning the protection and policing of AI via IP rights will be kept under review.
In Australia, the High Court recently emphasised the importance of precise characterisation of the invention claimed in determining whether it is patentable. Proposals for a test requiring a computer-implemented invention to be an “advance in computer technology” have been met with criticism, characterising it as a restrictive approach with a chilling effect on innovation, as was said to have resulted from US Supreme Court jurisprudence on this issue.
In the UK and EU (and at the European Patent Office) the assessment rests on the technical advance produced by a computer program – if one can be identified, then the invention may be patentable.
The approach under Chinese law is similar, in that AI systems are likely only patentable to the extent that they address a technical problem and obtain a technical effect, either in the processing of data or improving the performance of a computer system in a specific field.
As a result, there remains uncertainty in assessing the boundaries of what constitutes patentable computer-implemented inventions, which is likely to persist until further guidance is provided by the courts.
Copyright and trade secrets can also provide protection for the source code and technical elements of an AI system, although both have particular requirements for demonstrating subsistence and proving infringement. Database rights can also be helpful in the EU and UK.
The considerable investment flowing into the field will generate continuing pressure from AI developers for suitable legal protection. As it has throughout the history of human advancement, IP law worldwide will likely develop to meet this demand, through both the courts and legislation. The stakes are high for both sides.
For more on the developing area of IP protection and risks for AI and ML systems, follow our blog series The IP in AI.
Given the plethora of applications to different facets of society, there is an obvious benefit in encouraging investment in new and improved AI systems. IP systems worldwide have evolved to support funding of useful inventions and creativity by the provision of rights to own and protect that investment. Different methods and means of protection have been introduced to support different types of creativity and invention – hence the multiple types of IP rights in existence. Governments are considering how to support this new area of economic development – for example, the UK Government has a National AI Strategy which sets out its aim to "secure the UK’s position amongst the global AI superpowers". The EU, meanwhile, has been progressing its plans for regulation as set out in its proposal for an EU Artificial Intelligence Act, which attempts to limit the risks of AI and which some stakeholders are concerned will choke off investment in the field. However, despite the European Parliament having earlier adopted a resolution on IP rights for the development of AI technologies that acknowledged their importance, the proposals for the Act do not, so far, include proposals on AI and IP.
Professional support consultant
Patent attorney, Kewei
A recent class action filed against Microsoft in relation to GitHub Copilot alleges that, when asked to create code achieving particular functionalities, Copilot substantially reproduces code it was trained on.
Believe the hype?
The latest upgrade to the much-touted generative AI model is primed to attract more headlines and investment
On 14 March 2023, the Microsoft-backed company OpenAI released the latest version of its language models, GPT-4. The release was highly anticipated following the momentum generated by OpenAI's flagship chatbot application, ChatGPT, which became the fastest growing app to date and spurred a boom in the field of generative artificial intelligence (AI). But what is GPT? How does GPT-4 compare to its predecessors? And what are the opportunities and risks with this new technology?
Standing for Generative Pre-Trained Transformer, GPT refers to a series of large language models developed by OpenAI that can perform various natural language processing tasks (eg, proposing suggested answers to questions, summarising text, and generating lines of code). Each GPT model is trained on a large amount of data scraped from the internet (including e-books, news articles, blogs and social media posts), with each subsequent version bigger and more sophisticated than its predecessor.
The predecessor to GPT-4 is GPT-3.5, which powered ChatGPT, the AI chatbot application launched in November 2022 that quickly gained popularity for its remarkable ability to answer questions in a conversational, human-like manner. Within two months of its launch, ChatGPT had gained over 100 million users, making it the fastest-growing application in history.
The launch of GPT-4 is the latest salvo in the battle to dominate the new field of generative AI, systems capable of producing text, images or other media in response to prompts or questions. GPT 3.5 was already powerful, securing much media coverage and debate about the social and economic role of AI. So how much more powerful is GPT-4?
Like its predecessors, GPT-4 is a highly versatile language model that can be applied to a wide range of natural language use cases across industries. It can be easily integrated into other applications and systems via APIs (application programming interfaces – software interfaces that allow two systems to communicate), giving organisations the ability to automate or streamline many text- or image-related tasks. This flexibility has enabled organisations to use GPT-4 for tasks such as summarising documents, generating chart data and even detecting visual anomalies in real-world infrastructure.
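As a sketch of what such an API integration might look like, the Python below prepares a document-summarisation request for a chat-style GPT endpoint. The endpoint URL, model name and payload shape reflect OpenAI's published chat-completions format at the time of writing, but should be treated as assumptions; no request is actually sent here.

```python
import json

# Assumed endpoint -- check the provider's current API reference before use.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_summarise_request(document: str, model: str = "gpt-4") -> str:
    """Return the JSON body an application would POST to the chat API."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You summarise documents concisely."},
            {"role": "user", "content": f"Summarise:\n{document}"},
        ],
        "temperature": 0.2,  # low temperature favours consistent output
    }
    return json.dumps(payload)

body = build_summarise_request("Quarterly revenue rose 4% on strong cloud demand.")
print(json.loads(body)["model"])  # gpt-4
```

In a real integration, the returned JSON would be sent with an HTTP client along with the organisation's API key, and the model's reply extracted from the response body.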
While powerful and versatile, GPT-4 remains subject to the same complex legal and ethical issues set out in our preceding article, The Law and AI. These issues may be amplified with GPT-4 given the model's ability to generate erroneous information, sometimes supported by false or fictional references. Such outputs are termed "hallucinations", and hallucinated responses may be relied on inadvertently or exploited for malicious purposes such as generating fake news or propaganda. One safeguard is to develop and implement guidelines and rules for the use of GPT-4 and other AI tools to help ensure they are used responsibly.
Partner, Head of TMT & Digital Australia
Availability & pricing
GPT-4: Available via ChatGPT+ (for paid users), via Bing Chat or via API
GPT-3.5: Available via the ChatGPT app or via API

Model size
GPT-4: Not confirmed, but rumoured to have at least 100 trillion parameters
GPT-3.5: Has 175 billion parameters

Inputs
GPT-4: Multimodal chat capabilities, including processing image inputs and inferring classifications, captions and analyses
GPT-3.5: Can only accept text inputs

Capacity
GPT-4: Can read, analyse or generate up to 25,000 words (around eight times more than GPT-3.5)
GPT-3.5: Can process responses up to 3,000 words
Summary of use cases
Generate creative campaigns, slogans, and even product descriptions from visual inputs
Create test questions, summarise research, automatically mark student work, and tailor module content to a student's learning style
Understand questions, read technical documentation and summarise solutions (which can be especially useful for visually impaired or low vision individuals)
Provide answers to questions around customer service and technical support
Ask a question in one language and request an answer in another, supporting language training and practice
Strengths
Longer memory, with a maximum token count of 32,000 compared with GPT-3.5's 4,000
Potentially safer: reportedly 82% less likely overall to respond to "disallowed" requests than GPT-3.5
Smarter performance due to its larger number of parameters
Better at responding to nuanced instructions

Limitations
Lacks knowledge of events after the September 2021 cut-off of its training data
Transparency issues: the model cannot reveal its decision-making processes
Still a language model: cannot distinguish between fact and fiction
"Hallucinations" may generate false references
Summing up the implications of such potent platforms, Herbert Smith Freehills APAC head of Emerging Technology Susannah Wilkinson notes: “We are witnessing a staggering pace of change in the availability, capability and interoperability of generative AI tools. While we continue to explore the art of the possible, we need to stay alert to the limitations and identify when output may not be fit for purpose for a variety of reasons.”