Critical questions still need to be addressed about the use of generative artificial intelligence (AI), so businesses and consumers keen to explore the technology must be mindful of potential risks.
Because the technology is still in its experimental stage, businesses will have to work out the implications of tapping generative AI, says Alex Toh, local principal for Baker McKenzie Wong & Leow’s IP and technology practice.
Key questions should be asked about whether such explorations remain safe, both legally and in terms of security, says Toh, who is a Certified Information Privacy Professional with the International Association of Privacy Professionals and a certified AI Ethics and Governance Professional with the Singapore Computer Society.
Amid the increased interest in generative AI, the tech lawyer has been fielding frequent questions from clients about copyright implications and policies they may need to implement should they use such tools.
One key area of concern, which is also heavily debated in other jurisdictions, including the US, EU and UK, is the legitimacy of taking and using data available online to train AI models. Another area of debate is whether creative works generated by AI models, such as poetry and painting, are protected by copyright, he tells ZDNET.
There are risks of trademark and copyright infringement if generative AI models create images that are similar to existing work, particularly when they are instructed to replicate someone else’s artwork.
Toh says organizations want to know the considerations they need to take into account if they explore the use of generative AI, or even AI in general, so the deployment and use of such tools do not lead to legal liabilities and related business risks.
He says organizations are putting in place policies, processes, and governance measures to reduce risks they may encounter. One client, for instance, asked about liabilities their company could face if a generative AI-powered product it offered malfunctioned.
Toh says companies that decide to use tools such as ChatGPT to support customer service via an automated chatbot, for example, will have to assess its ability to provide answers the public wants.
The lawyer suggests businesses carry out a risk analysis to identify potential risks and assess whether these can be managed. Humans should be tasked with making decisions before any action is taken, and taken out of the loop only when the organization determines the technology is mature enough and the associated risks of its use are low.
Such assessments should cover the use of prompts, a key factor in generative AI. Toh notes that different users can frame similar questions differently, and a business risks tarnishing its brand should its chatbot respond in kind to an aggressive customer.
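The human-in-the-loop gate Toh describes can be sketched in code. The following is a minimal, hypothetical illustration, not any vendor’s API: the marker list and the crude scoring are placeholders for whatever moderation a real deployment would use.

```python
# Minimal sketch of a human-in-the-loop gate for a customer-service chatbot.
# All names and thresholds here are hypothetical illustrations.

RISKY_MARKERS = {"refund denied", "your fault", "stupid", "useless"}

def risk_score(reply: str) -> float:
    """Crude proxy for brand risk: fraction of risky markers present in the reply."""
    text = reply.lower()
    hits = sum(1 for marker in RISKY_MARKERS if marker in text)
    return hits / len(RISKY_MARKERS)

def route_reply(reply: str, threshold: float = 0.0) -> str:
    """Send clearly low-risk replies automatically; escalate anything else to a person."""
    if risk_score(reply) > threshold:
        return "ESCALATE_TO_HUMAN"
    return "SEND"
```

The point of the sketch is the routing decision itself: until an organization judges the technology mature enough, any reply that trips the check is held for a human rather than sent automatically.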
Countries such as Singapore have put out frameworks to guide businesses across any sector in their AI adoption, with the main objective of creating a trustworthy ecosystem, Toh says. He adds that these frameworks should include principles that organizations can easily adopt.
In a recent written parliamentary reply on AI regulatory frameworks, Singapore’s Ministry of Communications and Information pointed to the need for “responsible” development and deployment. It said this approach would ensure a trusted and safe environment within which AI benefits can be reaped.
The ministry said it rolled out several tools to drive this approach, including a testing toolkit known as AI Verify to assess the responsible deployment of AI and the Model AI Governance Framework, which covers key ethical and governance issues in the deployment of AI applications. The ministry said organizations such as DBS Bank, Microsoft, HSBC, and Visa have adopted the governance framework.
The Personal Data Protection Commission, which oversees Singapore’s Personal Data Protection Act, is also working on advisory guidelines for the use of personal data in AI systems. These guidelines will be released under the Act within the year, according to the ministry.
The ministry will also continue to monitor AI developments and review the country’s regulatory approach, as well as its effectiveness, to “uphold trust and safety”.
Mind your own AI use
For now, while the landscape continues to evolve, both individuals and businesses should be mindful of the use of AI tools.
Organizations will need adequate processes in place to mitigate the risks, while the general public should better understand the technology and gain familiarity with it. Every new technology has its own nuances, Toh says.
Baker McKenzie does not allow the use of ChatGPT on its network due to concerns about client confidentiality. While personally identifiable information (PII) can be scrubbed before data is fed to an AI training model, questions remain over whether the underlying case details used in a machine-learning or generative AI platform can be queried and extracted. These uncertainties made prohibiting its use necessary to safeguard sensitive data.
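The PII scrubbing mentioned above is often done with pattern-based redaction before data reaches a training pipeline. A minimal sketch, assuming simple regex patterns for emails and phone numbers (real pipelines rely on far more robust detectors, such as NER models and checksum validation):

```python
import re

# Hypothetical illustration of pattern-based PII redaction before training.
# These two patterns are deliberately simple and will miss many real-world formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace each matched PII span with a typed placeholder like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

As the article notes, though, redaction of this kind does not settle the harder question of whether underlying case details can later be queried back out of a trained model.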
The law firm, however, is keen to explore the general use of AI to better support its lawyers’ work. An AI learning unit within the firm is working on research into potential initiatives and how AI can be applied within the workforce, Toh says.
Asked how consumers should ensure their data is safe with businesses as AI adoption grows, he says there is usually legal recourse in cases of infringement, but notes that it’s more important that individuals focus on how they curate their digital engagement.
Consumers should choose trusted brands that invest in being responsible for their customer data and its use in AI deployments. Pointing to Singapore’s AI framework, Toh says that its core principles revolve around transparency and explainability, which are critical to establishing consumer trust in the products they use.
The public’s ability to manage their own risks will probably be essential, especially as laws struggle to catch up with the pace of technology.
AI, for instance, is accelerating at “warp speed” without proper regulation, notes Cyrus Vance Jr., a partner at Baker McKenzie’s North America litigation and government enforcement practice, as well as global investigations, compliance, and ethics practice. He highlights the need for public safety to move along with the development of the technology.
“We didn’t regulate tech in the 1990s and [we’re] still not regulating today,” Vance says, citing ChatGPT and AI as the latest examples.
The increased interest in ChatGPT has triggered tensions in the EU and UK, particularly from a privacy perspective, says Paul Glass, Baker McKenzie’s head of cybersecurity in the UK and part of the law firm’s data protection team.
The EU and UK are currently debating how the technology should be regulated, and whether new laws are needed or existing ones should be expanded, Glass says.
He also points to other associated risks, including copyright infringement and cyber threats; ChatGPT has already been used to create malware.
Countries such as China and the US are also assessing and seeking public feedback on legislation governing the use of AI. The Chinese government last month released a new draft regulation that it said was necessary to ensure the safe development of generative AI technologies, including ChatGPT.
Just this week, Geoffrey Hinton, often called the ‘Godfather of AI’, said he had left his role at Google so he could speak more freely about the risks of the technology he helped to develop. Hinton designed machine-learning algorithms and contributed to neural network research.
Elaborating on his concerns about AI, Hinton told BBC: “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”