Responsible AI: embracing generative artificial intelligence technologies – a brief guide for organisations

30 May 2023
by Ridwaan Boda and Alexander Powell

With the boom of ChatGPT and similar technologies (“generative AI”), companies are questioning whether to allow their staff to utilise generative AI for company purposes and, if so, how to regulate such usage in order to minimise legal risk, especially in the absence of regulation in most jurisdictions.

Whilst some companies have adopted the view that they will not allow staff to utilise generative AI (a stance which is itself risky), companies looking to leverage the benefits of generative AI in a responsible manner should institute a number of measures. Before we expand on what those measures should be, we start with a short discussion of the risks around the usage of generative AI.

Risks of generative AI technologies

ChatGPT and other generative AI technologies expose companies to a myriad of risks, including:

  • Corporate governance and accountability – in South Africa, King IV places an imperative on company boards to ensure sound information governance and sound data governance. The adoption of generative AI technologies should therefore be led by the board rather than left to unchecked employee usage. This also ties in with the condition of accountability under the Protection of Personal Information Act, 2013;
  • Confidentiality – as the majority of AI technologies are owned by third parties, a company risks its employees disclosing confidential information or trade secrets to unauthorised parties;
  • Cybersecurity – if any confidential information is disclosed, it will be stored in third-party databases. If a hacker breaches such a database, there is a risk that the company’s sensitive information could be unlawfully accessed;
  • Data privacy and protection – employees may upload personal or otherwise sensitive information when accessing AI technologies; a company should therefore list the categories of confidential or sensitive information which employees may not upload or use (a simple technical safeguard along these lines is sketched after this list);
  • Intellectual property – a company should clearly indicate its ownership over its data and over outputs generated by generative AI technologies (provided that it is the company, and not a third party, that is entitled to ownership of such outputs);
  • Regulatory compliance – a company needs to ensure that its storage and processing of data or personal information is in accordance with applicable laws;
  • Liability – the use of AI could give rise to claims from a number of sources including clients, users, third parties, and even regulators;
  • Contracting – as companies would inevitably rely on third party service providers to provide skills as well as tools, companies should ensure that agreements with such service providers do not include any exclusion of liability and/or restrictive liability provisions;
  • Data bias and discrimination – any biases in the company’s data could lead to an AI reinforcing stereotypes, discriminating against people, or creating exclusionary norms;
  • Outdated or inaccurate information and misinformation – a company relying on generative AI runs the risk that the information used by the AI is outdated or inaccurate, which could lead to incorrect responses being generated. Companies also face the risk of their AI being a target of misinformation campaigns; and
  • Unqualified advice – if employees use generative AI to generate advice and provide such advice to clients without review, it could lead to a situation where advice has been given by an unauthorised entity.
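For companies that do permit generative AI usage, the confidentiality and data privacy risks above can be partially addressed at a technical level. The following is a minimal, illustrative Python sketch of a hypothetical pre-submission check that screens employee prompts against a company-defined list of prohibited categories before anything reaches a third-party AI service; the category names, patterns and function names are our own assumptions for illustration, not a prescribed implementation.

```python
import re

# Hypothetical, company-defined patterns for categories of information
# that employees may not upload to third-party generative AI tools.
PROHIBITED_PATTERNS = {
    "South African ID number": re.compile(r"\b\d{13}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal classification label": re.compile(r"(?i)\b(confidential|trade secret)\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the prohibited categories detected in an employee's prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai_service(prompt: str) -> str:
    """Gate a prompt behind the company's AI usage policy before submission."""
    violations = screen_prompt(prompt)
    if violations:
        raise PermissionError(
            "Prompt blocked under company AI policy: " + ", ".join(violations))
    return "(response from the third-party generative AI service)"  # placeholder call

try:
    submit_to_ai_service("Draft a letter to the client with ID number 8001015009087")
except PermissionError as err:
    print(err)  # -> Prompt blocked under company AI policy: South African ID number
```

In practice, a check of this kind would sit alongside, rather than replace, contractual and policy controls, and would typically be implemented through an enterprise data loss prevention tool.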

What if my organisation does nothing?

If a company’s stance on AI in the workplace is prohibitive or silent, it could lead to a situation of shadow IT. Shadow IT is an organisational challenge in which employees adopt technology that has not been implemented or deployed by the company. If companies ban ChatGPT and other generative AI technologies, employees could resort to covertly utilising such technologies. This would further compound the company’s risk, as it would be unable to regulate or even monitor employee usage of AI. It is recommended that companies adopt a policy to address and regulate AI use in the workplace in order to mitigate some of the abovementioned issues.

What interventions can a company institute?

Whilst there is no one-size-fits-all approach when it comes to the type of interventions to be instituted, as this would largely depend on the scope of generative AI usage in the company’s operations, some of our suggestions include:

  • Governance: the board needs to ensure that proper structures are put in place, and safeguards employed, to ensure the adoption of Responsible AI. These may include establishing centres of excellence, dedicated task teams and/or other structures whose focus is ensuring that AI is adopted in a responsible manner, in keeping with the values and culture of the company, and in order to mitigate legal, technical and financial risk;
  • Policy implementation: a sound policy for the adoption of Responsible AI needs to be implemented. This would include not only mechanisms to mitigate legal, technical and financial risk, but also measures to ensure that ethical boundaries are established based on the company’s own value system;
  • Training: companies should ensure that staff are trained at various levels and that training is adapted to the roles staff members undertake as part of the company’s AI initiatives. For example: (i) legal and technical teams should undergo training not only on the legal and technical risks of AI adoption, but also on AI ethics and financial risks; and (ii) the board of directors should be trained on both ethical and legal considerations in order to establish a culture of Responsible AI;
  • Contracting: as companies would rely on third-party service providers in order to deploy AI solutions, companies should establish sound contracting standards to mitigate the risk of a supplier providing tools and/or solutions which give rise to claims while the supplier escapes liability through restrictive liability provisions. Further, the usual due diligence in supplier selection should also be applied;
  • Ethical impact assessments: although not mandatory, these are a useful tool to ensure that any projects undertaken, or AI adopted, comply with the company’s policies and applicable laws;
  • Ethical reviews: as part of this, companies may wish to establish a distinct AI ethics review board, which would also approve projects based on the ethical impact assessments undertaken;
  • Pioneering industry initiatives or codes of conduct: leading companies may wish to drive the adoption of industry-accepted codes of conduct, including obtaining approvals from regulatory authorities such as the Information Regulator; and
  • Auditing and monitoring: as with any compliance initiative, boards should ensure that proper resources are dedicated to ensuring compliance with the interventions adopted, as well as to dealing with violations of company policies (one illustrative monitoring approach is sketched below).
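To make the auditing and monitoring intervention more concrete, the following sketch (again in Python, with function and field names that are our own illustrative assumptions) shows one possible approach: routing employee AI usage through an internal gateway that records who used which tool, and when, so that the compliance function can review usage against company policy.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit trail; a real deployment would write to a
# tamper-evident store reviewed by the compliance function.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_usage_audit")

def log_ai_usage(employee_id: str, tool: str, prompt: str, approved: bool) -> None:
    """Record a single generative AI interaction for later compliance review."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,
        "tool": tool,
        # Log the size rather than the content of the prompt, so the audit
        # trail does not itself become a store of sensitive information.
        "prompt_chars": len(prompt),
        "policy_approved": approved,
    }))

log_ai_usage("emp-0042", "ChatGPT", "Summarise this public press release", approved=True)
```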

Regardless of whether a company deploys or utilises AI technologies in the workplace, it should ensure that it has adopted Responsible AI interventions and that such interventions are led from the very top.

The adoption of Responsible AI comes with several complexities, and expert guidance is often crucial in this process. In order to assist clients in fast-tracking the adoption of Responsible AI, our expert TMT team have developed an AI Toolkit. We would strongly urge companies to engage with us to ensure that AI adoption is done in a responsible manner and that company risks are mitigated. For more information on our AI Toolkit, please contact:

Ridwaan Boda
Executive | Corporate Commercial
rboda@ENSafrica.com

Alexander Powell
Candidate Legal Practitioner | Corporate Commercial
apowell@ENSafrica.com