Last week, the European Parliament approved the Artificial Intelligence Act, the first legislation of its kind in the world. As South African companies continue to adopt Artificial Intelligence ("AI") at a rapid pace, the natural question arises as to whether, and when, South Africa will adopt AI legislation. South Africa currently has no explicit legal framework for AI.

AI can pose different types of risk, depending on the technology deployed. The lack of AI regulation leaves numerous risks and considerations unaddressed, including:

  • Use Cases and Risk Categorisation – In the EU, four risk-based categories have been created: Unacceptable Risk (practices which are banned outright), High Risk (subject to strict regulatory requirements), Limited Risk (subject to transparency obligations) and Minimal Risk (largely unregulated).
  • Use Cases and Risk Management – Not all AI can be treated in the same way when it comes to AI risk management. By way of example, a company considering the use of a popular generative AI tool like ChatGPT faces different risks from a company looking to embed AI in its core operations (whether internal or client-facing) or to develop bespoke AI systems.
  • Data Privacy – AI introduces a new set of privacy issues. Data breaches are a common challenge that predates the emergence of AI; however, advanced and robust AI models have the capacity to unmask anonymised data through inference (i.e. deducing identities from behavioural patterns). AI may therefore leak sensitive data directly or by inference.
  • Cybersecurity – While traditional cyber threats arising from human error and software failure still exist, AI presents an increased scope for attack. New threats emerge from the manipulation of data using AI and from the exploitation of the inherent limitations of AI algorithms.
  • Explainability and Complexity of AI – Explaining the outcomes of AI systems is a challenge, especially in the financial sector. Because they are complicated and multifaceted, AI models are often referred to as "black boxes". This opacity hinders the ability to assess the appropriateness of AI decisions, exposing organisations to vulnerabilities such as biased data, incorrect decision-making and unsuitable modelling techniques.
  • Embedded Bias – Embedded bias arises where computer systems systematically and unfairly discriminate against certain people in favour of others. Customer classification processes used in AI can produce such bias, including in the financial sector.
  • Intellectual Property – Globally, lawmakers and courts continue to grapple with issues around AI inventions and IP ownership, and organisations themselves often struggle to allocate ownership of AI developments and AI output.
  • Hallucinations – Generative AI is known to "hallucinate", for example by generating fictitious case law or other false information.
  • Prompt Hacking – Prompt hacking, in which users craft inputs to circumvent an AI tool's safeguards, may result in illegal behaviour being perpetrated using company resources. For example, an AI tool can be tricked into providing advice on how to dispose of a dead body or how to bypass certain security protocols.

Regulating AI is therefore crucial to ensuring that AI achieves outcomes that are in the interests of society. While AI remains largely unregulated in South Africa, existing legislation such as the Protection of Personal Information Act, 2013 already regulates some activities conducted by organisations using AI, by prohibiting the unlawful processing of personal information.

Despite the current lack of regulation, there are positive indications that South Africa aims to be a competitive player in the global AI space. In November 2022, the Department of Communications and Digital Technologies ("DCDT") launched the Artificial Intelligence Institute of South Africa, together with AI hubs at the University of Johannesburg and the Tshwane University of Technology.

South Africa's ambition to be a player in the global AI space necessitates a regulatory regime that can address the robustness of AI and the possible threats that it may pose to individuals and organisations.

Until such time as regulation comes into force, companies can and should self-regulate by adopting responsible AI measures. These include a combination of training, policies and procedures, ethics and governance structures, AI ethics assessments and proper contracting for AI services and products. Our specialist team at ENS has created a "Responsible AI Toolkit" which aims to guide organisations in implementing AI responsibly and practically, depending on their AI use cases.

It is important to note that Boards in South Africa remain accountable for IT governance. Allowing AI to be used and implemented in an organisation without any governance mechanisms will, in the absence of regulation, create not only legal risk but also reputational, commercial and financial risk and, in some cases, embarrassment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.