1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

There is no overarching statutory law or regulatory law specifically governing AI in Hong Kong.

As AI often involves the use of data sets containing individuals' personal data during problem solving and training, the Personal Data (Privacy) Ordinance (PDPO) applies to AI. Under the PDPO, a 'data user' is anyone who controls the collection, holding, processing or use of personal data. The user of an AI platform that collects, holds or processes personal data will therefore be regarded as a data user and will be subject to the six data protection principles stipulated by the PDPO. Anyone creating and operating AI that handles personal data must comply with these principles at all times.

The Hong Kong government has also published a number of guidelines to govern and facilitate the ethical use of AI technology, especially where personal data is involved. For example, the Office of the Privacy Commissioner for Personal Data (PCPD) has issued its Guidance on the Ethical Development and Use of Artificial Intelligence to facilitate the ethical and healthy development of AI in Hong Kong, while helping organisations – both private and public – to comply with the requirements of the PDPO in the process. In addition to outlining seven ethical principles, the guidance provides a four-part practice guide to facilitate the management of AI systems throughout their lifecycle.

Similarly, in August 2023, the Office of the Government Chief Information Officer published the Ethical Artificial Intelligence Framework, which organisations can reference for principles and practices when implementing IT projects and services. The framework outlines the ethical considerations in the planning, design and implementation of IT projects and services, and is also applicable to other organisations in general. The ethical AI principles set out in the framework include, among other things:

  • "transparency and interpretability";
  • "reliability, robustness and security";
  • "lawfulness and compliance"; and
  • "data privacy".

Apart from the PDPO, existing Hong Kong legislation and the common law also apply to certain aspects of the use of AI in Hong Kong. For example, under Sections 11 and 13 of the Copyright Ordinance (Cap 528), the person who creates a work is regarded as its 'author' and owns any copyright subsisting in it. However, under Section 11(3), the author of a 'computer-generated work', which includes AI-generated work, is "the person by whom the arrangements necessary for the creation of the work are undertaken". The use of the word 'person' makes it abundantly clear that computers and AI are not considered the 'authors' of the works that the technology generates, at least under copyright law in Hong Kong. It remains to be seen whether the regulations and laws of Hong Kong will evolve to accommodate and adapt to the increasing prevalence of AI and the issues (including those of privacy and copyright) arising therefrom.

1.2 How is established or 'background' law evolving to cover AI in your jurisdiction?

As far as statutory law is concerned, the position is that the PDPO applies to and regulates the use of personal data by AI.

As for common law, as a matter of principle, the Hong Kong courts will interpret legislation or common law principles liberally to address a novel scenario with reference to, for example, the legislative intent so as to avoid reaching an unreasonable conclusion. This principle also applies if a case involving AI is heard by the courts where the issue in question is not specifically governed by any legislation.

There have been no reported cases in Hong Kong in which AI has become the subject of contention.

However, in August 2023, six people were arrested by the Hong Kong police for creating doctored images using AI for loan scams targeting financial institutions. This is the first time that the police have discovered deepfake technology being used to deceive financial institutions. The suspects were detained on suspicion of conspiracy to defraud, contrary to common law. It remains to be seen, should the case proceed to litigation, whether the Hong Kong courts will finally indicate their stance on the regulation of AI (or the lack thereof).

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

Yes, any user of AI is expected to take reasonable care when using it. The Hong Kong courts adopt the principle in the English landmark case of Donoghue v Stevenson that the categories of negligence are never closed. As such, the Hong Kong courts will find that a user of AI owes a duty of care if the following elements are satisfied:

  • It is reasonably foreseeable to the defendant (ie, the user of AI) that the actions taken by the AI could injure the plaintiff;
  • Proximity exists between the defendant and the plaintiff;
  • The imposition of a duty of care is just and reasonable; and
  • The imposition of a duty of care is consistent with public policy considerations.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and 'escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

Yes, the general law is applicable by analogy in Hong Kong. Robots and mobile AI may cause nuisance or damage due to faulty design, or because the AI goes rogue and escapes from control.

(a) Faulty design

Our answer to question 1.3 on the law of negligence applies to robots or AI causing nuisance or damage due to faulty design.

(b) Robot goes rogue and escapes

The law of nuisance, as well as the doctrine in Rylands v Fletcher (ie, a person who keeps on their property something likely to do mischief if it escapes is liable for the natural consequences of its escape), applies where AI goes rogue and escapes control.

In Hong Kong, it is established law that nuisance extends to everyday matters such as cold air from air conditioning and water leakage. If a robot goes rogue, intrudes onto a neighbour's property and causes a nuisance there, or otherwise hampers the enjoyment of that property, there is no reason why the court would not hold the user of the AI liable in nuisance, as long as all of the requisite elements of nuisance (ie, a real interference with the convenience or comfort of living according to the standards of an average person) can be proved to the satisfaction of the court.

1.5 Do any special regimes apply in specific areas?

There are no special regimes in Hong Kong, except that the PCPD's Guidance on the Ethical Development and Use of Artificial Intelligence specifically applies to the collection and use of personal data by AI. Please see question 1.1.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

Hong Kong is a co-sponsor of the Declaration on Ethics and Data Protection in AI dated 23 October 2018. The declaration enshrines several general principles relating to the use and development of AI, including the following:

  • AI should be designed, developed and used to respect human rights and in accordance with fairness;
  • Continued attention and vigilance should be ensured in the development of AI;
  • The use of AI should be transparent;
  • AI should be designed and developed responsibly, applying principles of privacy by default and by design;
  • AI should be developed and used to empower individuals;
  • Use of AI should not result in biased or discriminatory results; and
  • Use of AI should be subject to governance principles.

Hong Kong is also a sponsor of the Resolution on Accountability in the Development and Use of AI, which was adopted in October 2020. The resolution sets out more specific principles regarding the development and use of AI.

While these instruments are not legally binding or enforceable, the principles that they espouse show where the Hong Kong government stands and provide guidance on the use and development of AI in Hong Kong. The principles mentioned in the above instruments have been incorporated into the Guidance on the Ethical Development and Use of Artificial Intelligence.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

There is currently no authority designated to monitor the use and development of AI in Hong Kong.

As far as the collection and use of personal data in the course of training or operating AI is concerned, the PCPD is the authority responsible for enforcing privacy law in Hong Kong. Among other things, the PCPD is empowered by the PDPO to:

  • monitor entities to ensure compliance with the PDPO;
  • conduct investigations if a breach of the PDPO is suspected;
  • inspect specific personal data systems;
  • issue and publish investigation reports;
  • issue an enforcement notice or warning to demand rectification by the parties in breach; and
  • refer certain cases to the police for further action.

Similarly, the Customs and Excise Department is responsible for enforcing the Copyright Ordinance; its responsibilities include investigating claims or allegations of copyright infringement arising from the use of generative AI. When investigating alleged infringements, the Customs and Excise Department has:

  • the power to confiscate suspected infringing copies (irrespective of whether a charge has yet been laid); and
  • extensive powers of search and seizure.

1.8 What is the general regulatory approach to AI in your jurisdiction?

There is no overarching legislation or regulation in Hong Kong governing AI, although the PDPO and the Guidance on the Ethical Development and Use of Artificial Intelligence regulate the use of personal data by AI. AI is mainly regulated by existing legislation (to the extent applicable) and by certain non-binding guidelines, including:

  • the Guidance on the Ethical Development and Use of Artificial Intelligence issued by the PCPD, which sets out the best practices on the development and use of AI;
  • the Ethical Artificial Intelligence Framework published by the Office of the Government Chief Information Officer to incorporate ethical elements in the planning, design and implementation of IT services and projects; and
  • the internal guidelines issued by the Office of the Government Chief Information Officer to government agencies on the use of AI.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

In Hong Kong, data analysis and process automation are perhaps the most commonly used AI applications.

AI is capable of processing huge amounts of data quickly. In the banking industry, for example, AI is used to:

  • identify suspicious transactions within the context of anti-money laundering assessments; and
  • give financial advice to clients based on an analysis of historical market data.

AI is also used to handle repetitive procedures. AI-powered chatbots help to handle queries from customers and provide recommendations in e-commerce and the banking industry.

2.2 What AI-based products and services are primarily offered?

In Hong Kong, AI-powered products and services are mainly used by businesses in the banking, e-commerce and educational sectors.

(a) Banking

Banks offer a range of services powered by AI. AI is incorporated into algorithms to conduct trading and provide financial advice to clients. AI is also used to automate repetitive operational processes, such as preparing and reviewing trade finance documentation, using deep learning-based techniques to extract relevant information from trade documents instantly. AI-powered chatbots are also widely used by banks to answer customer queries.

(b) E-commerce

In e-commerce, AI is used to power product recommendation tools, as well as chatbots to screen, triage or even handle consumer queries.

(c) Education

In February 2023, the University of Hong Kong (HKU) introduced an interim policy declaring that any unauthorised use of AI services such as ChatGPT and DALL-E in the submission of coursework would be treated as plagiarism. The Education University of Hong Kong and the Hong Kong University of Science and Technology followed suit, granting 'limited approval' to students' use of AI.

However, in August 2023, HKU lifted its ban. Students are now allowed to use AI without having to enable a virtual private network; and each student is given a monthly quota of 20 'prompts' on ChatGPT to guide them on coursework (or for other educational purposes), provided that they make appropriate declarations when they submit their work. HKU has also issued leaflets for teaching staff explaining the proper approach to the use of AI services in course assessments, while providing online and on-site technical support.

2.3 How are AI companies generally structured?

Most AI companies are private companies limited by shares. Under the Companies Ordinance (Cap 622), private companies limited by shares must, by their constitution (ie, the articles of association):

  • restrict the right of shareholders to transfer their shares;
  • prohibit any invitation to the public to subscribe for any shares; and
  • limit the number of members to 50.

The companies developing AI products and AI-powered services are commonly private start-ups. As of September 2023, there were 275 AI start-ups in Hong Kong. There are, however, exceptions, such as SenseTime (focused on computer vision and deep learning), which is a Cayman company headquartered and listed in Hong Kong.

2.4 How are AI companies generally financed?

As private start-ups, most AI companies in Hong Kong are funded by seed capital or venture capital. As their businesses mature, some start-ups may eventually go public, as in the case of SenseTime, which listed on the Hong Kong Stock Exchange in December 2021 (see question 2.3).

Funding support is also available from various government funds in Hong Kong. The Innovation and Technology Fund (ITF), for example, grants monetary support for different AI projects. In 2017, the Hong Kong government set up a HK$2 billion Innovation and Technology Venture Fund to co-invest in local start-ups with other private venture capital funds on a matching basis.

There are also private funds in the market which finance AI companies. In May 2018, Alibaba, SenseTime and the Hong Kong Science and Technology Parks Corporation launched the HKAI Lab, an accelerator programme that allows interested and qualified AI companies to apply for funding after they pass an evaluation by Alibaba and/or SenseTime.

2.5 To what extent is the state involved in the uptake and development of AI?

The Hong Kong government is supportive of the development of AI in Hong Kong, with a view to promoting Hong Kong as an Asian tech hub.

In terms of providing financial support, the Hong Kong government helps AI companies and projects in the following manner:

  • The ITF provides funding support for various types of research, including the development of an AI chatbot and a smart investment platform which provides investment options powered by AI.
  • In the 2022-2023 budget, the Hong Kong government committed to providing various kinds of financial support for R&D activities, including the setup of a Future Fund to invest in tech projects that are of strategic value to Hong Kong.
  • The Mainland-Hong Kong Joint Funding Scheme was launched to encourage and support R&D collaboration among research institutes, technology enterprises and universities in Hong Kong and the mainland. It invites proposals for applied R&D projects jointly conducted by mainland and Hong Kong institutions under three specific themes:
    • new materials;
    • biotechnology; and
    • AI.
  • The Fund for Innovative Technology-in-Education was launched by the University Grants Committee to enrich student learning experiences and foster a technologically responsible and digitally competent generation. On 28 June 2023, it was announced that the funding allocation would be HK$100 million.

Apart from providing financial support, the Hong Kong government has also committed to nurturing talent in the R&D field – for example, by creating scholarships and internship initiatives.

Existing infrastructure, such as the Science Park and Cyberport, also provides a friendly environment for AI companies to connect with other businesses and service providers.

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

As of September 2023, there were 22 AI healthcare start-ups in Hong Kong. Businesses in the healthcare industry are increasingly reliant on AI to help manage data – in particular, patients' data. In Hong Kong, the Electronic Health Record Sharing System (eHRSS) stores and shares patients' medical records among private and public healthcare providers. In addition to the Personal Data (Privacy) Ordinance (PDPO), the eHRSS Ordinance was enacted to:

  • regulate the relationship between data users and data subjects in a clinical setting; and
  • address the collection, sharing, use and protection of patients' health records.

Telemedicine is also growing in Hong Kong. Certain aspects of telemedicine feature the use of AI – for example, reviewing imaging scans. The Medical Council of Hong Kong has issued the Ethical Guidelines on Practice of Telemedicine, which state that only medical professionals registered in Hong Kong under the Medical Registration Ordinance can provide medical advice, which effectively prohibits the provision of medical advice generated by AI without supervision by qualified medical practitioners.

(b) Security and defence

AI has been used by law enforcement in Hong Kong for a number of years. For example, the Hong Kong police use different AI systems (eg, facial recognition) to help with law enforcement – such as by monitoring illegal parking through licence plate recognition.

The major concern arising from the use of AI by law enforcement is the mass mining of personal data – in particular, facial images and other biometric data. The Hong Kong government declared that data subjects should be granted the right to opt out of any decision made against them based solely on information processed by AI, including 'profiling', which is targeted at the detection of certain traits or characteristics.

Use of AI by law enforcement should also comply with the existing laws and regulations on law enforcement. For example, the Interception of Communications and Surveillance Ordinance requires law enforcement agencies to strike a balance between the right to privacy and law enforcement needs in interceptions or covert surveillance. Any use of AI for either of these must comply with the requirements set out in the Interception of Communications and Surveillance Ordinance.

(c) Autonomous vehicles

Autonomous vehicles are still at an early stage in Hong Kong and are not yet seen on the roads. Generally, vehicles that do not comply with the technical standards on design, construction and operation under the current vehicle registration and licensing framework cannot be driven on the road. Autonomous vehicles are currently not legally roadworthy, partly due to the risks posed by the unapproved auto-lane-changing and auto-steering technology that they use.

That said, the Transport Department has issued movement permits under the Road Traffic Ordinance and, in 2020, issued the Guidance Note on the Trials of Autonomous Vehicles to enable trials of autonomous vehicles in Hong Kong. In the Hong Kong Smart City Blueprint 2.0, issued in December 2020, one of the 'smart mobility initiatives' is to promote and facilitate the industry development and technological advancement of autonomous vehicles, with the objective of expediting the trial and use of autonomous vehicles in the city. However, it remains to be seen how the regulatory regime will keep pace with autonomous vehicles in the future – particularly regarding:

  • design and specification;
  • registration and licensing requirements; and
  • insurance policies.

On 2 December 2022, the Road Traffic (Amendment) (Autonomous Vehicles) Bill 2022 was gazetted to develop a new regulatory framework to facilitate the wider trial and use of autonomous vehicles in Hong Kong, primarily focusing on licensing regulations.

(d) Manufacturing

There is no specific legislation in Hong Kong that addresses manufacturing using AI. While goods sold in Hong Kong are generally governed by the Sale of Goods Ordinance and the Consumer Goods Safety Ordinance, depending on the type and nature of the goods in question, other legislation may also be applicable. For example, the Toys and Children's Products Safety Ordinance specifically regulates the safety of toys and goods for children.

Manufacturers in Hong Kong are also bound by the Factories and Industrial Undertakings Ordinance, which governs the health and safety of personnel in an industrial setting. The obligations imposed on manufacturers include, among others:

  • proper provision and maintenance of plant and work systems; and
  • provision of all necessary instructions, training and supervision to ensure health and safety.

These obligations also apply to those using AI in the manufacturing processes, which means that they need to provide proper training and instruction to staff on issues such as:

  • mechanical errors arising from the use of AI; and
  • the dangers that malfunctioning AI may pose.

Manufacturers must also monitor AI-powered manufacturing processes and inspect the end products to ensure that they meet statutory standards.

(e) Agriculture

Agriculture on a commercial scale is essentially non-existent in Hong Kong, and there is no regulatory intervention by the Hong Kong government in the use of AI in agriculture.

That said, the Hong Kong government is currently building an agricultural park covering over 7.5 hectares of land. It remains to be seen whether and how the potential of agritech ('agricultural technology') will be explored once the agricultural park is completed.

(f) Professional services

The banking and financial industries are the biggest users of AI in Hong Kong. AI is widely used in their daily operations and delivery of services to clients. AI is also applied in their risk management procedures, such as detecting money laundering activities and conducting due diligence on fund transfers.

The city's de facto central bank, the Hong Kong Monetary Authority (HKMA), has issued circulars to provide guidance on the use of AI in the banking and financial industries. The HKMA has warned that over-reliance on AI may compromise the proper validation expected of financial institutions. Financial institutions are reminded to ensure that their obligation to properly assess the financial capabilities of clients is not compromised by the use of AI; nor does the use of AI relieve them of liability for the consequences of their conduct.

(g) Public sector

As set out in questions 10.2 and 12.1, the Hong Kong government is pushing to incorporate technology (AI being a prominent part) into public services under the Smart City Blueprint for Hong Kong 2.0 initiative. One of the key highlights is to streamline government services by developing a hub that allows citizens to submit building plans for processing and to apply for licences online under the Be the Smart Regulator Programme and the Streamlining of Government Services Programme respectively.

During the COVID-19 pandemic, the Hong Kong government launched the StayHomeSafe and LeaveHomeSafe mobile apps, which used big data analytics and AI technology to ensure that members of the public complied with pandemic measures and laws. Members of the public raised concerns about the mass mining of personal data; but the Office of the Privacy Commissioner for Personal Data gave assurances that a balance had been struck between the privacy rights of patients and public health, and that these applications were in compliance with the PDPO.

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The data protection regime in Hong Kong is the Personal Data (Privacy) Ordinance (PDPO).

The PDPO is highly relevant to AI companies in general. The Office of the Privacy Commissioner for Personal Data (PCPD) has made it clear that the PDPO applies to the collection and use of personal data by AI. Please see questions 1.1 and 1.7 for further details.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

There is no separate cybersecurity regime in Hong Kong. Cybersecurity is currently addressed largely in the context of personal data protection under the PDPO and the Guidance on the Ethical Development and Use of Artificial Intelligence.

Of particular relevance to AI companies and applications is Data Protection Principle 4 of the PDPO, under which all data users must ensure that collected personal data is protected against unauthorised or accidental access or processing. This obligation applies even if an entity transfers personal data to a data processor for processing.

On 30 August 2022, the PCPD published a Guidance Note on Data Security Measures for Information and Communications Technology, which:

  • recommends data security measures to facilitate compliance with the PDPO; and
  • provides good practices to strengthen data security systems.

The guidance provides recommendations on data security measures in seven areas, as follows:

  • data governance and organisation;
  • risk assessments;
  • technical and operational security;
  • data processor management;
  • remedial actions in the event of data security incidents;
  • regular monitoring, evaluation and improvement of compliance with data security policies; and
  • recommended data security measures for:
    • cloud services;
    • 'bring your own devices'; and
    • portable storage devices.

While a failure to comply with the recommendations of the guidance note does not constitute a breach of the PDPO per se, organisations are strongly advised to follow them because the PCPD will assess compliance with the PDPO on the basis of the recommendations and guidance provided for in the guidance note.

In its 2021 Policy Address, the Hong Kong government announced that a consultation on new cybersecurity legislation would be held at the end of 2022. The Office of the Government Chief Information Officer (OGCIO) and the Innovation and Technology Bureau subsequently submitted a paper outlining more details on the new cybersecurity legislation. According to the paper, the new law aims to:

  • clearly define the cybersecurity responsibilities of operators of critical infrastructure; and
  • strengthen the data protection of the city's network systems.

The government has indicated that, in preparation for the new cybersecurity legislation, it will make reference to relevant legislation in other jurisdictions and focus on seven areas, as follows:

  • cementing preventative management of critical information infrastructure;
  • preparing a cybersecurity plan;
  • regularly conducting security assessments;
  • establishing an incident response plan;
  • conducting frequent drills;
  • strengthening resilience; and
  • establishing a prompt notification mechanism.

The Hong Kong Computer Emergency Response Team Coordination Centre has warned that the growing popularity of AI will make cyberattacks more sophisticated and more difficult to handle in the future. The planned introduction of new cybersecurity legislation reflects the government's commitment to taking a more proactive role in regulating the AI industry. It remains to be seen what AI companies and applications will be required to do once the new cybersecurity law is introduced.

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

The Consumer Council of Hong Kong has observed that small and medium-sized enterprises (SMEs) are likely to be disadvantaged by their limited access to data in comparison with big corporations. A study by the Consumer Council also found that the sheer volume of personal data collected by large tech companies means that they tend to have more effective and accurate AI systems, which consolidates their edge over SMEs in the market.

The Consumer Council suggested that the Hong Kong government should introduce policies to ensure that:

  • SMEs have more access to data to level the playing field; and
  • relevant corporate stakeholders and organisations can create a platform for SMEs to share and use data within the industry.

These suggestions have been followed up by the Business Facilitation Advisory Committee, which advises the Hong Kong government on business policies.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

From an employment perspective, AI impacts the local workforce in the following ways.

Impact on the workforce: As in other economies, AI competes with human labour and makes some jobs obsolete. Research shows that AI technologies will leave 800,000 people in the Hong Kong workforce out of work or looking for a new job by 2028. Positions such as clerks, administrative staff and customer service representatives are among the jobs most vulnerable to rapid replacement by AI.

One way to cope with the challenge posed by AI is to equip the workforce – especially the next generation – with the skills needed to make good use of AI. School curricula at the pre-secondary and secondary levels now focus on building a solid foundation in these skills among students. At the post-secondary level, there are university courses and placement programmes that train students to use AI.

AI in recruitment: More recruiters in Hong Kong are using AI to help in the recruitment process, which may make recruitment increasingly susceptible to AI-induced bias and discrimination. There are reports that AI recruitment tools have discriminated against female candidates because the historical data on which the systems were trained came mostly from men's CVs.

AI systems can be trained to be more gender neutral. Recruiters should also screen the results generated by AI and use their experience to adjust the recruitment decisions recommended by AI.

Employees' data: AI analyses personal data from job candidates in the recruitment process, as well as from employees. The Office of the Privacy Commissioner for Personal Data notes in the Guidance on the Ethical Development and Use of Artificial Intelligence that the use of AI in job recruitment poses a high risk to data subjects from a data protection perspective and stresses that human oversight is necessary.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

Data manipulation is likely to affect consumer rights and system security.

Consumer rights: The Consumer Council of Hong Kong is dissatisfied that businesses seldom inform consumers that recommendations are powered by AI, and that consumers are not offered the option to switch off these AI-generated recommendations.

Security loopholes: According to the Hong Kong Computer Emergency Response Team Coordination Centre, AI-powered tools are susceptible to membership inference attacks. A hacker may create an AI system similar to that operated by a business and feed it manufactured data resembling the private data used by the business. With sufficient training, the hacker's system may be able to map out the data utilised by the business's own AI system, thereby gaining access to private data.
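To illustrate the mechanics described above, the following is a minimal, purely conceptual sketch of a confidence-threshold membership inference test against a deliberately overfitted toy model. It is not drawn from any regulator's or HKCERT's material; the libraries, data, names and threshold are all assumptions chosen for the example.

```python
# Illustrative sketch only: a simple membership inference test.
# All data is synthetic and the threshold is an arbitrary assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the business's "private" data: the member half is
# used to train the target model; the non-member half is never seen by it.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The business's model, which overfits its training data and so "leaks"
# membership information through its confidence scores.
target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_member, y_member)

def guess_membership(model, X, y, threshold=0.9):
    """Attacker's rule: flag a record as a training member when the model is
    unusually confident in that record's true label."""
    confidence = model.predict_proba(X)[np.arange(len(y)), y]
    return confidence >= threshold

member_rate = guess_membership(target_model, X_member, y_member).mean()
nonmember_rate = guess_membership(target_model, X_nonmember, y_nonmember).mean()
print(f"Flagged as members: {member_rate:.2f} of true members vs "
      f"{nonmember_rate:.2f} of non-members")
```

A large gap between the two rates indicates that the model leaks information about which records were in its private training data, which is the essence of the risk described above.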

AI is also prone to attack by hackers – for example, through the injection of misleading data into an AI system to influence its behaviour.

8 AI best practice

8.1 There is currently a surfeit of 'best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

There is no AI 'best practice' as such in Hong Kong; but the Guidance on the Ethical Development and Use of Artificial Intelligence issued by the Office of the Privacy Commissioner for Personal Data (PCPD) does list the values and principles that it recommends AI companies adhere to, as follows:

  • accountability;
  • human oversight;
  • transparency and interpretability;
  • data privacy;
  • fairness;
  • beneficial AI; and
  • reliability, robustness and security.

The Ethical Artificial Intelligence Framework recommends 12 ethical AI principles for all AI projects. These can be broken down as follows:

  • performance principles:
    • transparency and interpretability; and
    • reliability, robustness and security; and
  • general principles:
    • fairness;
    • diversity and inclusion;
    • human oversight;
    • lawfulness and compliance;
    • data privacy;
    • safety;
    • accountability;
    • beneficial AI;
    • cooperation and openness; and
    • sustainability and just transition.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

The PCPD expands on the seven principles listed in question 8.1 as follows:

  • Accountability: Organisations should be held responsible for decisions and actions resulting from the use of AI.
  • Human oversight: Human oversight should be in place to ensure that AI is making appropriate decisions from the data set.
  • Transparency and interpretability: Data users should, at the time of collecting data, disclose their data usage, protection and privacy practices.
  • Data privacy: Data governance policy should be in place within the data user to protect the privacy and proper usage of the collected personal data.
  • Fairness: Humans should intervene whenever appropriate to ensure that the results generated by AI are fair.
  • Beneficial AI: The use of AI should benefit both the data user and the wider community. Preventative measures should be in place to limit harms and risks that AI may bring about.
  • Reliability, robustness and security: Preventative measures should be established to avoid security breaches or malfunctions caused by malware, hacking, data poisoning and the like.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

Data users should designate sufficient manpower with expertise in different areas (eg, computer science, law and public relations) to constantly monitor and review internal AI governance, policies and procedures. Adequate training should also be provided to staff on the ethics and values to which the data user is committed. Reviewing AI policy against the Self-assessment Checklist at Appendix A of the Guidance on the Ethical Development and Use of Artificial Intelligence issued by the PCPD would be a good starting point.

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

AI brings about both benefits and challenges. One of the concerns is that if a party chooses to perform its contractual obligations using AI, the results can be unanticipated.

The benefits of AI are obvious. For example, computers work around the clock and can digest large amounts of information quickly. Because computers are not prone to human error, their output is considered consistent and reliable.

On the flip side, results generated by AI can be unfair or faulty. A glitch at the input stage may have grave consequences. AI is also prone to hacking, so there are cybersecurity issues to consider. If a system goes down, the downtime can be extensive.

Contracting parties are advised to take these potential risks into account and to address these contingencies in the contract as far as possible, especially if it is anticipated that AI will be used by any party to perform its contractual obligations. The risks posed by the use of AI can hardly be eliminated; but better and more precise contractual language will help to mitigate, to a certain extent, the uncertainties and risks that may arise from the use of AI in the performance of a contract.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

Using AI does not exempt the company concerned from being liable for damage to property or personal injury caused by the AI. Contractually, if a party fails to perform in accordance with the contract due to mistakes and errors caused by an AI malfunction or breakdown, that party is still in breach of the contract.

One way to mitigate the risks is to take out insurance with sufficient coverage for liabilities arising from the use of AI. As explained in question 9.1, it may also be possible to address the risks by including language that specifically addresses the use of AI and its potential consequences in the contract.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

As a tool to filter, sort, classify and aggregate data, AI relies heavily on its input. It cannot tell whether the data pool is unbalanced or whether the criteria set by a software engineer are biased or discriminatory. Additional risks include misinformation, inaccuracy and harmful content from generative AI chatbots.

To mitigate these risks, human supervision should come into play to review the results generated by AI and check their validity and reliability against data input. Balanced regulations and security reviews would also provide more transparency on the data that AI systems are trained on.

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

AI innovation is protected in Hong Kong by the following instruments:

  • copyright;
  • patents;
  • trademarks;
  • registered designs; and
  • layout design (topography) of integrated circuits.

10.2 How is innovation in the AI space incentivised in your jurisdiction?

The Hong Kong government is committed to promoting the city as a hub of innovation and technology. There are a number of programmes in place to utilise the advantages and resources brought about by mainland China's technological developments with the aim of:

  • promoting research and development;
  • nurturing talent; and
  • supporting start-ups and fintech.

These initiatives and measures include:

  • implementing a 5G network covering over 90% of the city;
  • allowing universities and research institutions in Hong Kong to apply for mainland science and technology funding for research use in Hong Kong;
  • setting aside HK$10 billion to further promote the development of life and health technology;
  • setting up a dedicated fund to finance local universities and research institutes to participate in national research and development projects;
  • introducing the Innovation and Technology Scholarship, which subsidises university students to take part in overseas/mainland attachment programmes, local internships, mentorship programmes and so on;
  • establishing the Innovation and Technology Venture Fund to co-invest HK$2 billion on a matching basis with selected private venture capital funds in Hong Kong;
  • setting up a HK$5 billion Strategic Tech Fund to invest in enterprises with good development potential to enrich the IT ecosystem in Hong Kong;
  • setting up the Fintech Proof-of-Concept Subsidy Scheme to provide financial incentives for financial institutions to partner with fintech companies to conduct proof-of-concept projects; and
  • providing regulatory sandboxes through the Hong Kong Monetary Authority, the Securities and Futures Commission and the Insurance Authority.

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

The employment regime in Hong Kong is relatively straightforward. The parties are free to negotiate the terms of the contract subject to the law. The relevant laws that govern employment in Hong Kong include:

  • the Employment Ordinance;
  • the Sex Discrimination Ordinance;
  • the Disability Discrimination Ordinance;
  • the Family Status Discrimination Ordinance; and
  • the Race Discrimination Ordinance.

It is not uncommon for jobseekers and employees to provide personal data to employers, and AI may play a role in collecting such personal information. The Personal Data (Privacy) Ordinance governs the use of personal data and regulates the relationship between data subjects (jobseekers and employees) and data users (employers). The Office of the Privacy Commissioner for Personal Data has published the Code of Practice on Human Resource Management: Compliance Guide for Employers and Human Resource Management Practitioners to help employers to comply with the law.

For example, during the recruitment process, the employer should specify the purpose for which an applicant's personal data is collected (ie, for recruitment purposes only). The data collected should not be excessive and should be relevant to identifying and assessing candidates' suitability for the position. During employment, the employer should grant employees the right to access and correct their personal data. Information regarding an employee's disciplinary proceedings, performance appraisals and promotions:

  • should only be used directly for related purposes; and
  • should not be disclosed except where the third party requesting access has legitimate reasons to do so.

11.2 How can AI companies attract specialist talent from overseas where necessary?

Specialist talent can come to work in Hong Kong by applying for a visa under the General Employment Policy (GEP). The GEP requires applicants to have already secured employment in Hong Kong before application. The GEP also offers an entrepreneur track that allows any individual who plans to establish or join a business in Hong Kong to make an application.

Another avenue is an application for a visa under the Quality Migrant Admission Scheme (QMAS). Applicants under the QMAS are not required to have already secured employment in Hong Kong; but only applicants with particular skills or talents or with outstanding achievements (eg, winners of national or international awards) are qualified to apply. The QMAS is a point-based system, so factors including age, work experience and academic or professional qualifications are all relevant.

These two schemes are not targeted at talent in the AI industry in particular. Successful applicants will be granted a work visa subject to conditions, so any change in employment status or personal information must be notified to the Immigration Department.

The Hong Kong government has also rolled out the Technology Talent Admission Scheme (TechTAS), a hybrid of the GEP and the QMAS targeting non-local talent engaged in research and development in certain industries, including AI. The TechTAS is a fast-track scheme, but its quota is more limited.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

While AI is increasingly used by companies and organisations in Hong Kong, it is mainly used to power chatbots and marketing analytics to improve customer experience. A handful of shopping malls and commercial buildings have reportedly adopted AI for building management purposes, such as disinfecting and sanitising public areas. Autonomous vehicles are not yet roadworthy as such, but trials have been allowed. While the full potential of AI has yet to be explored in Hong Kong, the possible benefits it brings are apparent, with more businesses moving to integrate AI into daily operations to cut costs and increase efficiency. Banks are a good example of this, with AI being used to help with due diligence on account opening and with detecting money laundering activities.

Currently, the laws and regulations governing AI are minimal, and there is no overarching legislation regulating the use of AI. However, it is expected that Hong Kong will closely observe international advancements in this rapidly evolving field to ensure that it keeps pace with global developments. Some regulations and guidelines have already been issued, such as the Ethical Artificial Intelligence Framework published by the Office of the Government Chief Information Officer. The Hong Kong government is committed to the development of AI-related industries in the future. It is therefore anticipated that the government and the legislature will play a more active role in:

  • updating laws and regulations on the use of AI; and
  • safeguarding the interests of stakeholders and the community as a whole.

The general direction will hopefully become clearer after completion of the cybersecurity legislation consultation.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

At present, Hong Kong takes a light-touch approach to regulating AI and AI companies. Save for autonomous vehicles, which are not approved to run on the roads, the operations of AI companies are not specifically regulated as long as they comply with the laws of Hong Kong. Nothing restricts established overseas AI companies from launching businesses in Hong Kong. It is easy to incorporate a company and set up a business in Hong Kong, and the tax rates for corporations are low. There is a pool of talent specialised in science and technology from local universities, and current visa programmes facilitate the entry of foreign talent to work in Hong Kong. The cost of complying with the existing legal regime to launch an AI-related business in Hong Kong is relatively low compared with other jurisdictions.

That said, there are practical issues that an AI company looking to enter Hong Kong should consider, such as the high cost of living. Another is perhaps the unpredictability of the legal regime going forward. The Hong Kong government has yet to regulate the use of AI, but it is committed to taking a more proactive role in the near future. Hong Kong is traditionally a business-friendly market economy with minimal red tape, and this will remain the case for the foreseeable future; but it will be interesting to observe how the legal regime governing AI evolves after the introduction of the new cybersecurity law.

Jasmine Chan and Sincere Chen contributed to this Guide.