Rimon partner John Isaza was recently interviewed by AI Time Journal, sharing his insights on AI in the legal landscape. The full article is below.

"We thank John Isaza, Esq., FAI, partner at Rimon Law, who shared his story and valuable insights on various aspects, including the evolving legal landscape, the delicate balance between privacy protection and innovation, and the distinct legal implications that arise when integrating AI tools.

John provides valuable perspectives on the challenges and considerations related to AI technologies like ChatGPT, emphasizing the significance of data governance, ethical usage, and compliance with privacy laws. Furthermore, he shares his firsthand experience in launching Virgo, a cloud-based software solution designed to address the specific needs of privacy and information governance.

Exploring the Intersection of Law, Privacy, and Innovation: Insights and Strategies

What sparked your interest in the intersection of law and privacy/information governance?

I have always been drawn to uncharted territories to keep things interesting. I got my start in IG back in 2001, when records and information management was primarily focused on paper records practices. But I had a boss who encouraged me to focus on electronic records and data, which he saw as the wave of the future. So I decided to become an expert in all things electronic. This led me to various leadership positions, including Chair of the Information Governance Professional Certification Board. This Board was tasked with overseeing the records management industry's transition into the broader field of information governance, which includes privacy among other key disciplines such as data security, e-discovery, systems architecture, infrastructure, and traditional records management.

How do you stay updated with the latest developments and changes in privacy laws and regulations?

This is no small task. Trade organizations like ARMA, the ABA, and the IAPP are great resources to track the latest developments. As Chair of the ABA's Consumer Privacy and Data Analytics Subcommittee, I also have the benefit of tapping into the talents and experience of various legal professionals who are keenly interested in the topic. We collaborate often on publications and speaking engagements that force us to stay on top of the latest developments.

How do you approach the balance between privacy protection and enabling innovation in the digital age?

This is where my experience as an entrepreneur is most helpful. We have to balance strict and sometimes draconian regulatory measures against the realities of keeping the lights on and turning a profit. As a legal counselor, my job is to point out to clients their options and the consequences associated with each option. Ultimately, for clients, privacy compliance comes down to a risk-based decision, such as whether, and how large, a target they might have on their back based on their offering in the market.

Launching Virgo: Cloud Software Addressing Privacy and Legal Compliance

What motivated you to launch your cloud-based software, Virgo, and how does it address the needs of privacy and information governance?

Virgo tracks global legal requirements covering not only records retention but also privacy regulations, format, location, disposition, and statute-of-limitations requirements. These regulations are then mapped to the organization's records in what we call "big buckets," each assigned a specified retention period informed by the mapped regulations that apply to that bucket, in addition to best-practices considerations.

On the whole, Virgo manages the organization's records retention schedule, which is the first line of defense not only for e-discovery but also for justifying retention in the face of privacy deletion requests or the general privacy mandate to dispose of data when it is no longer needed.
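
To make the "big bucket" idea concrete, here is a minimal sketch in Python. The class names, regulations, and retention periods are hypothetical illustrations, not Virgo's actual data model; the point is simply that each bucket's retention period is derived from the strictest regulation mapped to it.

```python
from dataclasses import dataclass, field

@dataclass
class Regulation:
    citation: str          # e.g., a statute or rule identifier (hypothetical)
    retention_years: int   # minimum retention the rule requires

@dataclass
class Bucket:
    name: str
    regulations: list[Regulation] = field(default_factory=list)

    @property
    def retention_years(self) -> int:
        # Conservative rule of thumb: keep records as long as the
        # strictest mapped regulation demands (best practices may add margin).
        return max((r.retention_years for r in self.regulations), default=0)

payroll = Bucket("Payroll Records", [
    Regulation("Hypothetical Tax Rule A", retention_years=7),
    Regulation("Hypothetical Labor Rule B", retention_years=4),
])

print(payroll.name, "->", payroll.retention_years, "years")  # 7 years
```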

I co-founded Virgo when it became too unwieldy to manage the hundreds of thousands of regulations in this space while trying to map them to each organization's records. Interestingly, we managed to stay competitive against global law firms like Baker & McKenzie by leveraging translation tools that were the precursors to modern AI tools. Our research was not only better but also available at a fraction of the price that huge law firms might charge a client.

With your experience in launching a cloud-based software, Virgo, how did you navigate the legal and compliance aspects related to privacy and data protection?

Privacy and data protection became increasingly important as bigger organizations such as Boeing, Microsoft, and NASA subscribed to our software. Each came with strict data security compliance requirements, which forced us to adopt the strictest security standards and thereby made it easier to sell the software across the board. The first few were extremely painful, but it got much easier thereafter. Once you are compliant with the high-watermark requirements, navigating local or regional requirements becomes much easier.

The Legal Landscape with AI: Challenges and Trends

As an expert in electronic discovery, privacy, and information governance, how do you see the legal landscape evolving with the increasing use of AI technologies like ChatGPT?

The legal landscape is already starting to take shape, led by the European Union with its proposed AI Act. The AI Act lays out a good starting regulatory framework that foreshadows where other countries might go next in seeking to harness and put guardrails around the usage of AI. The reality is that AI providers will need to get used to navigating possibly conflicting regulatory mandates, which will lead to a risk-based approach similar to what I just described regarding privacy compliance.

Can you share some insights on the unique legal challenges and considerations that arise when integrating AI technologies into applications such as ChatGPT?

First, let's distinguish between public AI tools and private ones. Public AI tools (such as ChatGPT, Google's Bard, Microsoft's Bing, or DALL-E) pose the biggest challenges when it comes to the integrity of the data, as the sample data could be drawn from unvetted data mined over the years from the public internet. This raises concerns about not only the validity of the results but also potential copyright, trademark, and other legal liability issues. Public AI tools also present serious confidentiality challenges that organizations need to nip in the bud right away, via policies and training, to stop employees from entering private or confidential information into public AI tools that essentially ingest any data entered, keep it for good, and expose it to anyone in the world.
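
Policies and training are the primary controls here, but a simple technical backstop can reinforce them. Below is a minimal, hypothetical sketch of a pre-submission filter that screens prompts for obviously sensitive patterns before they reach a public AI tool; the patterns and the submit_to_public_ai function are illustrative assumptions, not a production data-loss-prevention solution.

```python
import re

# Illustrative patterns for obviously sensitive content. A real
# data-loss-prevention tool would be far more sophisticated.
SENSITIVE_PATTERNS = {
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential marking": re.compile(r"(?i)\b(confidential|trade secret)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def submit_to_public_ai(prompt: str) -> None:
    hits = screen_prompt(prompt)
    if hits:
        # Block the request and surface the policy reason to the user.
        raise ValueError(f"Prompt blocked; possible sensitive content: {hits}")
    # ... a hypothetical call to a public AI service would go here ...

print(screen_prompt("Our CONFIDENTIAL merger terms are ..."))  # ['confidential marking']
```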

The challenges for private AI tools lie primarily in using clean and accurate training data sets, so as to avoid the old "garbage in, garbage out" dilemma. In both cases, the tools need to be tested by humans to vet for biases that could lead to defamation or discrimination by the algorithm.

What are some key legal and regulatory frameworks that businesses should be aware of when implementing AI technologies like ChatGPT?

At present, there are not many regulatory frameworks other than the EU AI Act, which is still going through the approval process. New York City also has a law in place, but, as you can guess, much more is yet to come at the state and even federal level in the U.S.

For the time being, I would pay close attention to the EU AI Act, which, as I mentioned earlier, seems to offer a good starting framework to help set priorities for which AI usages are considered highly sensitive and therefore subject to tighter scrutiny.

Are there specific industry sectors or use cases where the legal implications of ChatGPT integration are more pronounced? If so, could you provide some examples?

Simply by looking at the EU AI Act, one can quickly discern the usages that will get the closest scrutiny. For instance, so-called "high-risk" AI system applications include critical infrastructure that could endanger citizens' lives or health, systems that could determine a person's access to education or career path through educational or occupational training, robot-assisted surgery, employment recruitment, credit scoring, criminal evidence evaluation, immigration, asylum, or border control determinations, and the application of the law to a given set of facts.

The AI Act also enumerates "limited risk" and "minimal risk" examples, in addition to "unacceptable risk" systems that would be banned outright, such as those that exploit human behavior. The devil will be in the details when it comes to enforcement but, as I have mentioned, this is the start of a framework for regulatory enforcement and therefore guidance.

In terms of data governance, what best practices do you recommend for organizations that are leveraging AI technologies to ensure compliance with data privacy laws and regulations?

Here is a checklist of what I have been recommending to organizations:

  • Track international laws aimed at putting controls around AI usage
  • Stay vigilant for errors in the data and for usage of protected IP, especially images and audiovisual works
  • Include anti-bias language obligations in any generative AI contract
  • Contractually obligate all vendors not to use AI without human fact-checking
  • Obtain commitments and contractual assurances about the legitimacy of training data use
  • Take care that AI output does not show biases that could trigger discrimination laws
  • Use Explainable AI (XAI) to understand a model's assumptions (see the sketch after this list)
  • Scrutinize the use of AI for employment decisions, credit evaluation, medical resource allocation, and incarceration in particular
  • Monitor generative AI models at both the training stage and the development of outputs
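
On the XAI point above: here is a minimal sketch of one simple explainability technique, permutation importance via scikit-learn. The "hiring" data and feature names are fabricated for illustration; the idea is to check whether a protected attribute is quietly driving a model's decisions.

```python
# Check which features drive a model's predictions; a high score for a
# protected attribute (here, "age") would be a red flag for bias review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["years_experience", "test_score", "age"]  # "age" is the protected attribute
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```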

In terms of internal usage, I also recommend:

  • Assess the current usage of AI within your organization
  • Determine the highest and best use of AI in your organization
  • Train staff and vendors not to use sensitive data with external/public AI tools
  • Create guardrails through policies around the use of AI, and revise existing policies that could intersect with AI
  • Review vendor agreements that could involve AI use
  • Assess changes to products or services or business models that could benefit from AI usage

How can businesses address potential biases and discrimination that may arise from AI models like ChatGPT to ensure fairness and avoid legal issues?

The best advice I would give here is to make sure there is human review of all input and output, especially if the output will be used for critical functions of the organization or published to the outside world. Be especially careful if the algorithms will be used to make hiring, promotion, pay increase, or termination decisions. Likewise, credit scoring, appraisals, or other usages that could impact a person financially should be vetted with extra care.
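
One widely used screen for adverse impact in hiring-type decisions is the EEOC's four-fifths rule: the selection rate for any group should be at least 80% of the rate for the most-favored group. Here is a minimal sketch of that check; the group labels and counts are hypothetical.

```python
# Four-fifths (80%) rule check for adverse impact in selection decisions.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

groups = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

best = max(groups.values())
for name, rate in groups.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```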

Ensuring Ethical and Legal Usage of AI-Powered Tools

How can businesses ensure that they are ethically and legally using AI-powered tools like ChatGPT while respecting user privacy and data protection laws?

In this space, as with previous hot technologies like email, social media, instant messaging, and texting, organizations need to put guardrails in place via policies, procedures, and training on employee usage. In terms of developing AI applications, policies, procedures, and guidelines also need to be implemented to ensure data hygiene at the input level and vetting of results at the output level.

Are there any specific legal or ethical guidelines that developers and product managers should follow when integrating ChatGPT into their applications, particularly in sensitive domains like healthcare or finance?

My earlier response on addressing biases and discrimination is a start here. Note also that ChatGPT is an example of a public tool, which means that any data fed into it essentially goes public. Therefore, the biggest legal or ethical concern with the usage of a public AI tool is the potential loss of confidentiality or trade secrets. If a public AI tool is incorporated into the business, be vigilant about protecting trade secrets and confidentiality.

Looking Ahead

What legal trends can be expected in AI and chatbot technologies, and how should organizations navigate these complexities?

I anticipate a flood of regulations in this space, so for the time being, stay on top of every proposed regulation or issued guideline. This will reveal the hot spots and whether your business model is likely to have a target on its back based on what is being regulated. Along those lines, pay close attention to what federal agencies like the Federal Trade Commission are saying or signaling on the topic."

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.