On February 24, Meta released LLaMA and offered it to researchers at institutions, government agencies and nongovernmental organizations under a noncommercial license. The model was promptly leaked on the website 4chan, and users quickly reposted it on other sites. The community then began to tinker with the model, adapting it to widely available hardware – achieving feats like running the 65-billion-parameter model on a single Nvidia A100 GPU, or the 13-billion-parameter version on a MacBook Pro M2 with 64 gigabytes of RAM. Stanford researchers also created a LLaMA variant named Alpaca that could run on a Raspberry Pi or even a Pixel 6 smartphone.

The LLaMA leak, and the community's post-leak achievements, have raised questions about the advantages of openness in AI research and the democratization of these new technologies. Proponents argue that transparency helps advance the field through collaboration and provides ethical accountability for technologies that would otherwise remain largely inaccessible to the public due to hardware, industry expertise and technical knowledge requirements. The downside, as this leak demonstrates, is the theft and misuse of IP, such as algorithms and datasets, which can rapidly proliferate to bad actors given the speed at which the internet spreads information, datasets and software code. This discussion continues in light of the impressive feats achieved by the public, as well as the bad actors responsible for proliferating unlicensed content, in the wake of this "LLaMA drama."
