Researchers from three Chinese institutions, including two linked to the People’s Liberation Army’s Academy of Military Science, have developed an artificial intelligence tool for the military called ChatBIT using Meta’s open-source LLaMA large language model, Reuters reported.
According to the report, an academic paper published in June details how the researchers built on an earlier version of Meta’s LLaMA 2 13B model, incorporating their own parameters to create an AI-powered platform intended for “dialogue and question-answering tasks in the military field.”
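The paper’s training pipeline is not public, but adapting an open 13B-parameter checkpoint to a specialized question-answering domain is typically done by fine-tuning it on domain dialogue data. The sketch below shows one common approach, parameter-efficient LoRA fine-tuning with the Hugging Face transformers and peft libraries; the checkpoint name, hyperparameters, and sample record are illustrative assumptions, not details taken from the ChatBIT paper.

```python
# Hypothetical sketch of adapting an open LLaMA 2 13B checkpoint to a
# specialized dialogue domain with LoRA adapters. Nothing here reflects the
# ChatBIT authors' actual pipeline; the model ID, hyperparameters, and
# sample record are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-2-13b-hf"  # gated weights; require Meta's license approval

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)

# LoRA trains small low-rank adapter matrices on the attention projections
# instead of updating all 13B base parameters.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# One made-up dialogue record, tokenized for standard next-token training.
sample = "Question: Summarize the field report.\nAnswer: The report covers ..."
batch = tokenizer(sample, return_tensors="pt").to(model.device)

# Causal-LM loss: labels are the input ids, shifted internally by the model.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # a real run would loop over many batches with an optimizer
```

In practice a setup like this iterates over a full dataset with an optimizer and learning-rate schedule; the point is only that turning an open foundation model into a domain-specific dialogue tool requires comparatively modest additional training.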
Reuters said the researchers, who include Geng Guotong and Li Weiwei from the AMS’ Military Science Information Research Center and the National Innovation Institute of Defense Technology, along with others from the Beijing Institute of Technology and Minzu University, claim that ChatBIT outperforms other AI models and is nearly as capable as OpenAI’s GPT-4. However, they did not specify or confirm whether the model has been put into service.
Meta’s Rules
Meta released LLaMA, short for Large Language Model Meta AI, in February 2023 as a way of “democratizing access” to the technology. Released as an open platform, the compact model can be used by the research community as a foundation model for a variety of tasks and applications.
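As a rough illustration of the kind of access Meta described, the snippet below loads an open LLaMA 2 checkpoint for plain text generation through the Hugging Face transformers pipeline; the checkpoint name is an assumption, and downloading the official weights requires accepting Meta’s license terms.

```python
# Minimal sketch: using an open LLaMA 2 checkpoint as a general-purpose
# foundation model for text generation. The model ID is illustrative, and
# access to the official weights is gated behind Meta's license.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-13b-hf",
    device_map="auto",
)
out = generator("Foundation models can be adapted to", max_new_tokens=40)
print(out[0]["generated_text"])
```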
At the time, Meta said access to LLaMA “will be granted on a case-by-case basis to academic researchers, those affiliated with organizations in government, civil society and academia, and industry research laboratories” worldwide even as it urged AI stakeholders to commit to the responsible deployment of LLMs.
Molly Montgomery, Meta’s director of public policy, told Reuters that the People’s Liberation Army’s use of Meta’s AI models is unauthorized and violates the company’s LLaMA 2 acceptable use policy. Under that policy, LLaMA 2 must not be used for activities related to the military, warfare, nuclear industries or applications, espionage, weapons development, or inciting and promoting violence.
AI Oversight and Controls
The research community fears that powerful AI models could be used for nefarious activities, noting that such tools, widely available and free of government oversight, can process the vast amounts of data needed for a cyberattack or the development of a biological weapon.
The U.S. Department of Commerce plans to introduce new export restrictions on proprietary AI models, building on existing limits on AI chips, in an effort to prevent hostile nations from exploiting these advanced technologies.
Governments worldwide broadly agree on the need for caution and responsibility in military applications of AI. In September, about 60 nations endorsed a blueprint for action to govern AI’s use in the military domain, but China was among the roughly 30 countries that declined to back the document.