
OpenAI updates policy on “military and warfare”

  • An update to its usage policies made on 10th January shows OpenAI has seemingly softened its stance on the use of its technology for military and warfare applications.
  • The change in wording suggests military agencies may be taking a deeper interest in AI moving forward.
  • The company still warns against the use of its technology to develop weapons.

Following some instability last year thanks to a leadership shake-up, operations at OpenAI seem to be back to normal.

The company launched its online GPT store for developers last week, but in a more concerning development, it has also updated a policy in a way that has many worried its technology could be used by military agencies down the line.

As spotted by The Intercept (paywalled), the company updated its usage policies on 10th January.

The change that has everyone’s attention revolves around the company’s Universal Policies, specifically the section outlining what is prohibited when it comes to using OpenAI technology to develop weapons.

“Don’t use our service to harm yourself or others – for example, don’t use our services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system,” it explains.

As Engadget points out, there is a crucial omission here: the mention of “military and warfare” has been removed from the Universal Policies.

We have yet to see Large Language Models (LLMs) officially used in the development of weapons or deployed by a military agency, but as we saw during CES 2024 earlier this month, the proliferation of AI into every aspect of technology is becoming unavoidable.

We could therefore see military agencies make use of it, with the ongoing Israel-Hamas conflict being highlighted as one arena where technology from OpenAI and its peers could be put to use.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told The Intercept.

At the time of writing, OpenAI still appears intent on ensuring its tech is not used in a nefarious way, but when it comes to national security, particularly in the United States, things get a little murkier.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission,” a spokesperson told Engadget in an official statement.

“For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions,” they concluded.

Given this recent change and its potential implications, it will be interesting to see how regulators react, given that AI in general is now under the microscope at a government level.

[Image – Photo by Andrew Neel on Unsplash]
