
U.S. Intelligence and Artificial Intelligence: What Binds Them Together

American intelligence agencies are struggling to come to grips with the AI revolution, convinced that if they fail to embrace it they will be crushed by the exponential growth of data as sensor-driven surveillance technology covers ever more of the planet. But government officials also recognize that the technology is still young and fragile, and that generative AI (predictive models trained on huge data sets to produce text, images, video and human-like conversation on demand) is ill-suited to a high-stakes trade in which deception is rampant.

Analysts need "sophisticated artificial intelligence models capable of assimilating large amounts of open-source and covertly collected information," CIA Director William Burns recently wrote in Foreign Affairs. But that will not be easy.

Nand Mulchandani, the CIA's chief technology officer, notes that artificial intelligence models are prone to hallucination and are therefore best treated like a "crazy, drunk friend". There are also security and privacy concerns: adversaries could steal or poison the models, and they may contain sensitive personal data that officers are not authorized to see. None of that has stopped the experimentation, though most of it happens behind closed doors.

Osiris

Thousands of analysts across the 18 U.S. intelligence agencies use a CIA-developed generative artificial intelligence tool called Osiris. Osiris works with unclassified, publicly available or commercially available data, so-called open-source data. It produces annotated summaries, and a chatbot feature lets analysts drill deeper with follow-up queries.
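
The article does not describe how Osiris is built. Purely as an illustration of the workflow described above (summarize open-source documents, then let an analyst ask follow-up questions), here is a minimal Python sketch; the OpenAI client, the model name and the prompts are assumptions chosen for the example, not details of the CIA's system.

```python
# Hypothetical sketch of an "annotated summary + follow-up chat" workflow over
# open-source documents. Model choice, prompts and structure are assumptions;
# nothing here reflects the actual Osiris system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(documents: list[str]) -> str:
    """Produce a summary of a batch of open-source documents, citing each one."""
    joined = "\n\n".join(f"[doc {i}] {d}" for i, d in enumerate(documents))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the documents and cite them as [doc N]."},
            {"role": "user", "content": joined},
        ],
    )
    return resp.choices[0].message.content

def follow_up(summary: str, question: str) -> str:
    """Let an analyst drill down into the summary with a follow-up question."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer only from the summary; say if you are unsure."},
            {"role": "user", "content": f"Summary:\n{summary}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```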

What is Osiris AI?

Mulchandani said the system uses artificial intelligence models from various commercial vendors, though he declined to name them, and he would not say whether the CIA uses generative AI on its classified networks. "This is still in the early stages, and our analysts need to be able to determine with absolute certainty where the information comes from," he said. According to Mulchandani, the CIA has tested all the major artificial intelligence models but has not committed to any one of them. In his view, generative AI works well as a virtual assistant looking for "one needle in a stack of needles". But it will never replace human analysts, officials insist.

The most powerful intelligence application

According to Anshu Roy, CEO of Rhombus Power, the most powerful intelligence application of AI will be predictive analytics: the ability to anticipate an adversary's behavior, one of the biggest paradigm shifts in the entire field of national security. Rhombus' artificial intelligence draws on more than 5,000 data streams in 250 languages collected over a decade, including global news sources, satellite imagery and data from cyberspace, all of it open source. "We can track people, we can track objects," says Roy.
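
Rhombus has not published its methods. As a toy illustration of what "predictive analytics" over aggregated data streams can mean in practice, the sketch below fits a simple classifier on invented historical event features and scores a new situation; the features, the numbers and the choice of scikit-learn's LogisticRegression are all assumptions made for the example.

```python
# Toy illustration of predictive analytics over aggregated event streams:
# fit a classifier on historical activity features, then score future risk.
# All data and feature names are invented; this is not Rhombus Power's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [troop_movement_reports, logistics_signals, negative_media_tone]
X_hist = np.array([[2, 1, 0.1], [8, 6, 0.7], [1, 0, 0.2], [9, 7, 0.9]])
y_hist = np.array([0, 1, 0, 1])  # 1 = escalation followed within 30 days

model = LogisticRegression().fit(X_hist, y_hist)
print(model.predict_proba([[7, 5, 0.8]])[0, 1])  # escalation-risk score for a new week
```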

Top Secret Networks

On May 7, Microsoft announced that it would offer OpenAI's GPT-4 on its top-secret network.

What does Primer AI do?

Competing company Primer AI counts two unnamed intelligence agencies, as well as military services, among its customers, according to documents posted online for a recent military AI workshop. The company offers AI-assisted search in 100 languages to "detect new alerts about emergency events" from sources such as Twitter, Telegram, Reddit and Discord, and to identify "key people, organizations and places". Primer also advertises targeting-related capabilities. During a demonstration at a military conference days after Hamas attacked Israel on Oct. 7, a company executive explained how its technology separates fact from fiction in the flood of online information from the Middle East.
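
Primer's internals are not public. The snippet below shows, in generic form, what "identifying key people, organizations and places" in a social-media post looks like with an off-the-shelf named-entity-recognition library; spaCy and the sample post are stand-ins chosen for the example, not the company's actual stack.

```python
# Minimal illustration of extracting people, organizations and places from
# social-media text with spaCy's off-the-shelf NER. Generic sketch only;
# the sample post is invented and this is not Primer's product.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

post = "Reports from Reuters say protests spread to Beirut; Hezbollah denied involvement."
doc = nlp(post)
for ent in doc.ents:
    if ent.label_ in {"PERSON", "ORG", "GPE", "LOC"}:
        print(ent.text, ent.label_)
```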

Risks of implementing AI

In the short term, how U.S. intelligence professionals use generative artificial intelligence may matter less than countering how adversaries use it. While Silicon Valley pushes the technology forward, the White House worries that AI models used by U.S. agencies could be infiltrated and poisoned. Another concern is protecting the privacy of Americans whose data may end up embedded in large language models.

"Ask any researcher or developer training a large language model whether it is possible to remove a single piece of information from the model, make it forget it, and get a reliable empirical guarantee that it has forgotten it - it can't be done," John Beieler, the top artificial intelligence official at the Office of the Director of National Intelligence, said in an interview. That is one reason the intelligence community is not rushing its AI deployments. "We don't want a world where we deploy all of these things so quickly that two or three years later we discover information, effects and new behaviors that we didn't anticipate," Beieler said. A government agency whose AI ended up being used to research bioweapons or cyberweapons technology, for example, would be exactly that kind of problem.
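
Beieler's point, that there is no reliable way to surgically delete one fact from a trained model, can be seen by contrasting it with deletion from an ordinary data store, as in the sketch below; the record and the commented-out training step are invented for illustration.

```python
# Why "forgetting" differs between a retrieval store and a trained model:
# deleting a record from a store is a defined, verifiable operation; there is
# no comparable operation on weights that encode the same fact diffusely.
documents = {"rec-1": "Jane Doe lives at 12 Elm St."}  # invented example record

# Retrieval store: removal is exact and checkable.
del documents["rec-1"]
assert "rec-1" not in documents

# Trained model: the same fact would be spread across millions of parameters.
# weights = train(corpus_including_rec_1)
# There is no `del weights["rec-1"]`; "unlearning" methods exist, but they offer
# no reliable empirical guarantee that the information is gone.
```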

William Hartung, a senior fellow at the Quincy Institute for Responsible Statecraft, says intelligence agencies must be mindful of unintended consequences such as unlawful surveillance and increased civilian casualties in conflicts, and that AI deployments should be carefully evaluated for potential misuse. The warning comes against a backdrop of repeated military and intelligence claims about "wonder weapons" and revolutionary approaches that failed to deliver, from the electronic battlefield in Vietnam to the Star Wars program of the 1980s and the "revolution in military affairs" of the 1990s and 2000s. Government officials say they understand these concerns. Moreover, AI tasks vary widely across agencies, so there is no one-size-fits-all solution.

Request for a New Type of AI

In December, the National Geospatial-Intelligence Agency (NGA) released a request for proposals for an entirely new type of generative AI model. The goal is to harvest accurate geospatial information from simple voice or text prompts, using imagery collected from satellites and from the ground. Today's generative models don't map roads or railroads and "don't understand the basics of geography," Mark Munsell, NGA's director of innovation, said in an interview.
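
The solicitation itself is not reproduced here. As a very rough sketch of the capability NGA describes (a plain-language prompt turned into a structured geospatial query over an imagery catalog), the Python stub below is entirely hypothetical; the GeoQuery fields, the hard-coded parse and the example place names are invented.

```python
# Hypothetical skeleton: turn a plain-text prompt into a structured geospatial
# query over an imagery catalog. Everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class GeoQuery:
    place: str
    feature: str           # e.g. "cranes", "rail yards"
    time_window_days: int

def parse_prompt(prompt: str) -> GeoQuery:
    # A real system would use a language model for this step; here it is stubbed.
    return GeoQuery(place="Port of Odesa", feature="cranes", time_window_days=30)

def run(query: GeoQuery) -> list[str]:
    # Would search an imagery catalog and return detections with coordinates.
    return [f"{query.feature} near {query.place}: 4 detections in the last {query.time_window_days} days"]

print(run(parse_prompt("How many cranes are active at the Port of Odesa this month?")))
```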

At a conference in Arlington, Virginia, in April, Munsell said the U.S. government currently models and maps only about 3 percent of the planet. AI is a natural fit in cyber conflict, where attackers and defenders are locked in constant combat and automation is already routine. But many important intelligence operations have nothing to do with data science, says Zachary Tyson Brown, a former defense intelligence official. He believes intelligence agencies will run into trouble if they adopt generative AI too quickly or too completely. The models don't reason; they only make predictions, and even their developers can't fully explain how they work. That, he argues, makes them a poor tool for competing against rivals who are masters of deception.

Brown recently wrote in an internal CIA journal, "Intelligence analysis is like the old adage about putting together a jigsaw puzzle, but someone else is always trying to steal your pieces and put completely different pieces in the pile you're working on". Analysts work with "incomplete, ambiguous, often contradictory, fragmented and unreliable information", and they rely heavily on intuition, colleagues and institutional memory. "I don't think AI will replace analysts anytime soon," says Linda Weissgold, the CIA's former deputy director of analysis. Life-or-death decisions sometimes have to be made on the basis of incomplete data, and today's AI models are still too opaque for that.
