A previously confidential directive by Biden administration lawyers lays out how military and spy agencies must handle personal information about Americans when using artificial intelligence, showing how the officials grappled with trade-offs between civil liberties and national security.
The results of that internal debate also underscore the constraints and challenges the government faces in issuing rules that keep pace with rapid advances in technology, particularly in electronic surveillance and related areas of computer-assisted intelligence gathering and analysis.
The administration had to navigate two competing goals, according to Joshua Geltzer, a senior administration official and the top legal adviser to the National Security Council: “harnessing emerging technology to protect Americans, and establishing guardrails for safeguarding Americans’ privacy and other considerations.”
The White House last month held back the four-page, unclassified directive when President Biden signed a major national security memo that pushes military and intelligence agencies to make greater use of A.I. within certain guardrails.
After inquiries from The New York Times, the White House has made the guidance public. A close read and an interview with Mr. Geltzer, who oversaw the deliberations by lawyers from across the executive branch, offer greater clarity on the current rules that national security agencies must follow when experimenting with A.I.
The answers they reached, the document shows, are preliminary. Because the technology is evolving quickly, national security lawyers for Mr. Biden decided the government must revisit the guidance in six months — a task that will now fall to the Trump administration.
The A.I. systems that private sector companies are developing, like OpenAI’s large language model, ChatGPT, apparently far surpass anything the government can do. As a result, the government is more likely to buy access to an A.I. system than to create its own. The guidance says that such a system will count as being “acquired” if it is hosted on a government server or if officials have access to it beyond what anyone could do on the internet.
Training A.I. systems requires feeding them large amounts of data, raising a critical question for intelligence agencies that could influence both Americans’ private interests and the ability of national security agencies to experiment with the technology. When an agency acquires an A.I. system trained by a private sector firm using information about Americans, is that considered “collecting” the data of those Americans?
The answer determines whether, or when, long-established limits on what a national security agency can do with personal data about Americans, developed for surveillance programs, kick in.
Rules governing what an agency’s employees can do with domestic information the agency has collected include when they may retain such data, how they must store it, when they must delete it, under what circumstances analysts may query it, and when and how the agency may disseminate it to other parts of the government.
Many of those limits were developed in the context of older technologies like wiretapping phone calls. The Biden legal team, Mr. Geltzer said, worried that applying those privacy rules at the point when A.I. systems are acquired would severely inhibit agencies’ ability to experiment with the new technology.
As a result, the guidance says that when an intelligence agency acquires an artificial intelligence system that was trained using Americans’ data, that does not generally count as collecting the training data — so those existing privacy-protecting rules, along with a 2021 directive about collecting commercially available databases, are not yet triggered.
Still, the Biden team was not absolute on that question. The guidance leaves open the possibility that acquisition might count as collection if the agency has the ability to access the training data in its original form, “as well as the authorization and intent to do so.”
The use of sensitive information in training an A.I. system — especially when it is capable of spitting that data back out in response to a prompt — has raised novel and contested issues on other fronts. The Times and several other news organizations are suing OpenAI and Microsoft over their use of copyrighted news articles to train chatbots.
The Biden team also addressed what it would mean if an agency uses data about Americans already in its possession to modify or augment an A.I. system. That could mean fine-tuning the system’s training to change how it weighs certain factors, or connecting it to additional data and tools without altering its underlying processes.
In that case, the document says, longstanding attorney general guidelines about spy agencies’ using, querying, retaining and disseminating Americans’ information kick in — as do laws that can further limit what the government may do with domestic information, like the Privacy Act.
The guidance requires intelligence agencies to consult with senior legal and privacy officers before any such action. And it urges particular caution about feeding an A.I. system information gathered under the Foreign Intelligence Surveillance Act: Officials are required to consult the Justice Department and the Office of the Director of National Intelligence first.
In the world of national security surveillance, there are rules limiting when an analyst may query a database of raw intercepts in search of information about Americans. The guidance examined a similar issue: when an intelligence official may prompt an A.I. system by asking it a question about an American.
If, in response to such a prompt, an A.I. system spits out information that an intelligence agency did not already have, the guidance says, that counts as collection if the analyst decides to copy, save or use that new information. In that case, the limits on handling Americans’ personal information kick in.
The guidance also encourages intelligence agencies to consider steps that could make oversight easier, but it does not require such precautions.
For example, it tells agencies to explore possible ways to mark information about Americans collected by an A.I. system and any intelligence reports containing that information. And it asks agencies to “consider what documentation, if any, is appropriate” that would log when analysts have submitted a prompt that was designed to return Americans’ information.
The guidance governing Americans’ personal information joins a separate memo, released in October, that outright bans the use of A.I. in some circumstances, such as by requiring humans to remain in the loop when carrying out a presidential decision to launch or terminate a nuclear strike.
That earlier memo also laid out “high impact” activities that military and intelligence agencies could in theory do with the technology — but only with more intensive safeguards like rigorous risk assessments, testing and human oversight. Those included using A.I. to track people based on biometrics for military or law enforcement action, classifying people as known or suspected terrorists and denying entry to a foreign visa applicant.
“These documents will enable the executive branch to use artificial intelligence more fully and at the same time more responsibly to advance public safety and national security, while also requiring executive branch lawyers to revisit key legal considerations in light of evolving technology and the findings from particular use cases,” Mr. Geltzer said.