Posts

The George Nader Problem: NSA Removes the Child Exploitation Content from Its Servers

When Lebanese-American dual citizen George Nader was stopped at Dulles after arriving on a flight from Dubai on January 17, 2018, he had at least 12 videos on his phone depicting boys as young as two years old being sexually abused, often with the involvement of farm animals. In the days before a Mueller prosecutor obtained the contents of the three phones Nader had with him, Nader sat for at least four interviews with Mueller’s prosecutors and told a story (which may not have been entirely forthright) about how he brokered a meeting in the Seychelles between Russia and Erik Prince a year earlier. Nader exploited Prince’s interest in work with Nader’s own employer — Mohammed bin Zayed — to set up the back channel meeting, and as such was a very effective broker in the service of two foreign countries, one hostile to the US. For that reason, I assume, Nader became a key counterintelligence interest, on top of whatever evidence he provided implicating Trump and his flunkies.

Mueller’s team got the returns on Nader’s phones back on March 16. An FBI Agent in EDVA in turn got a warrant for the child porn. But two days after the agent got the warrant return, Nader skipped town and remained out of the country until days after Mueller shut down his investigation, at which point he returned to the US and was promptly arrested for his abuse of children. Even without the other influence peddling that Nader had done on behalf of the Emirates, he would have remained a key counterintelligence interest for the entire 14 months he remained outside the country. After all, Nader had been making key connections since at least the time he introduced Ahmed Chalabi to Dick Cheney, and probably going back to the Clinton Administration.

So it is quite possible that for the entire period Nader was out of the country, he was surveilled. If that happened, it almost certainly would have happened with the assistance of NSA. As an agent of Dubai, he would be targetable under FISA, but as a US citizen, targeting him under FISA would require an individualized FISA warrant, and the surveillance overseas would take place under 705b.

If the surveillance did happen, Nader’s sexual abuse of boys would have had foreign intelligence value. It would be of interest, for example, to know who knew of his abuse and whether they used it as leverage over Nader. The source of the videos showing the children being exploited would be of interest. So, too, would any arrangements Nader made to procure the actual boys he abused, particularly if that involved high powered people in Middle Eastern countries.

Understanding how George Nader fit in international efforts to intervene in US affairs would involve understanding his sexual abuse of boys.

And that poses a problem for the NSA, because it means that really horrible content — such as Nader’s videos showing young boys being abused with goats for an adult’s sexual pleasure — is among the things the NSA might need to collect and analyze.

I’ve been thinking about George Nader as I’ve been trying to understand one detail of the recent FISA 702 reauthorization. In January 2020, the NSA got permission to — in the name of lawful oversight — scan its holdings for child exploitation, stuff like videos of adults using goats to sexually abuse very young boys.

In a notice filed on January 22, 2020, the government informed the Court that NSA had developed a method, [redacted] of known or suspected child-exploitation material (including child pornography), to identify and remove such material from NSA systems. To test this methodology, NSA ran the [redacted] against a sample of FISA-acquired information in NSA systems. The government concedes that queries conducted for such purposes do not meet generally applicable querying standard; nor do they fall within one of the lawful oversight functions enumerated in the existing NSA querying procedures. Nevertheless, NSD/ODNI opined that “the identification and removal of child exploitation material … from NSA systems that is a lawful oversight function under section IV.C.6,” and that the deviation from the querying procedures was “necessary to perform this lawful oversight function of NSA systems.” Notice of Deviation from Querying Procedures, January 22, 2020, at 3; see Oct. 19, 2020, Memorandum at 10.

NSA anticipates using such queries going forward, likely on a recurring basis, to proactively identify and remove child-exploitation material from its systems. The government submits that doing so is necessary to “prevent [NSA] personnel from unneeded exposure to highly disturbing, illegal material.” October 19, 2020, Memorandum at 10. The Court credits this suggestion and likewise finds that performance of these queries qualifies as a lawful oversight function for NSA systems. But the Court encouraged the government to memorialize this oversight activity in § IV.C.6, among the other enumerated lawful oversight functions that are recognized exceptions to the generally acceptable querying standards.

The government has done so. Section IV.C.6 now includes a new provision for “identify[ing] and remov[ing] child exploitation material, including child pornography, from NSA systems.” NSA Querying Procedures § IV.C.6.f. The Court finds that the addition of this narrow exception has no material impact on the sufficiency of the querying procedures taken as a whole.

At first, I thought they were doing this to protect the children. Indeed, my initial concern was that NSA was using these scans to expand the use of NSA queries for what wound up being law enforcement action, such that they could ask to do similar scans for the seven other crimes they’ve authorized sharing FISA data on (though of the other crimes, only snuff videos would be as easy to automate as child porn, which has a well-developed technology thanks to Facebook and Google). I thought that, once they scanned their holdings, they would alert whatever authority might be able to rescue the children involved that they had been victimized. After all, under all existing minimization procedures, the NSA can share proof of a crime with the FBI or other relevant law enforcement agency. Indeed, in 2017, FISC even authorized NSA and FBI to share such evidence of child exploitation with the National Center for Missing and Exploited Children, so they could attempt to identify the victims, help bring the perpetrators to justice, and track more instances of such abuse.

But that doesn’t appear to be what’s happening.

Indeed, as described, “saving the victims” is not the purpose of these scans. Rather, preventing NSA personnel from having to look at George Nader’s pictures showing goats sexually abusing small boys is the goal. When I asked the government about this, NSA’s Director for Civil Liberties, Privacy and Transparency, Rebecca Richards, distinguished finding child exploitation material in the course of intelligence analysis — in which case it’ll get reported as a crime — from this, which just removes the content.

NSA does not query collected foreign intelligence information to identify individuals who may be in possession of child exploitation material. This particular provision allows NSA to identify and remove known or suspected child-exploitation material (including child pornography) from NSA systems.

The Court agreed that this was appropriate lawful oversight to “prevent [NSA] personnel from unneeded exposure to highly disturbing, illegal material.” The point of the query is not to surface the material for foreign intelligence analysis, the function of the query is to remove the material. If NSA finds such information in the course of its analytic process to identify and report on foreign intelligence, it will review and follow necessary crimes reporting.

The Court credits the suggestion to conduct this activity as part of NSA’s lawful oversight function. [my emphasis]

I asked NSA a bunch of other questions about this, but got no further response.

First, isn’t the NSA required (and permitted, under the minimization procedures) to alert the FBI to all such instances it finds? If so, wouldn’t this be no different from a law enforcement search, since anything found would lead to the FBI learning about it?

Second, as offensive as this stuff is, isn’t it also of value from a foreign intelligence perspective? Ignoring that George Nader is a US person, if a high profile advisor to MbZ was known to exploit boys, wouldn’t that be of interest in explaining his position in MbZ’s court and his preference for living in Dubai instead of VA? Wouldn’t it be of interest in understanding the counterintelligence threat he posed?

If it is of FI interest (I seem to recall a Snowden revelation where similar discoveries were used against an extremist cleric, for example), then how is it recorded to capture the FI use before it is destroyed? And in recording it, aren’t there NSA and/or FBI personnel who would have to look more closely at it? Wouldn’t that increase the amount of child exploitation viewed (presumably with the benefit of finding more predators, even if they are outside US LE reach)?

Finally, can you tell me whether NCMEC is involved in this? Do they receive copies of the material for their databases?

Are you saying that if the NSA finds evidence of child exploitation via these searches, it does not refer the evidence to FBI, even if it implicates victims in the United States?

Another question I have given Richards’ response is, why would NSA personnel be accessing collections that happen to include child exploitation except for analytic purposes?

But maybe that’s the real answer here: NSA employees would access child exploitation 1) for analytical purposes (in which case, per Richards, it would get reported as a crime) or 2) inappropriately, perhaps after learning of its presence via accessing it for analytic purposes (something that is not inconsistent with claims Edward Snowden has made).

After all, there have been two really high profile examples in the last decade of national security personnel accused of critical leaks who have also been accused of possessing child pornography: Donald Sachtleben, who, after he was busted for (amazingly) bringing child porn on his laptop into Quantico, later became the scapegoat for a high profile leak about Yemen; and Joshua Schulte, on whose computer the government claims to have found child porn when it searched the computer for evidence that he stole all of CIA’s hacking tools.

So perhaps the NSA is just removing evidence of child exploitation from its servers — which it spent a lot of resources to collect as foreign intelligence — to avoid tempting NSA employees to access it and further victimize the children?

If that’s correct, then it seems that NSA has taken a totally backwards approach to mitigating this risk.

If you’re going to scan all of NSA’s holdings to ID child exploitation, why not do so on intake, and once found, hash and encrypt it immediately? Some of what analysts would be interested in — tracking the dissemination of known child porn or the trafficking of known victims by transnational organized crime, for example — could be done without ever viewing it, solely off those existing hashes. If there were some other need — such as identifying a previously unidentified victim — then the file in question could be decrypted as it was sent along to FBI. That would have the added benefit of ensuring that if NSA personnel were choosing to expose themselves to George Nader’s videos of young boys being abused with farm animals, the NSA would have a record of who was doing so, so they could be fired.
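A scan-on-intake pipeline of the sort proposed here might look like the following sketch in Python. Everything in it is hypothetical: the `IntakeFilter` class, the known-hash set, and the `seal`/`unseal` helpers (which stand in for real encryption, say AES-GCM under an escrowed key) are illustrative only, not anything NSA actually runs.

```python
import base64
import hashlib
from datetime import datetime, timezone


def sha256_hex(data: bytes) -> str:
    """Cryptographic hash used to match against a known-hash list."""
    return hashlib.sha256(data).hexdigest()


def seal(data: bytes) -> bytes:
    # Stand-in for real encryption under an escrowed key; base64 is used
    # here only to keep the sketch self-contained and reversible.
    return base64.b64encode(data)


def unseal(blob: bytes) -> bytes:
    return base64.b64decode(blob)


class IntakeFilter:
    """Hash incoming items against a known hash list; seal matches on intake."""

    def __init__(self, known_hashes):
        self.known = set(known_hashes)
        self.quarantine = {}   # item_id -> {"hash": ..., "sealed": ...}
        self.audit_log = []    # every unsealing leaves a record

    def ingest(self, item_id: str, data: bytes):
        h = sha256_hex(data)
        if h in self.known:
            # Matched material is sealed immediately; analysts never see it.
            self.quarantine[item_id] = {"hash": h, "sealed": seal(data)}
            return None
        return data

    def unseal_for_referral(self, item_id: str, analyst: str, reason: str) -> bytes:
        # Decryption happens only on referral (e.g., to FBI), and records
        # exactly who accessed the material and why.
        self.audit_log.append((item_id, analyst, reason,
                               datetime.now(timezone.utc)))
        return unseal(self.quarantine[item_id]["sealed"])
```

One caveat: production detection systems typically match on perceptual hashes (PhotoDNA-style), which survive re-encoding, rather than on exact cryptographic hashes; the SHA-256 matching above is only a simplification.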

I get why the NSA doesn’t want to host the world’s biggest collection of child abuse material, particularly given its difficulties in securing its systems. I don’t have any answers as to why they’re using this approach to purge their systems.

NSA Privacy Officer Rebecca Richards Explains What Connection Chaining Is!

Update: I checked with the FBI on whether they were going to do a similar privacy report. After checking around, a spokesperson said, “We are not aware of our folks preparing any such similar public report.”

You’ll recall that for the year and a half that Congress was percolating over USA Freedom Act, I was trying to figure out what “connection chaining” was, but no one knew or would say?

The description of phone dragnet hops as “connections” rather than calls showed up in early versions of the bill and in dragnet orders since 2014. Ultimately, the final bill used language to describe hops that was even less explanatory, as all it requires is a session identifier connection (which could include things like cookies), without any call or text exchanged.

(iii) provide that the Government may require the prompt production of a first set of call detail records using the specific selection term that satisfies the standard required under subsection (b)(2)(C)(ii);

(iv) provide that the Government may require the prompt production of a second set of call detail records using session-identifying information or a telephone calling card number identified by the specific selection term used to produce call detail records under clause (iii);

In documents released yesterday, NSA’s Privacy Officer Rebecca Richards has offered the first explanation of what that chaining process looks like. NSA’s Civil Liberties and Privacy Office released a privacy report and minimization procedures on USAF.

Curiously, the privacy report doesn’t describe two hops of provider data, though the omission matters little: the queries will automatically repeat “periodically” (described as daily in the bill), so the government would obtain a second hop from providers by the second day at the latest. Rather, it describes a first hop as occurring within NSA’s Enterprise Architecture, with the results of that query sent to providers for a second hop.

Collection: The FISC-approved specific selection term, along with any one-hop results generated from metadata NSA already lawfully possesses from previous results returned from the provider(s) and other authorities, will be submitted to the authorized provider(s). The provider(s) will return CDRs that are responsive to the request, meaning the results will consist of CDRs that are within one or two hops of a FISC-approved specific selection term. This step will be repeated periodically for the duration of the order to capture any new, responsive CDRs, but in no case will the procedures generate third or further hops from a FISC-approved specific selection term.

Here’s the key part of the picture included to describe the NSA hop that precedes the provider hop.

[Screenshot from the report: diagram of the NSA internal hop that precedes the provider hop]

The report is laudable for its very existence (I’m pestering FBI to see if we’ll get one from them) and for its willingness to use real NSA terms like “Enterprise Architecture.” It is coy in other ways, such as the full role of the FBI, the type of records queried, and — especially — the type of providers included; for the latter, the report cites page 17 of the House report, which only describes providers in this paragraph, using terms — phone company and telecommunications carrier — that are ambiguous and undefined (though someone like Apple could launch a nice lawsuit on the latter term, especially given that they are refusing to provide a back door in a case in EDNY based on the claim they’re not a carrier).

The government may require the production of up to two ‘‘hops’’—i.e., the call detail records associated with the initial seed telephone number and call detail records (CDRs) associated with the CDRs identified in an initial ‘‘hop.’’ Subparagraph (F)(iii) provides that the government can obtain the first set of CDRs using the specific selection term approved by the FISC. In addition, the government can use the FISC-approved specific selection term to identify CDRs from metadata it already lawfully possesses. Together, the CDRs produced by the phone companies and those identified independently by the government constitute the first ‘‘hop.’’ Under subparagraph (F)(iv), the government can then present session identifying information or calling card numbers (which are components of a CDR, as defined in section 107) identified in the first ‘‘hop’’ CDRs to phone companies to serve as the basis for companies to return the second ‘‘hop’’ of CDRs. As with the first ‘‘hop,’’ a second ‘‘hop’’ cannot be based on, nor return, cell site or GPS location information. It also does not include an individual listed in a telephone contact list, or on a personal device that uses the same wireless router as the seed, or that has similar calling patterns as the seed. Nor does it exist merely because a personal device has been in the proximity of another personal device. These types of information are not maintained by telecommunications carriers in the normal course of business and, regardless, are prohibited under the definition of ‘‘call detail records.’’ [my emphasis]

That said, we know the term provider must be understood fairly broadly given the expanded number of providers who will be included in this program.

What this means, in effect, is that NSA and FBI (the latter does the actual application) will get a specific identifier — which could be a phone number, a SIM card number, a handset identifier, or a credit card [correction: this should be “calling card”], among other things — approved at the FISC, then go back to at least NSA’s data (and quite possibly FBI’s), and find all the contacts with something deemed to “be” that identifier that would be meaningful for a “phone company” to query their own records with, up to and including a cookie (which is, by definition, a session identifier).

Even in the report’s description of this process, there’s some slippage in the NSA query step: from the initial RAS-approved phone number, (202) 555-1234, to an unspecified NSA-identified number in the (202) area code making an additional call.

To illustrate the process, assume an NSA intelligence analyst identifies or learns that phone number (202) 555-1234 is being used by a suspected international terrorist. This is the “specific selection term” or “selector” that will be submitted to the FISC (or the Attorney General in an emergency) for approval using the RAS standard. Also assume that, through NSA’s examination of metadata produced by the provider(s) or in NSA’s possession as a result of the Agency’s otherwise lawfully permitted signals intelligence activities (e.g., activities conducted pursuant to Section 1.7(c)(1) of Executive Order 12333, as amended), NSA determines that the suspected terrorist has used a 202 area code phone number to call (301) 555-4321. The phone number with the 301 area code is a “first-hop” result. In turn, assume that further analysis or production from the provider(s) reveals (301) 555-4321 was used to call (410) 555-5678. The number with the 410 area code is a “second-hop” result.

And in this part of the report, the provider query will return any session identifier that includes the selection terms (though elsewhere the report implies only contacts will be returned).

Once the one-hop results are retrieved from the NSA’s internal holdings, the list of FISC-approved specific selection terms, along with NSA’s internal one-hop results, are submitted to the provider(s). The provider(s) respond to the request based on the data within their holdings with CDRs that contain FISC-approved specific selection terms or the one-hop selection term. One-hop returns from providers are placed in NSA’s holdings and become part of subsequent query requests, which are executed on a periodic basis.
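Read together, the process the report describes (an internal NSA one-hop query, then a provider query seeded with the FISC-approved selector plus the NSA-derived one-hop selectors) reduces to something like this sketch. The modeling of CDRs as simple caller/callee pairs and the function names are my own simplification, not the actual implementation.

```python
def nsa_internal_one_hop(seed, nsa_metadata):
    """Contacts of the seed found in NSA's own holdings (the internal first hop)."""
    hop = set()
    for a, b in nsa_metadata:
        if a == seed:
            hop.add(b)
        elif b == seed:
            hop.add(a)
    return hop


def provider_query(selectors, provider_cdrs):
    """Provider returns CDRs that contain any submitted selector."""
    return [cdr for cdr in provider_cdrs
            if cdr[0] in selectors or cdr[1] in selectors]


def periodic_request(seed, nsa_metadata, provider_cdrs):
    # The seed plus NSA-derived one-hop selectors are submitted together,
    # so the provider's response consists of CDRs within two hops of the seed.
    submitted = {seed} | nsa_internal_one_hop(seed, nsa_metadata)
    return provider_query(submitted, provider_cdrs)


# The report's own example: the RAS-approved seed (202) 555-1234, an NSA-known
# contact at (301) 555-4321, and a provider-held second hop to (410) 555-5678.
nsa_holdings = [("202-555-1234", "301-555-4321")]
provider_holdings = [("301-555-4321", "410-555-5678"),
                     ("555-000-0000", "555-111-1111")]
results = periodic_request("202-555-1234", nsa_holdings, provider_holdings)
```

Note what this sketch omits: the report says provider returns are folded back into NSA’s holdings for subsequent periodic queries, while also promising no third hop is ever generated. Modeling that feedback loop honestly would require tracking each selector’s hop distance from the seed.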

Described in this way, the query process sounds a lot more like what the version of the bill I dubbed USA Freedumber authorized than what the language of USA F-ReDux authorized: two steps of provider queries based off the connected selectors identified at NSA.

(iii) provide that the Government  may require the prompt production of call  detail records—

(I) using the specific selection term that satisfies the standard required under subsection (b)(2)(C)(ii)  as the basis for production; and

(II) using call detail records with a direct connection to such specific selection term as the basis for production of a second set of call detail records;

Given the breathtaking variety of selector types the NSA uses, this could represent a great deal of queries on the provider side, many tracking user activity rather than user communications. And, at least given how the privacy report describes the transparency reporting, neither those interim NSA selectors nor cookies showing user activity but not communication of information would get counted in transparency reports.

The number of targets under each order: Defined as the person using the selector. For example, if a target has a set of four selectors that have been approved, NSA will count one target, not four. Alternatively, if two targets are using one selector that has been approved, NSA will count two targets.

The number of unique identifiers used to communicate information collected pursuant to an order: Defined as each unique record sent back from the provider(s).
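The counting rules quoted above can be made concrete with a small sketch; the helper functions below are hypothetical illustrations of the definitions, not NSA’s actual accounting code.

```python
def count_targets(approved_selectors, users_of):
    """Count distinct persons using the approved selectors.

    Per the report's definition: four approved selectors used by one person
    count as one target; one approved selector used by two persons counts
    as two targets.
    """
    persons = set()
    for selector in approved_selectors:
        persons |= users_of.get(selector, set())
    return len(persons)


def count_unique_identifiers(provider_records):
    """Each unique record sent back from the provider(s) counts once."""
    return len(set(provider_records))
```

Note how the definitions interact with the query architecture: since only provider returns count as “unique identifiers,” the interim selectors NSA derives internally before querying providers would never show up in these numbers.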

This approach seems to solve a problem the NSA appears to have been having since 2009: how to query entirely domestic records with identifiers that have been algorithmically determined to be used by the same person. Here, the NSA will be able to match connected selectors to an approved one, and then send all of them to providers to obtain entirely domestic records.

But if I’m right in my reading of this, it leaves one hole in the privacy analysis of this report.

Richards measures USAF, as she has other programs, against the Fair Information Practice Principles, which include a measure of Data Quality and Integrity. But the report’s analysis of that principle for this program completely ignores how central NSA’s own data is in the process.

Each CDR is a business record generated by a provider for the provider’s own business use. NSA plays no role in ensuring that the provider-generated CDRs accurately reflect the calling events that occurred over the provider’s infrastructure, but the provider(s) have their own policies, practices, and incentives for ensuring the accuracy of their records. NSA’s requirements for ensuring accurate, relevant, timely, and complete CDRs begin when NSA submits query requests to the provider(s), and the provider(s), in response, produce CDRs to the Agency.

At least given the description laid out throughout this report, that’s entirely wrong! NSA is centrally involved in getting from the initial selector to the selectors submitted to the providers for query. So if the NSA’s analysis, which as described may include algorithmic matching of records, is inaccurate (say, by matching burner phones inaccurately), then the provider query will return the phone and other records of completely unassociated individuals. I can’t see any way that the NSA’s own query can be exempted from accuracy review here, but it has been.

I absolutely assume NSA is confident in its analysis, but to just dismiss it as uninvolved when it precedes the provider query ignores the implementation architecture laid out in this report.

In any case, I’m grateful we’ve got this report (I may have more to say on the minimization procedures, but they, like the report, are far clearer than the ones included in the old dragnet and for Section 702, perhaps because of the involvement of a Privacy Officer). I’m still thinking through the privacy implications of this. But really, this querying process should have been revealed from the start.

NSA’s Privacy Officer Exempts Majority of NSA Spying from Her Report on EO 12333 Collection

NSA’s Director of Civil Liberties and Privacy, Rebecca Richards, has another report out, this time on “Civil Liberties and Privacy Protections” provided in the Agency’s EO 12333 programs. As with her previous report on Section 702, this one is almost useless from a reporting standpoint.

The reason why it is so useless is worth noting, however.

Richards describes the scope of her report this way:

This report examines (1) NSA’s Management Activities that are generally applied throughout the Agency and (2) Mission Safeguards within the SIGINT mission when specifically conducting targeted3 SIGINT activities under E.O. 12333.

3 In the context of this paper, the phrase “targeted SIGINT activities” does not include “bulk” collection as defined in Presidential Policy Directive (PPD)-28. Footnote 5 states, in part, “References to signals intelligence collected in ‘bulk’ mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).”

Richards neglects to mention the most important details from PPD-28 on bulk collection: when collection in “bulk” is permitted.

Locating new or emerging threats and other vital national security information is difficult, as such information is often hidden within the large and complex system of modern global communications. The United States must consequently collect signals intelligence in bulk5 in certain circumstances in order to identify these threats. Routine communications and communications of national security interest increasingly transit the same networks, however, and the collection of signals intelligence in bulk may consequently result in the collection of information about persons whose activities are not of foreign intelligence or counterintelligence value. The United States will therefore impose new limits on its use of signals intelligence collected in bulk. These limits are intended to protect the privacy and civil liberties of all persons, whatever their nationality and regardless of where they might reside.

In particular, when the United States collects nonpublicly available signals intelligence in bulk, it shall use that data only for the purposes of detecting and countering: (1) espionage and other threats and activities directed by foreign powers or their intelligence services against the United States and its interests; (2) threats to the United States and its interests from terrorism; (3) threats to the United States and its interests from the development, possession, proliferation, or use of weapons of mass destruction; (4) cybersecurity threats; (5) threats to U.S. or allied Armed Forces or other U.S. or allied personnel; and (6) transnational criminal threats, including illicit finance and sanctions evasion related to the other purposes named in this section. In no event may signals intelligence collected in bulk be used for the purpose of suppressing or burdening criticism or dissent; disadvantaging persons based on their ethnicity, race, gender, sexual orientation, or religion; affording a competitive advantage to U.S. companies and U.S. business sectors commercially; or achieving any purpose other than those identified in this section.

5 The limitations contained in this section do not apply to signals intelligence data that is temporarily acquired to facilitate targeted collection. References to signals intelligence collected in “bulk” mean the authorized collection of large quantities of signals intelligence data which, due to technical or operational considerations, is acquired without the use of discriminants (e.g., specific identifiers, selection terms, etc.).

The NSA collects in “bulk” (that is, “everything”), temporarily, to facilitate targeted collection. This refers to the 3-5 day retention of all content and 30 day retention of all metadata from some switches so XKeyscore can sort through it to figure out what to keep.

And the NSA also collects in “bulk” (that is, “everything”) to hunt for the following kinds of targets:

  • Spies
  • Terrorists
  • Weapons proliferators
  • Hackers and other cybersecurity threats
  • Threats to armed forces
  • Transnational criminals (which includes drug cartels as well as other organized crime)

Of course, when NSA collects in “bulk” (that is, “everything”) to hunt these targets, it also collects on completely innocent people because, well, it has collected everything.

So at the start of a 17-page report on how many “civil liberties and privacy protections” the NSA uses with its EO 12333 collection, NSA’s Privacy Officer starts by saying what she’s about to report doesn’t apply to NSA’s temporary collection of everything to sort through it, nor does it apply to its more permanent collection of everything to hunt for spies, terrorists, weapons proliferators, hackers, and drug bosses.

That is, the “civil liberties and privacy protections” Richards describes don’t apply to the great majority of what NSA does. And these “civil liberties and privacy protections” don’t apply until after NSA has collected everything and decided, over the course of 5 days, whether it wants to keep it, and, in some places, has kept everything to be able to hunt a range of targets.

This actually shows up in Richards’ report, subtly, at times, as when she emphasizes that her entire “ACQUIRE” explanation focuses on “targeted SIGINT collection.” What that means, of course, is that the process in which collection takes place only after an NSA analyst has targeted it doesn’t happen in the majority of cases.

Once you collect and sort through everything, does it really make sense to claim you’re providing civil liberties and privacy protections?

NSA’s New “Privacy Officer” Releases Her First Propaganda

Over at Lawfare, Ken Anderson released the public comment on Section 702 that the NSA Civil Liberties and Privacy Office has submitted to the Privacy and Civil Liberties Oversight Board. Anderson notes that the comment doesn’t appear to be online yet, and the name of the Civil Liberties and Privacy Officer, Rebecca Richards, doesn’t appear on what Anderson posted (though that may be Lawfare’s doing).

The statement, generally, makes me sad. The comment repeatedly backs away from including known, even unclassified, details about Section 702, and as such it reads less like an independent privacy assessment from the woman at NSA mandated with overseeing the program than like a highly scripted press release.

I will probably do a piece on some potential holes this statement may indicate in NSA’s oversight (though it is written in such hopeless bureaucratese, we can’t be sure). But for the moment, I wanted to point to what, in my opinion, is the most glaring example of how scripted this is.

The statement describes back door searches this way:

Since October 2011 and consistent with other agencies’ Section 702 minimization procedures, NSA’s Section 702 minimization procedures have permitted NSA personnel to use U.S. person identifiers to query Section 702 collection when such a query is reasonably likely to return foreign intelligence information. NSA distinguishes between queries of communications content and communications metadata. NSA analysts must provide justification and receive additional approval before a content query using a U.S. person identifier can occur. To date, NSA analysts have queried Section 702 content with U.S. person identifiers less frequently than Section 702 metadata. For example, NSA may seek to query a U.S. person identifier when there is an imminent threat to life, such as a hostage situation. NSA is required to maintain records of U.S. person queries and the records are available for review by both OOJ [sic] and ODNI as part of the external oversight process for this authority. Additionally, NSA’s procedures prohibit NSA from querying Upstream data with U.S. person identifiers.

The only new piece of information provided here is that the NSA conducts more back door searches on 702 metadata than on 702 content.

But then the statement immediately provides the most defensible example of back door searches — searching for a US person’s identifier in content when they’ve been kidnapped, a scenario that derives from a pre-PAA problem with NSA’s kludged FISC-approved program. Notably, this scenario is almost certainly not a metadata search! This is also the same scenario used by Dianne Feinstein’s aides in November to obscure the true extent of the searches, suggesting it is a propaganda line NSA has developed to spin back door searches.

What I find so frustrating about this statement is how it compares with statements others have already made … to PCLOB.

In November, for example, after ODNI General Counsel Robert Litt admitted that the Intelligence Community treats back door searches of 702 data (and probably, EO 12333 data) like they do all “legally collected” data, NSA General Counsel Raj De admitted that NSA doesn’t even require Reasonable Articulable Suspicion to do searches on US person data, because doing so would involve adopting a higher standard for back door searches than for other data.

Raj De: Our minimization procedures, including how we handle data, whether that’s collection, analysis, dissemination, querying are all approved by the Foreign Intelligence Surveillance Court. There are protections on the dissemination of information, whether as a result of a query or analysis. So in other words, U.S. person information can only be disseminated if it’s either necessary to understand the foreign intelligence value of the information, evidence of a crime and so forth. So I think those are the types of protections that are in place with this lawfully collected data.

[Center for Democracy and Technology VP James] DEMPSEY: But am I right, there’s no, on the query itself, other than it be for a foreign intelligence purpose, is there any other limitation? We don’t even have a RAS for that data.

MR. DE: There’s certainly no other program for which the RAS standard is applicable. That’s limited to the 215 program, that’s correct. But as to whether there is, and I think this was getting to the probable cause standard, should there be a higher standard for querying lawfully collected data. I think that would be a novel approach in this context, not to suggest reasonable people can’t disagree, discuss that. But I’m not aware of another context in which there is lawfully collected, minimized information in this capacity in which you would need a particular standard.

Then, in March, Litt objected to requiring court review before doing back door searches (and he was asked specifically about back door searches of US person data, though he reportedly tried to back off the application of this to US persons after the hearing) because the volume of back door searches is so high.

[Retired DC Circuit Judge] Patricia Wald: The President required, or, I think he required in his January directive that went to 215 that at least temporarily, the selectors in 215 for questioning the databank of US telephone calls–metadata–had to be approved by the FISA Court. Why wouldn’t a similar requirement for 702 be appropriate in the case where US person indicators are used to search the PRISM database? What big difference do you see there?

Robert Litt: Well, I think from a theoretical perspective it’s the difference between a bulk collection and a targeted collection which is that–

Wald: But I would think that, sorry for interrupting, [cross-chatter]  I would think that message since 702 has actually got the content.

Litt: Well, and the second point that I was going to make is that I think the operational burden in the context of 702 would be far greater than in the context of 215.

Wald: But that would–

Litt: If you recall, the number of actual telephone numbers as to which a RAS–reasonable articulable suspicion determination was made under Section 215 was very small. The number of times that we query the 702 database for information is considerably larger. I suspect that the Foreign Intelligence Surveillance Court would be extremely unhappy if they were required to approve every such query.

Wald: I suppose the ultimate question for us is whether or not the inconvenience to the agencies or even the unhappiness of the FISA Court would be the ultimate criteria.

Litt: Well I think it’s more than a question of convenience, I think it’s also a question of practicability.

Admittedly, Litt’s answer refers to all the back door searches conducted by the Intelligence Community, including both the CIA and FBI (the latter of which other reporters seem to always ignore when discussing back door searches), as well as NSA. So it’s possible this volume of back door searches reflects FBI’s use of the practice, not NSA’s. (Recall that former presiding FISC Judge John Bates admits the Court has no clue how often or in what ways the Executive Branch is doing back door searches on US person data, but that it is likely so common as to make FISC involvement burdensome.)

Still, the combined picture already provided to PCLOB goes well beyond the hostage situation provided by the Privacy Office statement.

Even the President’s comment about back door searches in his January speech appears to go beyond what the NSA statement does (though again, imposing new limits on back door searches for law enforcement purposes probably speaks primarily to FBI’s back door searches, less so NSA’s).

 I am asking the Attorney General and DNI to institute reforms that place additional restrictions on government’s ability to retain, search, and use in criminal cases, communications between Americans and foreign citizens incidentally collected under Section 702.

We are slowly squeezing out details about the reality of back door searches, so I wasn’t really relying on this statement in any case.

But it’s an issue of credibility. The Privacy Officer, to have a shred of credibility and therefore the PR value that Obama surely hopes it will have, must appear to be speaking from independent review within the scope permitted by classification restraints. That hasn’t happened here, not even close. Instead, Rebecca Richards appears to be speaking under the constraint of censorship far beyond that imposed on other government witnesses on this issue.

That doesn’t bode well for her ability to make much difference at NSA.