As I noted in an update to this post, over the last several months, the Brennan Center has led an effort among privacy organizations to get the Intelligence Community to provide the transparency into its Section 702 surveillance that it dodged under the USA Freedom Act. On October 29, 2015, it sent James Clapper a letter asking for:
On December 23, Privacy Officer Alex Joel responded on behalf of Clapper, largely dodging the requests but offering a meeting at which he could dodge them further. Then yesterday, Brennan replied, calling out some of those dodges and posing new questions in advance of any meeting.
While the reply asks some worthwhile new questions, I wanted to look at some underlying background to the response Joel and ODNI gave.
In response to Brennan’s request for the number of US persons sucked up in 702, Joel points back to the PCLOB 702 report (which was far more cautious than the earlier 215 report) and its report on the status of recommendations from January 2015 and basically says, “we’re still working on that.” Brennan deemed the response non-responsive and noted that the IC is still working on 4 of PCLOB’s 5 recommendations 18 months after PCLOB issued them.
I would add one important caveat to that: PCLOB’s fifth recommendation was that the government provide,
the number of instances in which the NSA disseminates non-public information about U.S. persons, specifically distinguishing disseminations that includes names, titles, or other identifiers potentially associated with individuals.
We’ve just learned — through curiously timed ODNI declassification — that the numbers FBI gives to Congress on 702 dissemination are dodgy, or at least were dodgy in 2012, in part because they had been interpreting what constituted US person information very narrowly. For whatever reason, PCLOB didn’t include FBI in this recommendation, but they should be included, especially given the issues of notice to defendants dealt with below.
More importantly, there’s something to remember, as the IC dawdles in its response to this recommendation. In 2010, John Bates issued a ruling stating that knowingly collecting US person content constituted an illegal wiretap under 50 USC 1809(a). Importantly, he said that if the government didn’t know it was conducting electronic surveillance, that was okay, but it shouldn’t go out of its way to remain ignorant that it was doing so.
When it is not known, and there is no reason to know, that a piece of information was acquired through electronic surveillance that was not authorized by the Court’s prior orders, the information is not subject to the criminal prohibition in Section 1809(a)(2). Of course, government officials may not avoid the strictures of Section 1809(a)(2) by cultivating a state of deliberate ignorance when reasonable inquiry would likely establish that information was indeed obtained through unauthorized electronic surveillance.
The following year, Bates held that when it collected entirely domestic communications via upstream Section 702 collection, that collection was intentional (and therefore electronic surveillance), not incidental, though Clapper’s lawyer Bob Litt likes to obfuscate on this point. The important takeaway, though, is that the IC can illegally collect US person data so long as it avoids getting affirmative knowledge it is doing so, but it can’t be too obvious in its efforts to remain deliberately ignorant.
I’d say 18 months begins to look like willful ignorance.
Brennan asked for solid numbers on back door searches, and Joel pointed to PCLOB’s recommendations that pertain to updated minimization procedures, a totally different topic.
And even there Joel was disingenuous in a way that the Brennan letter did not note.
Joel asserts that “with the recent reauthorization of the 702 Certification … this recommendation 2 [has] been implemented.” The recommendation included both additional clarity in FBI’s minimization procedures as well as further limits on what non-national security crimes FBI can use 702 data for.
Back in February 2015, Bob Litt revealed the latter information, what FBI could use 702 data for:
crimes involving death, kidnapping, substantial bodily harm, conduct that is a specified offense against a minor as defined in a particular statute, incapacitation or destruction of critical infrastructure, cyber security, transnational crimes, or human trafficking.
But after Litt made that disclosure, and either after or during the process of negotiating new 702 certificates, the ODNI released updated minimization procedures. Yet these were the MPs for 2014, not 2015! (See this post for a discussion of new disclosures in those documents.) Joel’s answer makes clear that FBI’s minimization procedures were updated significantly in the 2015 application beyond what they had been in 2014 (because that’s the only way the IC could have failed to fulfill that recommendation last January but have since done so).
In other words, Joel answers Brennan’s question by boasting about fulfilling PCLOB’s recommendations, without actually answering it. But even there, if ODNI had just released the current FBI MPs, rather than year-old ones, part of Brennan’s question would be answered — that is, what the current practice is.
I think the recent disclosures about how narrow FBI’s dissemination reporting was (at least until 2012) provide some additional explanation for why FBI doesn’t count its back door searches. We know:
In other words, there is a great deal of room to launder where data comes from, particularly if it has been used for metadata link analysis as an interim step. To try to count the specifically Section 702 queries, even just of content, though all the more so of metadata, would require revealing these overlaps, which FBI surely doesn’t want to do.
All that’s also background to Brennan’s request for information about notice to defendants. Joel pretty much repeated DOJ’s unhelpful line, though he did direct Brennan to this OLC memo on notice to those who lose clearance. Not only does that memo reserve the right to deem something otherwise subject to FISA’s notice requirements privileged, it also cites a 1978 House report excluding those mentioned in, but not a party to, electronic surveillance from notice.
[A]s explained in a FISA House Report, “[t]he term specifically does not include persons, not parties to a communication, who may be mentioned or talked about by others.”
That, of course, coincides with one of the categories of people that it appears FBI was not counting in FISA dissemination reports until at least 2012 (and, of course, metadata does not count as electronic surveillance).
All of which is to say this appears to hint at the scope of how FBI has collected and identified people using 702 derived data that nevertheless don’t get 702 notice.
None of that excuses ODNI for refusing to respond to these obvious questions. But it does seem to indicate that the heart of FBI’s silence about its own 702 practices has a lot to do with its ability to arbitrage the multiple authorities it uses to spy.
This is going to be a weedy post in which I look at a key detail revealed by 2010 NSA Inspector General reviews of the Section 215 phone dragnet. The document was liberated by Charlie Savage last year.
At issue is the government’s description, in the period after the Snowden leaks, of what kind of searches it did on the Section 215 phone dragnet. The searches the government did on Section 215 dragnet data are critical to understanding a number of things: the reasons the parallel Internet dragnet probably got shut down in 2011, the squeals from people like Marco Rubio about things the government lost in shutting down the dragnet, and the likely scope of collection under USA Freedom Act.
Throughout the discussion of the phone dragnet, the administration claimed it was used for “contact chaining” — that is, exclusively to show who was within 3 (and, starting in 2014, 2) degrees of separation, by phone calls [or texts, see update], from a suspected terrorist associate.
Here’s how the administration’s white paper on the program described it in 2013.
This telephony metadata is important to the Government because, by analyzing it, the Government can determine whether known or suspected terrorist operatives have been in contact with other persons who may be engaged in terrorist activities, including persons and activities within the United States. The program is carefully limited to this purpose: it is not lawful for anyone to query the bulk telephony metadata for any purpose other than counterterrorism, and Court-imposed rules strictly limit all such queries.
Though some claims to Congress and the press were even more definitive that this was just about contact chaining.
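The mechanics the white paper describes reduce to a breadth-first walk of a call graph: start from a seed selector and collect everything within a fixed number of hops. This is a minimal sketch of that idea only; the graph, function names, and toy data are illustrative, not NSA's actual implementation:

```python
from collections import deque

def contact_chain(call_graph, seed, max_hops):
    """Breadth-first walk of a call graph: return every identifier
    within max_hops degrees of separation from the seed selector,
    mapped to its hop count."""
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        current = queue.popleft()
        hops = seen[current]
        if hops == max_hops:
            continue  # don't expand beyond the authorized hop limit
        for contact in call_graph.get(current, ()):
            if contact not in seen:
                seen[contact] = hops + 1
                queue.append(contact)
    return seen

# Toy graph: A called B, B called C, C called D.
calls = {"A": ["B"], "B": ["C"], "C": ["D"]}
# With a 2-hop limit from A, D stays outside the chain.
```

The hop limit is the entire legal constraint in this picture; everything else is an ordinary graph traversal.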
The documents on the 2009 violations released under FOIA made it clear that, historically at least, querying wasn’t limited to contact chaining. Almost every reference in these documents to the scope of the program includes a redaction after “contact chaining” in the description of the allowable queries. Here’s one of many from the government’s first response to Reggie Walton’s questions about the program.
The redaction is probably something like “pattern analysis.”
Because the NSA was basically treating all Section 215 data according to the rules governing EO 12333 in 2009 (indeed, at the beginning of this period, analysts couldn’t distinguish the source of the two authorizations), it subjected the data to a number of processes that did not fit under the authorization in the FISC orders — things like counts of all contacts and automatic chaining on identifiers believed to be the same user as one deemed to have met the Reasonable Articulable Suspicion standard. The End to End report finished in summer 2009 described one after another of these processes being shut down (though making it clear it wanted to resume them once it obtained FISC authorization). But even in these discussions, that redaction after “contact chaining” remained.
Despite this persistent redaction, the public claims that this was about contact chaining gave the impression that the pattern analysis not specifically authorized by the dragnet orders had also been shut down.
The IG Reports that Savage liberated give a better sense of precisely what the NSA was doing after it cleared up all its violations in 2009.
The Reports were ordered up by the FISC and covered an entire year of production (there was a counterpart on the Internet dragnet side, which was largely useless since so much of that dragnet got shut down around October 30, 2009 and remained shut down during this review period).
They show several things:
It’s the last item of interest here.
The first thing to understand about the phone dragnet data is it could be queried two places: the analyst front-end (the name of which is always redacted), and a “Transaction Database” that got replaced with something else in 2011. (336)
Basically, when the NSA did intake on data received from the telecoms, it would create a table of each and every record (which is I guess where the “transaction” name came from), while also making sure the telecoms didn’t send illegal data like credit card information.
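The intake step described above (build a row per transaction while screening out data the telecoms should not have sent, like credit card numbers) can be sketched roughly as follows. The field names and the card-number pattern are my own illustrative guesses, not the actual schema:

```python
import re

# Hypothetical set of fields the orders permitted -- not the real schema.
ALLOWED_FIELDS = {"calling_number", "called_number", "date", "duration"}
# Crude credit-card pattern: 13 to 16 digits, optionally separated.
CC_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def intake(raw_records):
    """Build the 'transaction table': one row per record received from
    the provider, dropping any field that isn't in the permitted schema
    or that contains a card-like value."""
    table = []
    for rec in raw_records:
        row = {k: v for k, v in rec.items()
               if k in ALLOWED_FIELDS and not CC_PATTERN.search(str(v))}
        table.append(row)
    return table
```

The point of the sketch is only that the transaction table is a raw, record-per-row copy of production, with compliance filtering applied at the door.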
Doing queries in the Transaction Database bypassed search restrictions. The March 2010 audit discovered a tech had done a query in the Transaction Database using a selector whose RAS approval (meaning NSA had determined there was reasonable articulable suspicion that the selector had some tie to designated terrorist groups and/or Iran) had expired. The response to that violation, which NSA didn’t agree was a violation, was to move that tech function into a different department at NSA, away from the analyst function. That would do nothing to limit such restriction-free queries, but it would put a wall between analysts and techs, making it harder for analysts to ask techs to perform queries the analysts themselves could not run.
Because the direct queries done for data integrity purposes were not subject to auditing under the phone dragnet orders, the monthly reports distinguished between those and analyst queries, the latter of which were audited to be sure they were RAS approved. But as the April 2010 report and subsequent audits showed, analysts also would do an “ident lookup.” (83)
The report provided this classified/Five Eyes description of “ident lookups.”
The Emphatic Access Restriction was a tool implemented in 2009 to ensure that analysts only did queries on RAS-approved selectors. What this detail reveals is that, rather than consulting a running list somewhere to see whether a selector was RAS approved, analysts would instead try to query, and if the query failed, that’s how they would learn the selector was not RAS approved.
We can’t be sure, but that suggests RAS approval went beyond simple one-to-one matching of identifiers. It’s possible an ident lookup needed to query the database to see if the data showed a given selector (say, a SIM card) matched another selector (say, a phone number) which had been RAS approved. It might go even further, given that NSA had automatically done searches on “correlated” numbers (that is, on a second phone number deemed to belong to the same person as the approved primary number that had been RAS approved). At least, that’s something NSA had done until 2009 and said it wanted to resume.
In other words, the fact that an ident lookup queried the data, and not just a list of approved selectors, suggests it did more than cross-check the RAS approval list: at some level it must have tested the multiple selectors associated with one user to see whether the underlying selectors were, by dint of the user himself being approved, themselves approved.
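One way to read the ident-lookup behavior just described: the query gate checks not only the selector itself but any identifier correlated to the same user, and the analyst learns the approval status from whether the query succeeds or fails. This is a speculative sketch; every name in it is hypothetical:

```python
class NotRASApprovedError(Exception):
    """Raised when a query is attempted on a non-approved selector."""

def query(selector, ras_approved, correlations, database):
    """Gate a chain query the way the Emphatic Access Restriction is
    described: allow it if the selector, or any identifier correlated
    to the same user, carries RAS approval; otherwise fail loudly."""
    user_selectors = correlations.get(selector, {selector})
    if not user_selectors & ras_approved:
        raise NotRASApprovedError(selector)
    return database.get(selector, [])

def ident_lookup(selector, ras_approved, correlations, database):
    """An analyst's 'ident lookup': attempt the query and infer the
    approval status from whether it succeeds."""
    try:
        query(selector, ras_approved, correlations, database)
        return True
    except NotRASApprovedError:
        return False
```

On this reading, a SIM card would pass the gate even though only the correlated phone number was RAS-approved, which is why the check has to touch the data rather than a flat approval list.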
Ident lookups appear fairly often in these IG reports. Less frequent is a kind of query that is described, but entirely redacted, in the September 2010 report. (166)
The footnote description of that query is classified Top Secret NOFORN and entirely redacted.
I have no idea what that query would be, but it’s clear it is done on the analyst facing interface, and only on RAS approved selectors.
The timing of this third query is interesting. Such queries appear in the September and October 2010 audits. That was a period when, in the wake of the July 2010 John Bates approval to resume the Internet dragnet, they were aligning the two programs again (or perhaps even more closely than they had been in 2009). It also appears after a new selector tracking tool got introduced in June 2010. That said, I’m unaware of anything in the phone dragnet orders that would have expanded the kinds of queries permitted on the phone dragnet data.
We know they had used the phone dragnet until 2009 to track burner phones (that is, matching calling patterns of selectors unknown to have a connection to determine which was a user’s new phone). We know that in November 2012, FISC approved an automated query process, though NSA never managed to implement it technically before Obama decided to shut down the dragnet. We also know that in 2014 they started admitting they were also doing “connection” chaining (which may be burner phone matching or may be matching of selectors). All are changes that might relate to more extensive non-chain querying.
We also don’t know whether this kind of query persisted from 2010 until last year, when the dragnet got shut down. I think it possible that the reasons they shut down the Internet dragnet in 2011 may have implicated the phone dragnet.
The point, though, is that at least by September 2010, NSA was doing non-chain queries of the entire dragnet dataset that it considered to be approved under the phone dragnet orders. That suggests NSA was by then using the bulk set as a set (or, more accurately, was doing so again, after the 2009 violations).
Last March, when James Clapper explained the need to retain records for a period of time, he justified it by saying you need the historical data to discern patterns.
Q: And just to be clear, with the private providers maintaining that data, do you feel you’ve lost an important tool?
Clapper: Not necessarily. It will depend though, for one, retention period. I think, given the attitude today of the providers, they will probably do all they can to minimize the retention period. Which of course, from our standpoint, lessens the utility of the data, because you do need some — and we can prove this statistically — you do need some historical data in order to, if you’re gonna discern a pattern. And again, 215 to me, is much like my fire insurance policy. You know, my house has never burned down but every year I buy fire insurance just in case.
This would be consistent with the efforts to use the bulk dataset to find burner identities, at a minimum. It would also be consistent with Marco Rubio et al’s squeals about needing the historical data. And it would be consistent with the invocation of the National Academy of Sciences report on bulk data (though not on the phone dragnet), which NSA’s General Counsel raised in a Lawfare post today.
In other words, contrary to public suggestions, it appears NSA was using the phone dragnet to conduct pattern analysis that required the bulk dataset. That’s not surprising, though it is something the NSA suggested they weren’t doing.
They surely are still doing that on the larger EO 12333 dataset, along with far more complex kinds of analysis. But it seems some, like Rubio, either think we need to return to such bulk pattern analysis, or have used the San Bernardino attack to call for a resumption of more intrusive spying.
Update: One of the other things the IG Reports make clear is that NSA was (unsurprisingly) collecting records of non-simultaneous telephone transactions. That became an issue when, in 2011, NSA started to age off 5-year-old data, because it would have some communication chains that reflected communications more than 5 years old but which were obtained less than 5 years before.
My guess is this reflects texting chains that continued across days or weeks.
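The age-off wrinkle described in the update is easy to state in code: if records expire by the date they were acquired, a surviving chain record can still contain individual communications older than the retention window. A minimal sketch, with a hypothetical record structure:

```python
from datetime import date, timedelta

FIVE_YEARS = timedelta(days=5 * 365)

def age_off(chains, today):
    """Drop whole chain records acquired more than five years ago.
    A surviving record may still contain individual events older
    than five years, which is the wrinkle the IG reports flagged."""
    return [c for c in chains if today - c["acquired"] <= FIVE_YEARS]

def stale_events(chain, today):
    """Events inside a retained chain that are themselves over-age."""
    return [e for e in chain["events"] if today - e > FIVE_YEARS]
```

The mismatch between the two functions is the whole issue: retention was keyed to acquisition date, while the underlying communications carry their own, possibly much older, dates.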
On November 24, Judge Michael Mosman approved the government’s request to hold onto the Section 215 phone dragnet data for technical assurance purposes for three months, as well as to hold the data to comply with a preservation order in EFF’s challenge to the phone dragnet (though as with one earlier order in this series, Thomas Hogan signed the order for Mosman, who lives in Oregon). While the outcome of the decision is not a surprise, the process bears some attention, as it’s the first time a truly neutral amicus has been involved in the FISC process (though corporations, litigants, and civil rights groups have weighed in on various decisions as amici).
As I noted in September when Mosman first appointed Burton, it wasn’t entirely clear what the FISC was asking him to review. In his order, Mosman explains that he “directed him to address whether the government’s above-described requests to retain and use BR metadata after November 28, 2015, are precluded by section 103 of the USA FREEDOM Act or any other provision of that Act.”
Burton took this to be largely a question about minimization procedures.
Instead, the Act provides that the Court shall decide issues concerning the use, retention, dissemination, and eventual destruction of the tangible things collected under the FISA business records statute as part of its oversight of the statutorily mandated minimization procedures.
He then pointed to a number of the FISC’s more assertive oversight moments over the NSA to argue that the FISC has fairly broad authorities to review minimization procedures.
Although the government is required to enumerate minimization procedures addressing the use, retention, dissemination, and (now) ultimate destruction of the metadata in its applications to the Court, the Court’s review of those procedures is not simply ministerial. And, indeed, Judge Walton’s 2009 orders, cited above, addressing deficiencies in the administration of the call detail record program made clear that the FISA Court may impose more robust minimization procedures. See also Kris, Bulk Collection at 15-17 (discussing FISA Court’s imposition of new restrictions to the telephony program). Likewise, the Court may decline to endorse procedures sought by the government. See Opinion at 11-2, In re Application of the FBI for an Order Requiring the Production of Tangible Things, Docket No. BR 14-01 (March 7, 2014) (denying the government’s motion to modify the minimization procedures), amended, Opinion at 5, In re Application of the FBI for an Order Requiring the Production of Tangible Things, Docket No. BR 14-01 (March 12, 2014). Similarly, Judge Bates found substantial deficiencies in the NSA’s minimization procedures in In Re [Redacted], 2011 WL 10945618, at *9 (FISA Ct. Oct. 3, 2011) (Bates J.) (finding NSA minimization procedures insufficient and inconsistent with the Fourth Amendment). As a result, the NSA amended its procedures, including reducing the data retention in issue in that case (under a different FISA statute) from five to two years. See In Re [Redacted], 2011 WL 10947772, at *5 (FISA Ct. Nov. 30, 2011) (Bates J.).
Particularly in the case of the two PRTT orders, the government has actually challenged FISC’s roles in imposing minimization procedures (though admittedly FISC’s role under that authority is less clear cut than under Section 215).
Burton argued that USA Freedom Act (which he abbreviated USFA) made that role even stronger.
But the USFA augmented this minimization review authority even more and dispels any suggestion that the Court may not modify the minimization procedures articulated in the government’s application. The statute’s fortification of Judicial Review provisions makes clear that Congress intended for the FISA Court to oversee these issues in the context of imposing minimization procedures that balance the government’s national security interests with privacy interests, including specifically providing for the prompt destruction of tangible things produced under the business records provisions.10 Significantly, USFA § 104 empowers the Court to assess and supplement the government’s proposed minimization procedures:
Nothing in this subsection shall limit the authority of the court established under section 103(a) to impose additional, particularized minimization procedures with regard to the production, retention, or dissemination of nonpublicly available information concerning unconsenting United States persons, including additional particularized procedures related to the destruction of information within a reasonable time period. USFA § 104(a)(3) (now codified at 50 U.S.C. § 1861(g)(3)) (emphasis supplied).
That provision applies to all information the government obtains under the business records procedure, not just call detail records.11 Moreover, that amendment, set forth in USFA § 104, went into effect immediately, unlike the 180-day transition period for the revisions to the business records sections. See USFA § 109 (amendments made by §§ 101-103 take effect 180 days after enactment).12
As I said, that’s the kind of authority the government has been arguing against for 11 years, most notably in the two big Internet dragnet reauthorizations (admittedly, FISC’s role in minimization procedures there is less clear, but there is similar language about not limiting the authority of the court).
Having laid out the (as he sees it) expansive authority to review minimization procedures, Burton then does something delightful.
He poses a lot of questions that should have been asked 9 years ago.
Because of the significant privacy concerns that motivated Congress to amend the bulk collection provisions of the statute, however, the undersigned respectfully submits that the Court should consider requiring the government to answer more fully fundamental questions regarding:
- The current conditions, location, and security for the data archive.
- The persons and entities to whom the NSA has given access to information provided under this program and whether that shared information will also be destroyed under the NSA destruction plan (and, if not, why not?).
- What oversight is in place to ensure that access to the database is not “analytical” and what the government means by “non-analytical.”
- Why testing of the adequacy of new procedures was not completed by the NSA (and whether it was even initiated) during the 180-day transition period.
- How the government intends to destroy such information after February 29, 2016, (its proposed extinction date for the database) independent of the resolution of any litigation holds.
- Whether the contemplated destruction will include only data that the government has collected or will include all data that it has analyzed in some fashion.
Remember, by the time Burton wrote this, he had read at least the application for the final dragnet order, and the answers to these questions were not clear from that (which is where the government lays out its more detailed minimization procedures). Public releases have made me really concerned about some of them, such as how to protect non-analytical queries from being used for analytical purposes. NSA has had tech people do analytical queries in the past, and it doesn’t audit tech activities. Similarly, when the NSA destroyed the Internet dragnet data in 2011, NSA’s IG wasn’t entirely convinced it all got destroyed, because he couldn’t see the intake side of things. So these are real issues of concern.
Burton also asked questions about the necessity behind keeping data for the EFF challenges rather than just according the plaintiffs standing.
If this Court chooses to follow Judge Walton’s approach and defer to the preservation orders issued by the other courts, the Court nonetheless should address a number of questions before deciding whether to grant the government’s preservation request:
- Why has the government been unable to reach some stipulation with the plaintiffs to preserve only the evidence necessary for plaintiffs to meet their standing burden? Consider whether it is appropriate for the government to retain billions of irrelevant call detail records involving millions of people based on, what undersigned understands from counsel involved in that litigation, the government’s stubborn procedural challenges to standing — a situation that the government has fostered by declining to identify the particular telecommunications provider in question and/or stipulate that the plaintiff is a customer of a relevant provider.
- As Judge Walton identified when he first denied the modification of the minimization procedures to extend the duration of preservation, the continued retention of the data at issue subjects it to risk of misuse and improper dissemination. The government should have to satisfy the Court of the security of this information in plain and meaningful terms.
(Notice how he assumes the plaintiffs might have standing which, especially for First Unitarian Church plaintiff CAIR, they should.)
Finally, perhaps channeling the justified complaint of all the tech people who review these kinds of policy questions, Burton suggested the FISC really ought to be consulting with a tech person.
This case, due to the relatively limited period of time sought by the government to accomplish its stated narrow purpose, likely does not require a difficult assessment of the reasonableness of the government’s technical retention request. To evaluate even such a limited request, however, the Court may wish to consider availing itself of technical expertise from national security experts or computer technology experts. Technical expertise is an amicus category contemplated by Congress in its reform of the FISA statutes. 50 U.S.C. § 1803(i)(2)(B), as amended by USFA Section 401. That section alone suggests congressional expectation of greater judicial oversight of the government’s surveillance program and requests. See USFA § 401; see also Kris, Bulk Collection at 37 (contemplating theoretical procedures for cross-examining NSA engineers as one example of the challenges in implementing a more adversarial system for the FISA Court).
Burton ended his memo reiterating his recommendation that FISC get more information.
In light of the significant privacy interests affected by the creation and retention of the database, the undersigned urges the Court as part of its statutory oversight of the minimization procedures to demand full and meaningful information concerning the condition of the data at issue, the data’s security, and its contemplated destruction as a condition of any retention beyond November 28, 2015.
Predictably, the government balked at Burton’s invitation to use his expansive reading of the authority of the FISC to review minimization procedures to bolster the current ones.
Amicus curiae’s analysis of Section 104 of the USA FREEDOM Act could be interpreted as suggesting an opportunity for the Court to re-examine the minimization procedures applicable for other business records productions in this proceeding. Consistent with the Court’s order appointing amicus curiae, the Government has limited its response to the issue identified in that order.
Frankly, I’m not sure how the government distinguishes between Burton’s proposal to reexamine existing minimization procedures and what is covered by the order in question, because they do respond to a number of the questions he raised in his brief.
For example, they provide these details about where the dragnet lives (which, as it turns out, is at Fort Meade, not the UT data center).
As described in the Application in docket number BR 15-99 and prior docket numbers, NSA stores and processes the bulk call detail records in repositories within secure networks under NSA’s control. Those repositories (servers, networked storage devices, and backup tapes in locked containers) are located in NSA’s secure, access-controlled facilities at Fort George G. Meade, Maryland. As further described in those applications, NSA restricts access to the records to authorized personnel who have received appropriate and adequate training. Electronic access to the call detail records requires a user authentication credential. Physical access to the location where NSA stores and processes the call detail records requires an approval by NSA management and must be conducted in teams of no less than two persons.
Also note that there is currently a requirement that techs access the raw data in two person teams. That is likely a change that post-dates Snowden.
Curiously, the NSA says they can destroy all the phone dragnet data in a month.
NSA anticipates it can complete destruction of the bulk call detail records and related chain summaries within one month of being relieved of its litigation preservation obligations.
They appear to have taken far less time to destroy the Internet dragnet data, further supporting the appearance they did it very hastily to avoid having to report back to John Bates on the status of their dragnet.
Finally, they make clear what had already been clear to me: the existing query results will remain at NSA.
Information obtained or derived from call detail records which has been previously disseminated in accordance with approved minimization procedures will not be recalled or destroyed.2 Also, select query results generated by pre-November 29, 2015, queries of the bulk records that formed the basis of a dissemination in accordance with approved minimization procedures will not be destroyed.
2 This practice does not differ from similar circumstances where, for example Court-authorized electronic surveillance and/or physical search authorities under Title I or III expire. While raw (unminimized) information is handled and destroyed in accordance with applicable minimization procedures, prior authorized disseminations and the material underpinning those disseminations are not recalled or otherwise destroyed.
This means that everyone within two or three degrees of a target that the NSA has found interesting — potentially over the last decade — will remain available and subject to NSA’s analytical toys from here on out.
Let’s hope CAIR gets standing to challenge what has happened to their IDs then.
Which may be why the government gets snippiest in response to Burton’s question about why they’re going to keep billions of phone records rather than just reach some accommodation with EFF.
The suggestions by amicus curiae that this Court address (or perhaps even resolve) significant substantive questions at issue in underlying civil litigation, see Amicus Mem. of Law at 27, are exactly the kinds of inquiries the Court previously recognized were inappropriate for it to resolve. Opinion and Order, docket number BR 14-01 at 5 (“it is appropriate for [the district court for the Northern District of California], rather than the FISC, to determine what BR metadata is relevant to that litigation”). This Court should adopt the same view. In particular, the suggestion that the Government disclose national security information concerning the identity of providers, information subject to a pending state secrets privilege assertion, is inappropriate, and the suggestion by amicus that the government stipulate to Article III standing in those cases is unfounded as a matter of law. Finally, the suggestion that preservation of bulk call detail records can be limited solely to the plaintiffs in multiple pending putative class actions is entirely unworkable. For the reasons more particularly set out above, until the Government is relieved of its preservation obligations, the data is secure.
Which leads me to the detail that makes me suspect there’s a second Burton filing the government hasn’t released (I’ve asked NSD but gotten no answer, and in his opinion Mosman says only “Mr. Burton and the government submitted briefs addressing this question,” leaving open the possibility Burton submitted two): After finding no reason to hold a hearing on the issue of restarting the dragnet during the summer, Mosman did hold a hearing here (though it’s not clear whether Burton attended or not). At the hearing, Mosman ordered the government to try to come up with a way to destroy the dragnets, which it will do by January 8.
During the hearing held on November 20, 2015, the Court directed the government to submit its assessment of whether the cessation of bulk collection on November 28, 2015, will moot the claims of the plaintiffs in the Northern District of California litigation relating to the BR Metadata program and thus provide a basis for moving to lift the preservation orders. The Court further directed the government to address whether, even if the California plaintiffs’ claims are not moot, there might be a basis for seeking to lift the preservation orders with respect to the BR Metadata that is not associated with the plaintiffs. The government intends to make its submission on these issues by January 8, 2016.
And, as Mosman’s opinion makes clear, he ordered them to write up a free-standing copy of the minimization procedures that will govern the dragnet data retained from here on out.
The minimization procedures that the government proposes using after the production ceases on November 28, 2015 are in important respects substantially more restrictive than those currently in effect. The procedures that will apply after November 28, which were initially included as part of the broader set of procedures set forth in the application, were resubmitted by the government in a standalone document on November 24, 2015 (“November 24, 2015 Minimization Procedures”).
They would have submitted them on the day Mosman (via Hogan’s signature) approved the request to keep the data. In other words, Mosman made the government generate a document to make it crystal clear the more restrictive rules apply to the dragnet going forward.
Whether it was Mosman’s intent when he appointed Burton or not (remember, for better and worse, under USAF the amicus has to do what the FISC asks), his appointment served several purposes.
First, it set Mosman up to make it very clear that, in the FISC's view, the minimization procedures required under USAF do give the FISC expanded authority.
The USA FREEDOM Act made several minimization-related changes to Section 1861. For instance, Section 1861 now provides that, before granting a business records application, the Court must expressly find that the minimization procedures put forth by the government “meet the definition of minimization procedures under subsection (g).” See Pub. L. No. 114-23, § 104(a)(1), 129 Stat. at 272. This change is not substantive, however, as such a finding was previously implicit in the broader finding required by Section 1861(c)(1) – i.e., “that the application meets the requirements of subsection (a) and (b).” Among the requirements of subsection (b) was – and still is – the requirement that the application include an enumeration of Attorney General-approved minimization procedures that meet the definition set forth in subsection (g). Another change is the addition of a “rule of construction” confirming the Court’s authority “to impose additional, particularized minimization procedures with regard to the production, retention, or dissemination” of certain information regarding United States persons, including “procedures related to the destruction of information within a reasonable time period.” See id. § 104(a)(2), 129 Stat. at 272. A third new provision that takes effect on November 29, 2015, states that orders compelling the ongoing, targeted production of “call detail records” must direct the government to adopt minimization procedures containing certain requirements relating to the destruction of such records. See id. § 101(b)(3)(F)(vii), 129 Stat. at 270-71.
Remember, it took 7 years — including 4 years of FISC-imposed minimization requirements and reviews — before the government met the requirements of the law as passed in 2006. Significantly, Burton got a classified version of the IG report laying out that delay to read, so he surely knows more about that delay than we do.
In addition, Burton set up the FISC to demand more assurances from the government and — potentially — to push it to come to some more reasonable accommodation with EFF than it otherwise might. Remember, when presiding over the criminal case of Reaz Qadir Khan, Mosman was going to grant CIPA discovery on the surveillance used to catch Khan, some of which almost certainly included one (Stellar Wind) or another (the PRTT Internet dragnet) of the illegal dragnets, which led almost immediately to a plea deal.
I’m, frankly, pleasantly surprised. Whether it was Mosman’s intent or not, even picking someone without an obvious brief for privacy, Burton helped Mosman shore up the authority of the FISC to ride herd over government spying (and given Judge Hogan’s involvement along the way, he presumably did so with the assent of the presiding FISC judge).
In any case, Mosman was happy with how it all worked out, as he included this footnote in his opinion.
The Court wishes to thank Mr. Burton for his work in this matter. His written and oral presentations were extremely informative to the Court’s consideration of the issues addressed herein. The Court is grateful for his willingness to serve in this capacity.
John Bates, speaking inappropriately on behalf of the FISA Court during USAF debates, squealed mightily about the role an amicus had. Admittedly, the current form is closer to what Bates (who I’ve always suspected was speaking on behalf of John Roberts more than the court) wanted than what reformers wanted.
But at least in this instance, the amicus helped the FISC shore up its authority vis-à-vis the government.
I noted the other day that an NSA IG document liberated by Charlie Savage shows the agency had 4 reasons to shut down the domestic Internet (PRTT) dragnet, only one of which is the publicly admitted reason — that NSA could accomplish what it needed to using SPCMA and FAA collection.
I’m fairly sure another of the reasons NSA shut down the dragnet is because of dissemination restrictions that probably got newly reinvigorated in mid-2011.
I laid out a timeline of events leading up to the shutdown of the Internet dragnet here. I’ve added one date: that of the draft training program, several modules of which are dated October 17, 2011, released under FOIA (given other dates in the storyboard, the program had clearly been in development as early as November 2010). How odd is that? The NSA was just finalizing a training program on the Internet (and phone) dragnet as late as 6 weeks before NSA hastily shut it down starting in late November 2011. The training program — which clearly had significant Office of General Counsel involvement — provides a sense of what compliance issues OGC was emphasizing just as NSA decided to shut down the Internet dragnet.
The training program was done in the wake of two things: a series of audits mandated by the FISA Court (see PDF 36) that lasted from May 2010 until early 2011, and the resumption of the PRTT Internet dragnet between July and October 2010.
The series of audits revealed several things. First, as I have long argued was likely, the technical personnel who monitor the data for integrity may also use their access to make inappropriate queries, as happened in an incident during this period (see PDF 95 and following); I plan to return to this issue. In addition, at the beginning of the period — before a new selector tracking tool got introduced in June 2010 — NSA couldn’t track whether some US person selectors had gotten First Amendment review. And, throughout the audit period, the IG simply didn’t review whether less formalized disseminations of dragnet results followed the rules, because it was too hard to audit. The final report summarizing the series of audits from May 2011 (as well as the counterpart one covering the Internet dragnet) identified this as one of the weaknesses of the program, but NSA wanted to manage it by just asking FISC to eliminate the tracking requirements for foreign selectors (see PDF 209).
I found this blasé attitude about dissemination remarkable given that in June 2009, Reggie Walton had gotten furious with NSA for not following dissemination restrictions, after which NSA did it again in September 2009, and didn’t tell Walton about it, which made him furious all over again. Dissemination restrictions were something Walton had made clear he cared about, and NSA IG’s response was simply to say auditing for precisely the kind of thing he was worried about — informal dissemination — was too hard, so they weren’t going to do it, not even for the audits FISC (probably Walton himself) ordered NSA to do to make sure they had cleaned up all the violations discovered in 2009.
Meanwhile, when NSA got John Bates to authorize the resumption of the dragnet (he signed the order in July 2010, but it appears it didn’t resume in earnest until October 2010), they got him to approve the dissemination of PRTT data broadly within NSA. This was a response to a Keith Alexander claim, made the year before, that all product lines within NSA might have a role in protecting against terrorism (see PDF 89).
In other words, even as NSA’s IG was deciding it couldn’t audit for informal dissemination because it was too hard to do (even while acknowledging that was one of the control weaknesses of the program), NSA asked for and got FISC to expand dissemination, at least for the Internet dragnet, to basically everyone. (The two dragnets appear to have been synched again in October 2010, as they had been for much of 2009, and when that happened the NSA asked for all the expansions approved for the Internet dragnet to be applied to the phone dragnet.)
Which brings us to the training program.
There are elements of the training program that reflect the violations of the previous years, from an emphasis on reviewing for access restrictions to a warning that tech personnel should only use their sysadmin access to raw data for technical purposes, and not analytical ones.
But the overwhelming emphasis in the training was on dissemination — which is a big part of the reason the NSA used the program to train analysts to rerun PATRIOT-authorized queries under EO 12333 so as to bypass dissemination restrictions. As noted in the screen capture above, the training program gave a detailed list of the things that amounted to dissemination, including oral confirmation that two identifiers — even by name (which of course confirms that these phone numbers are identifiable to analysts) — were in contact.
In addition, any summary of that information would also be a BR or PR/TT query result. So, if you knew that identifier A belonged to Joe and identifier B belonged to Sam, and the fact of that contact was derived from BR or PR/TT metadata, if you communicate orally or in writing that Joe talked to Sam, even if you don’t include the actual e-mail account or telephone numbers that were used to communicate, this is still a BR or PR/TT query result.
The program reminded that NSA has to report every dissemination, no matter how informal.
This refers to information disseminated in a formal report as well as information disseminated informally such as written or oral collaboration with the FBI. We need to count every instance in which we take a piece of information derived from either of these two authorities and disseminate it outside of NSA.
Normally an NSA product report is the record of a formal dissemination. In the context of the BR and PR/TT Programs, an official RFI response or Analyst Collaboration Record will also be viewed as dissemination. Because this FISC requirement goes beyond the more standard NSA procedures, additional diligence must be given to this requirement. NSA is required to report disseminations formal or informal to the FISC every 30 days.
I’m most interested in two other aspects of the training. First, it notes that not all queries obtained via the dragnet will be terrorism related.
It might seem as though the information would most certainly be counterterrorism-related since, due to the RAS approval process, you wouldn’t have this U.S. person information from a query of BR or PR/TT if it weren’t related to counterterrorism. In the majority of cases, it will be counterterrorism-related; however, the nature of the counterterrorism target is that it often overlaps with several other areas that include counternarcotics, counterintelligence, money laundering, document forging, people and weapons trafficking, and other topics that are not CT-centric. Thus, due to the fact that these authorities provide NSA access to a high volume of U.S. person information for counterterrorism purposes, the Court Order requires an explicit finding that the information is in fact related to counterterrorism prior to dissemination. Therefore, one of the approved decision makers must document the finding using the proper terminology. It must state that the information is related to counterterrorism and that it is necessary to understand the counterterrorism information.
Remember, this training was drafted in the wake of NSA’s insistence that all these functional areas needed to be able to receive Internet dragnet data, which, of course, was just inviting the dissemination of information for reasons other than terrorism, especially given FISC’s permission to use the dragnet to track Iranian “terrorism.” Indeed, I still think it overwhelmingly likely Shantia Hassanshahi got busted on proliferation charges using the phone dragnet (during a period when FISC was again not monitoring NSA very closely). And one of the things NSA felt the need to emphasize a year or so after NSA started being able to share this “counterterrorism” information outside of its counterterrorism unit was that they couldn’t share information about money laundering or drug dealing or … counterproliferation unless there was a counterterrorism aspect to it. Almost as if it had proven to be a problem.
The training program warns that results may not be put into queriable tools that untrained analysts have access to.
Note the absolutely hysterical review comment that said there’s no list of which tools analysts couldn’t use with 215 and PRTT dragnet results. Elsewhere, the training module instructs analysts to ask their manager, which from a process standpoint is a virtual guarantee there will be process violations.
This is interesting for two reasons. First, it suggests NSA was still getting in trouble running tools they hadn’t cleared with FISC (the 215 IG Reports also make it clear they were querying the full database using more than just the contact-chaining they claim to have been limited to). Remember there were things like a correlations tool they had to shut down in 2009.
But it’s also interesting given the approval, a year after this point, of an automatic alert system for use with the phone dragnet (which presumably was meant to replace the illegal alert system identified in 2009).
In 2012, the FISA court approved a new and automated method of performing queries, one that is associated with a new infrastructure implemented by the NSA to process its calling records.68 The essence of this new process is that, instead of waiting for individual analysts to perform manual queries of particular selection terms that have been RAS approved, the NSA’s database periodically performs queries on all RAS-approved seed terms, up to three hops away from the approved seeds. The database places the results of these queries together in a repository called the “corporate store.”
The ultimate result of the automated query process is a repository, the corporate store, containing the records of all telephone calls that are within three “hops” of every currently approved selection term.69 Authorized analysts looking to conduct intelligence analysis may then use the records in the corporate store, instead of searching the full repository of records.70
That is, in 2011, NSA was moving towards such an automated system, which would constitute a kind of dissemination by itself. But it wasn’t there yet for the PATRIOT authorized collection. Presumably it was for EO 12333 collection.
As it happened, NSA never did fulfill whatever requirements FISC imposed for using that automatic system with phone dragnet information, and they gave up trying in February 2014 when Obama decided to outsource the dragnet to the telecoms. But it would seem limits on the permission to use other fancy tools because they would amount to dissemination would likely limit the efficacy of these dragnets.
Clearly, in the weeks before NSA decided to shut down the PRTT dragnet, its lawyers were working hard to keep the agency in compliance with rules on dissemination. Then, they stopped trying and shut it down.
Both the replacement of PRTT with SPCMA and 702, and the replacement of the 215 dragnet with USAF, permit the government to disseminate metadata with far looser restrictions (and almost none, in the case of 702 and USAF metadata). It’s highly likely this was one reason the NSA was willing to shut them down.
This will be the second of three posts on the NSA IG’s failures to correct problems with the Internet (PRTT) dragnet. In the first, I showed how quickly NSA nuked the PRTT (or at least claimed to) after John Bates ruled, a second time, that NSA could not illegally wiretap the content of Americans’ communications. Here, I’ll examine another IG Report, completed earlier in 2011 and also liberated by Charlie Savage, that appears to show the PRTT dragnet was hunky dory just weeks before it became clear again that it was not.
The report (see PDF 4-23) must date to between March 15 and May 25, 2011. It was related to a series of reports on the phone dragnet (these reports appear to have been solicited by or encouraged by Reggie Walton in the wake of the 2009 dragnet problems) that Savage liberated earlier this year. It lists all those reports on pages A-2 to A-3. But it lists the final, summary report in that series, (ST-10-0004L), as a draft, dated March 15, 2011. The copy provided to Savage is the final, dated May 25, 2011 (see PDF 203).
The reason for doing this PRTT report is curious. The report notes “we began this review in [redacted, would be some time in summer 2009] but suspended it when NSA allowed the PR/TT Order to expire.” That is, this was the report that got started, but then halted, when someone discovered that every single record the NSA had collected under the program included categories of information violating the rules set by FISC in 2004.
But then NSA started a review of the phone dragnet covering all the activity in 2010 (reflected in monthly reports in Savage’s earlier release). So the NSA decided to do a review of PRTT at the same time. But remember: the Internet dragnet was shut down until at least July 2010, when John Bates authorized its resumption, and it took some time to turn the dragnet back on. That means NSA conducted a review of a dragnet that was largely on hiatus or just resuming. During the review period, both the phone and Internet dragnet generated few finalized reports. Indeed, it appears likely that there were no phone dragnet disseminations at all in August 2010 (see 155). There are probably two explanations for that. First, after Reggie Walton told NSA it had to start following the rules, the amount of intelligence NSA got from both the phone and Internet dragnets appears to have gone down. Second, we know that in 2011 NSA was training analysts to re-run queries that came up in both FISA and EO 12333 searches using EO 12333, so the results could be disseminated more broadly. So it’s likely that a lot of what had been reports based on FISA-authorized data before 2009 (which didn’t always follow FISC’s rules) started getting disseminated as EO 12333-authorized reports afterward. Still, in the case of the Internet dragnet reviewed for this report, “the dissemination did not contain PR/TT-derived USP information” so they “did not formally test dissemination objectives” (see footnote 1). None of the reports on the US Internet dragnet reviewed in some period in 2010 included US person data.
So much for collecting all of Americans’ email records to catch Americans, I guess.
All that said, both the Internet and phone dragnet reviews found that the dragnets had adequate controls to fulfill the requirements of the FISC orders, but did say (this is laid out in unredacted form more explicitly in the phone dragnet report) that the manual monitoring of dissemination would become unworkable if analysts started using the dragnet more. The phone dragnet reports also suggest they weren’t good at monitoring less formal disseminations (via email or conversation), and by the time of these summary reports, NSA was preparing to ask FISC to change the rules on reporting of non-US person dissemination. Overall in spring 2011, NSA’s IG found, the process worked according to the rules, but in part only because it was so little used.
That’s the assessment of the PRTT dragnet as of sometime between March and May 2011, less than 9 months before they’d nuke the dragnet really quickly, based mostly off a review of what NSA was doing during a period when the dragnet was largely inactive.
Which is all very interesting, because sometime before June 30, 2011 there was a PRTT violation that got reported — in a far more extensive description than the actual shut down of the dragnet in 2009 — to Intelligence Oversight Board. (see PDF 10)
There’s no mention of reporting to Congress on this, which is interesting because the PATRIOT Act was being reauthorized again during precisely this period, based off a notice, dated February 2, 2011, that the compliance problems were largely solved.
So here’s what happened: After having had its IG investigation shut down in fall 2009 because NSA had never been in compliance with limits on the PRTT dragnet, NSA’s IG tried again during a period when the NSA wasn’t using it all that much. It gave NSA a clean bill of health no earlier than March 15, 2011. But by June 30, 2011, something significant enough to get reported in two full paragraphs to IOB happened.
It turns out things weren’t quite so hunky dory after all.
The question of whether NSA can keep its Section 215 dragnet data past November 28 has been fully briefed for at least 10 days, but Judge Michael Mosman has not yet decided whether the NSA can keep it — at least not publicly. But given what the NSA IG Report on NSA’s destruction of the Internet dragnet says (liberated by Charlie Savage and available starting on PDF 60), we should assume the NSA may be hanging onto that data anyway.
This IG Report documents NSA’s very hasty decision to shut down the Internet dragnet and destroy all the data associated with it at the end of 2011, in the wake of John Bates’ October 3, 2011 opinion finding, for the second time, that if NSA knew it had collected US person content, it would be guilty of illegal wiretapping. And even with the redactions, it’s clear the IG isn’t entirely certain NSA really destroyed all those records.
The report adds yet more evidence to support the theory that the NSA shut down the PRTT program because it recognized it amounted to illegal wiretapping. The evidence to support that claim is laid out in the timeline and working notes below.
The report tells how, in early 2011, NSA started assessing whether the Internet dragnet was worth keeping under the form John Bates had approved in July 2010, which was more comprehensive and permissive than what got shut down around October 30, 2009. NSA would have had SPCMA running in big analytical departments by then, plus FAA, so they would have been obtaining these benefits over the PRTT dragnet already. Then, on a date that remains redacted, the Signals Intelligence Division asked to end the dragnet and destroy all the data. That date has to post-date September 10, 2011 (that’s roughly when the last dragnet order was approved), because SID was advising to not renew the order, meaning it happened entirely during the last authorization period. Given the redaction length it’s likely to be October (it appears too short to be September), but could be anytime before November 10. [Update: As late as October 17, SID was still working on a training program that covered PRTT, in addition to BRFISA, so it presumably post-dates that date.] That means that decision happened at virtually the same time or after, but not long after, John Bates raised the problem of wiretapping violations under FISA Section 1809(a)(2) again on October 3, 2011, just 15 months after having warned NSA about Section 1809(a)(2) violations with the PRTT dragnet.
The report explains why SID wanted to end the dragnet, though three of four explanations are redacted. If we assume bullets would be prioritized, the reason we’ve been given — that NSA could do what it needed to do with SPCMA and FAA — is only the third most important reason. The IG puts what seems like a non sequitur in the middle of that paragraph. “In addition, notwithstanding restrictions stemming from the FISC’s recent concerns regarding upstream collection, FAA §702 has emerged as another critical source for collection of Internet communications of foreign terrorists” (which seems to further support that the decision post-dated that ruling). Indeed, this is not only a non sequitur, it’s crazy. Everyone already knew FAA was useful. Which suggests it may not be a non sequitur at all, but instead something that follows off of the redacted discussions.
Given the length of the redacted date (it is one character longer than “9 December 2011”), we can say with some confidence that Keith Alexander approved the end and destruction of the dragnet between November 10 and 30 — during the same period the government was considering appealing Bates’ ruling, close to the day — November 22 — NSA submitted a motion arguing that Section 1809(a)(2)’s wiretapping rules don’t apply to it, and the day, a week later, it told John Bates it could not segregate the pre-October 31 dragnet data from post October 31 dragnet data.
Think how busy a time this already was for the legal and tech people, given the scramble to keep upstream 702 approved! And yet, at precisely the same time, they decided they should nuke the dragnet, and nuke it immediately, before the existing dragnet order expired, creating another headache for the legal and tech people. My apologies to the people who missed Thanksgiving dinner in 2011 dealing with both these headaches at once.
Not only did NSA nuke the dragnet, but they did it quickly. As I said, it appears Alexander approved nuking it November 10 or later. By December 9, it was gone.
At least, it was gone as far as the IG can tell. As far as the 5 parts of the dragnet (which appear to be the analyst facing side) that the technical repository people handled, that process started on December 2, with the IG reviewing the “before” state, and ended mostly on December 7, with final confirmation happening on December 9, the day NSA would otherwise have had to have new approval of the dragnet. As to the intake side, those folks started destroying the dragnet before the IG could come by and check their before status:
However, S3 had completed its purge before we had the opportunity to observe. As a result we were able to review the [data acquisition database] purge procedures only for reasonableness; we were not able to do the before and after comparisons that we did for the TD systems and databases disclosed to us.
Poof! All gone, before the IG can even come over and take a look at what they actually had.
Importantly, the IG stresses that his team doesn’t have a way of proving the dragnet isn’t hidden somewhere in NSA’s servers.
It is important to note that we lack the necessary system accesses and technical resources to search NSA’s networks to independently verify that only the disclosed repositories stored PR/TT metadata.
That’s probably why the IG repeatedly says he is confirming purging of the data from all the “disclosed” databases (@nailbomb3 observed this point last night). Perhaps he’s just being lawyerly by including that caveat. Perhaps he remembers how he discovered in 2009 that every single record the NSA had received over the five year life of the dragnet had violated Colleen Kollar-Kotelly’s orders, even in spite of 25 spot checks. Perhaps the redacted explanations for eliminating the dragnet explain the urgency, and therefore raise some concerns. Perhaps he just rightly believes that when people don’t let you check their work — as NSA did not by refusing him access to NSA’s systems generally — there’s more likelihood of hanky panky.
But when NSA tells — say — the EFF, which was already several years into a lawsuit against the NSA for illegal collection of US person content from telecom switches, and which already had a 4-year-old protection order covering the data relevant to that suit, that this data got purged in 2011?
Even NSA’s IG says he thinks it did but he can’t be sure.
But what we can be sure of is, after John Bates gave NSA a second warning that he would hold them responsible for wiretapping if they kept illegally collecting US person content, the entire Internet dragnet got nuked within 70 days — gone!!! — all before anyone would have to check in with John Bates again in connection with the December 9 reauthorization and tell him what was going on with the Internet dragnet.
Update: Added clarification language.
Update: The Q2 2011 IOB report (reporting on the period through June 30, 2011) shows a 2-paragraph long, entirely redacted violation (PDF 10), which represents a probably more substantive discussion than the systematic overcollection that shut down the system in 2009.
Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.
It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:
Both the domestic Internet and phone dragnet limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in its pre-2009-shutdown incarnation), they both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required that someone sign off that that was why information from the dragnet was being disseminated.
Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.
But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.
Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.
The primary new responsibility is the requirement:
- to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]
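As a toy illustration of what that gate looks like in practice, here is a minimal contact-chaining sketch in which an analyst must supply a foreign intelligence justification before a chain can be started. The function name, record format, and hop logic are invented for this post, not drawn from any NSA system.

```python
from collections import defaultdict, deque

def contact_chain(edges, seed, hops, fi_justification):
    """Chain out from a seed identifier to a fixed number of hops.

    The justification check mirrors the requirement to enter a foreign
    intelligence (FI) justification before starting a chain.
    """
    if not fi_justification:
        raise ValueError("an FI justification is required to start a chain")
    graph = defaultdict(set)
    for a, b in edges:  # each edge is one contact event, treated as undirected
        graph[a].add(b)
        graph[b].add(a)
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # stop expanding past the hop limit
        for contact in graph[node]:
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {seed}
```

The point of the sketch is how thin the control is: the "justification" is just a string that must be non-empty, which is roughly the scale of friction a free-text justification field imposes.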
Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333 collected data and applied the protections for the latter authorities to both (particularly with regards to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.
That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, they made access rules to SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.
So one thing that happened with the Internet dragnet is that it had initial limits on purpose and who could access it. Along the way, NSA cheated those open, by arguing that people in different functional areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all NSA’s functional areas. A key part of the FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think they might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, they had not done so before.
The other thing that happened is that, starting in 2008, the government started putting FBI in a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.
In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.
FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.
Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.
In this same passage, the definition of metadata is curious.
For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.
I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).
Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.
So one big improvement between the old domestic Internet dragnet and SPCMA (and, to a lesser degree, 702) — an improvement from a dragnet-loving perspective, of course — is that the government can use it for any foreign intelligence purpose.
At several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regards to emergency searches), but it is, ostensibly at least, limited to counterterrorism.
One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.
Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of that Internet communication from providers, but also the kinds of thing they would obtain with a subpoena (and probably far more). As I’ve shown, these are the kinds of things you’d almost certainly get from Google (because that’s what you get with a few subpoenas) under 702 that you’d have to correlate using algorithms under the old Internet dragnet.
Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
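To make concrete why those extra data points matter, here is a minimal sketch of the correlation problem (names and record format invented for illustration): any two records sharing an identifier, whether an email address, phone number, cookie, or IP, get merged into one presumed identity. The richer the record, the more merges happen automatically rather than by algorithmic guesswork.

```python
from collections import defaultdict

class UnionFind:
    """Minimal union-find over hashable items."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def correlate(records):
    """records: list of sets of identifiers. Returns groups of record
    indices whose identifier sets overlap, directly or transitively."""
    uf = UnionFind()
    first_seen = {}  # identifier -> index of first record carrying it
    for i, identifiers in enumerate(records):
        uf.find(i)  # register the record even if it shares nothing
        for ident in identifiers:
            if ident in first_seen:
                uf.union(i, first_seen[ident])  # shared identifier: merge
            else:
                first_seen[ident] = i
    groups = defaultdict(set)
    for i in range(len(records)):
        groups[uf.find(i)].add(i)
    return list(groups.values())
```

Under the old dragnet, where a record might carry only an email and an IP, two sightings of the same person often share nothing and stay unmerged; a subpoena-rich 702 record carries many identifiers, so the merge happens for free.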
This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.
While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.
The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.
In addition, it’s clear that at least some of the experimental implementations on geolocation incorporated SPCMA data.
I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.
Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.
1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]
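For readers who find the quoted description abstract, the core of a co-traveler analytic like the one described can be sketched in a few lines. The record format and ranking are assumptions based only on the passage above (same network location, one-hour window), not on any actual implementation; the quoted text also describes an event-sampling optimization this sketch omits.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)  # the one-hour window from the quoted description

def co_travelers(events, target):
    """events: (selector, network_location, timestamp) sightings, where a
    network location might be something like a VLR. Returns candidate
    co-travelers ranked by how often they co-occur with the target."""
    by_location = defaultdict(list)
    for selector, location, ts in events:
        by_location[location].append((selector, ts))
    hits = defaultdict(int)
    for selector, location, ts in events:
        if selector != target:
            continue
        # any other phone seen at the same location within the window
        for other, other_ts in by_location[location]:
            if other != target and abs(other_ts - ts) <= WINDOW:
                hits[other] += 1
    return sorted(hits, key=hits.get, reverse=True)
```

Nothing in that loop asks whether any selector belongs to a US person, which is exactly what the footnote's "without regard to the nationality or location of the communicants" language describes.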
Now, aside from what this says about the dragnet database generally (because this makes it clear there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes it clear there is a way to geolocate US persons — because the entire point of SPCMA is to be able to analyze data including US persons, without even any limits on their location (meaning they could be in the US).
That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still permits) NSA to track who is traveling with whom using location data.
Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.
Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like Supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.
All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its final form (which is also true of the phone dragnet now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.
Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, but also the inclusion of all Americans.
I’m working on a post responding to this post from Chelsea Manning calling to abolish the FISA Court. Spoiler alert: I largely agree with her, but I think the question is not that simple.
As background to that post, I wanted to shift the focus from a common perception of the FISC — that it is a rubber stamp that approves all requests — to a better measure of the FISC — the multiple ways it has tried to rein in the Executive. I think the FISC has, at times, been better at doing so than often given credit for. But as I’ll show in my larger post, those efforts have had limited success.
The primary tool the FISC uses in policing the Executive is minimization procedures approved by the court. Royce Lamberth unsuccessfully tried to use minimization procedures to limit the use of FISA-collected data in prosecutions (and also, tools for investigation, such as informants). Reggie Walton was far more successful at using and expanding very detailed limits on the phone — and later, the Internet — dragnet to force the government to stop treating domestically collected dragnet data under its own EO 12333 rules and start treating it under the more stringent FISC-imposed rules. He even shut down the Internet dragnet in fall (probably October 30) 2009 because it did not abide by limits imposed 5 years earlier by Colleen Kollar-Kotelly.
There was also a long-running discussion (that involved several briefs in 2006 and 2009, and a change in FISC procedure in 2010) about what to do with Post Cut Through Dialed Digits (those things you type in after a call or Internet session has been connected) collected under pen registers. It appears that FISC permitted (and probably still permits) the collection of that data under FISA (that was not permitted under Title III pen registers), but required the data get minimized afterwards, and for a period over collected data got sequestered.
Perhaps the most important use of minimization procedures, however, came when Internet companies stopped complying with NSLs requiring data in 2009, forcing the government to use Section 215 orders to obtain the data. By all appearances, the FISC imposed minimization procedures and reviewed compliance with them until FBI, more than 7 years after being required to, finally adopted minimization procedures for Section 215. This surely resulted in a lot less innocent person data being collected and retained than under NSL collection. Note that this probably imposed a higher standard of review on this bulky collection of data than what existed at magistrate courts, though some magistrates started trying to impose what are probably similar requirements in 2014.
Such oversight provides one place where USA Freedom Act is a clear regression from what is (today, anyway) in place. Under current rules, when the government submits an application retroactively for an emergency search of the dragnet, the court can require the government to destroy any data that should not have been collected. Under USAF, the Attorney General will police such things under a scheme that does not envision destroying improperly collected data at all, and even invites the parallel construction of it.
The FISC has also had some amount — perhaps significant — success in making the Executive use a more restrictive First Amendment review than it otherwise would have. Kollar-Kotelly independently imposed a First Amendment review on the Internet dragnet in 2004. First Amendment reviews were implicated in the phone dragnet changes Walton pushed in 2009. And it appears that in the government’s first uses of the emergency provision for the phone dragnet, it may have bypassed First Amendment review — at least, that’s the most logical explanation for why FISC explicitly added a First Amendment review to the emergency provision last year. While I can’t prove this with available data, I strongly suspect more stringent First Amendment reviews explain the drop in dragnet searches every time the FISC increased its scrutiny of selectors.
In most FISA surveillance, there is supposed to be a prohibition on targeting someone for their First Amendment protected activities. Yet given the number of times FISC has had to police that, it seems that the Executive uses a much weaker standard of First Amendment review than the FISC. Which should be a particularly big concern for National Security Letters, as they ordinarily get no court review (one of the NSL challenges that has been dismissed seemed to raise First Amendment concerns).
On at least two occasions, the FISC has taken notice of and required briefing after magistrate judges found a practice also used under FISA to require a higher standard of evidence. One was the 2009 PCTDD discussion mentioned above. The other was the use of combined orders to get phone records and location data. And while the latter probably resulted in other ways the Executive could use FISA to obtain location data, it suggests the FISC has paid close attention to issues being debated in magistrate courts (though that may have more to do with the integrity of then National Security Assistant Attorney General David Kris than the FISC itself; I don’t have high confidence it is still happening). To the extent this occurs, it is more likely that FISA practices will all adjust to new standards of technology than traditional courts, given that other magistrates will continue to approve questionable orders and warrants long after a few individually object, and given that an individual objection isn’t always made public.
Finally, the FISC has limited Executive action by limiting the use and dissemination of certain kinds of information. During Stellar Wind, Lamberth and Kollar-Kotelly attempted to limit or at least know which data came from Stellar Wind, thereby limiting its use for further FISA warrants (though it’s not clear how successful that was). The known details of dragnet minimization procedures included limits on dissemination (which were routinely violated until the FISC expanded them).
More recently John Bates twice pointed to FISA Section 1809(a)(2) to limit the government’s use of data collected outside of legal guidelines. He did so first in 2010 when he limited the government’s use of illegally collected Internet metadata. He used it again in 2011 when he used it to limit the government’s access to illegally collected upstream content. However, I think it likely that after both instances, the NSA took its toys and went elsewhere for part of the relevant collection, in the first case to SPCMA analysis on EO 12333 collected Internet metadata, and in the second to CISA (though just for cyber applications). So long as the FISC unquestioningly accepts EO 12333 evidence to support individual warrants and programmatic certificates, the government can always move collection away from FISC review.
Moreover, with USAF, Congress partly eliminated this tool as a retroactive control on upstream collection; it authorized the use of data collected improperly if the FISC subsequently approved retention of it under new minimization procedures.
These tools have been of varying degrees of usefulness. But FISC has tried to wield them, often in places where all but a few Title III courts were not making similar efforts. Indeed, there are a few collection practices where the FISC probably imposed a higher standard than TIII courts, and probably many more where FISC review reined in collection that didn’t have such review.
I’ve been wracking my brain to understand why the Intel Community has been pushing CISA so aggressively.
I get why the Chamber of Commerce is pushing it: because it sets up a regime under which businesses will get broad regulatory immunity in exchange for voluntarily sharing their customers’ data, even if they’re utterly negligent from a security standpoint, while also making it less likely that information their customers could use to sue them would become public. For the companies, it’s about sharply curtailing the risk of (charitably) having imperfect network security or (more realistically, in some cases) being outright negligent. CISA will minimize some of the business costs of operating in an insecure environment.
But why — given that it makes it more likely businesses will wallow in negligence — is the IC so determined to have it, especially when generalized sharing of cyber threat signatures has proven ineffective in preventing attacks, and when there are far more urgent things the IC should be doing to protect themselves and the country?
Richard Burr and Dianne Feinstein’s move the other day — in the guise of ensuring DHS gets to continue to scrub data on intake, instead giving the rest of the IC veto power over that scrub (which almost certainly means the bill is substantially a means of eliminating the privacy role DHS currently plays) — leads me to believe the IC plans to use this as they might have used (or might be using) a cyber certification under upstream 702.
Since NYT and ProPublica caught up to my much earlier reporting on the use of upstream 702 for cyber, people have long assumed that CISA would work with upstream 702 authority to magnify the way upstream 702 works. Jonathan Mayer described how this might work.
This understanding of the NSA’s domestic cybersecurity authority leads to, in my view, a more persuasive set of privacy objections. Information sharing legislation would create a concerning surveillance dividend for the agency.
Because this flow of information is indirect, it prevents businesses from acting as privacy gatekeepers. Even if firms carefully screen personal information out of their threat reports, the NSA can nevertheless intercept that information on the Internet backbone.
Note that Mayer’s model assumes the Googles and Verizons of the world make an effort to strip private information, then NSA would use the signature turned over to the government under CISA to go get the private information just stripped out. But Mayer’s model — and the ProPublica/NYT story — never considered how the 2011 John Bates ruling on upstream collection might hinder that model, particularly as it pertains to domestically collected data.
As I laid out back in June, NSA’s optimistic predictions they’d soon get an upstream 702 certificate for cyber came in the wake of John Bates’ October 3, 2011 ruling that the NSA had illegally collected US person data. Of crucial importance, Bates judged that data obtained in response to a particular selector was intentionally, not incidentally, collected (even though the IC and its overseers like to falsely claim otherwise), even data that just happened to be collected in the same transaction. Crucially, pointing back to his July 2010 opinion on the Internet dragnet, Bates said that disclosing such information, even just to the court or internally, would be a violation of 50 USC 1809(a), which he used as leverage to make the government identify and protect any US person data collected using upstream collection before otherwise using the data. I believe this decision established a precedent for upstream 702 that would make it very difficult for FISC to permit the use of cyber signatures that happened to be collected domestically (which would count as intentional domestic collection) without rigorous minimization procedures.
The government, at a time when it badly wanted a cyber certificate, considered appealing his decision, but ultimately did not. Instead, they destroyed the data they had illegally collected and — in what was almost certainly a related decision — destroyed all the PATRIOT-authorized Internet dragnet data at the same time, December 2011. Bates did permit the government to keep collecting upstream data, but only under more restrictive minimization procedures.
Neither ProPublica/NYT nor Mayer claimed NSA had obtained an upstream cyber certificate (though many other people have assumed it did). We actually don’t know, and the evidence is mixed.
Even as the government was scrambling to implement new upstream minimization procedures to satisfy Bates’ order, NSA had another upstream violation. That might reflect informing Bates, for the first time (there’s no sign they did inform him during the 2011 discussion, though the 2011 minimization procedures may reflect that they already had), that they had been using upstream to collect on cyber signatures, or it might represent some other kind of illegal upstream collection. When the government got Congress to reauthorize FAA that year, it did not inform them they were using or intended to use upstream collection to collect cyber signatures. Significantly, even as Congress began debating FAA, they considered but rejected the first of the predecessor bills to CISA.
My guess is that the FISC did approve cyber collection, but did so with some significant limitations on it, akin to, or perhaps even more restrictive than, the restrictions on multiple communication transactions (MCTs) required in 2011. I say that, in part, because of language in USA F-ReDux (section 301) permitting the government to use information improperly collected under Section 702 if the FISA Court imposed new minimization procedures. While that might have just referred back to the hypothetical 2011 example (in which the government had to destroy all the data), I think it as likely that Congress was trying to permit the government to retain data questioned later.
Additionally, nothing in these procedures shall restrict NSA’s ability to conduct vulnerability or network assessments using information acquired pursuant to section 702 of the Act in order to ensure that NSA systems are not or have not been compromised. Notwithstanding any other section in these procedures, information used by NSA to conduct vulnerability or network assessments may be retained for one year solely for that limited purpose. Any information retained for this purpose may be disseminated only in accordance with the applicable provisions of these procedures.
That is, the FISC approved new procedures that permit the retention of vulnerability information for use domestically, but it placed even more restrictions on it (retention for just one year, retention solely for the defense of that agency’s network, which presumably prohibits its use for criminal prosecution, not to mention its dissemination to other agencies, other governments, and corporations) than it had on MCTs in 2011.
To be sure, there is language in both 2011 and 2014 NSA MPs that permits the agency to retain and disseminate domestic communications if it is necessary to understand a communications security vulnerability.
the communication is reasonably believed to contain technical data base information, as defined in Section 2(i), or information necessary to understand or assess a communications security vulnerability. Such communication may be provided to the FBI and/or disseminated to other elements of the United States Government. Such communications may be retained for a period sufficient to allow a thorough exploitation and to permit access to data that are, or are reasonably believed likely to become, relevant to a current or future foreign intelligence requirement. Sufficient duration may vary with the nature of the exploitation.
But at least on its face, that language is about retaining information to exploit (offensively) a communications vulnerability. Whereas the more recent language — which is far more restrictive — appears to address retention and use of data for defensive purposes.
The 2011 ruling strongly suggested that FISC would interpret Section 702 to prohibit much of what Mayer envisioned in his model. And the addition to the 2014 minimization procedures leads me to believe FISC did approve very limited use of Section 702 for cyber security, but with such significant limitations on it (again, presumably stemming from 50 USC 1809(a)’s prohibition on disclosing data intentionally collected domestically) that the IC wanted to find another way. In other words, I suspect NSA (and FBI, which was working closely with NSA to get such a certificate in 2012) got their cyber certificate, only to discover it didn’t legally permit them to do what they wanted to do.
And while I’m not certain, I believe that in ensuring that DHS’ scrubs get dismantled, CISA gives the IC a way to do what it would have liked to with a FISA 702 cyber certificate.
Let’s go back to Mayer’s model of what the IC would probably like to do: A private company finds a threat, removes private data, leaving just a selector, after which NSA deploys the selector on backbone traffic, which then reproduces the private data, presumably on whatever parts of the Internet backbone NSA has access to via its upstream selection (which is understood to be infrastructure owned by the telecoms).
But in fact, Step 4 of Mayer’s model — NSA deploys the signature as a selector on the Internet backbone — is not done by the NSA. It is done by the telecoms (that’s the Section 702 cooperation part). So his model would really be private business > DHS > NSA > private business > NSA > treatment under NSA’s minimization procedures if the data were handled under upstream 702. Ultimately, the backbone operator is still going to be the one scanning the Internet for more instances of that selector; the question is just how much data gets sucked in with it and what the government can do once it gets it.
And that’s important because CISA codifies private companies’ authority to do that scan.
For all the discussion of CISA and its definition, there has been little discussion of what might happen at the private entities. But the bill affirmatively authorizes private entities to monitor their systems, broadly defined, for cybersecurity purposes.
(a) AUTHORIZATION FOR MONITORING.—
(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—
(A) an information system of such private entity;
(B) an information system of another entity, upon the authorization and written consent of such other entity;
(C) an information system of a Federal entity, upon the authorization and written consent of an authorized representative of the Federal entity; and
(D) information that is stored on, processed by, or transiting an information system monitored by the private entity under this paragraph.
(2) CONSTRUCTION.—Nothing in this subsection shall be construed—
(A) to authorize the monitoring of an information system, or the use of any information obtained through such monitoring, other than as provided in this title; or
(B) to limit otherwise lawful activity.
The bill defines “monitor” this way:
(14) MONITOR.—The term ‘‘monitor’’ means to acquire, identify, or scan, or to possess, information that is stored on, processed by, or transiting an information system.
That is, CISA affirmatively permits private companies to scan, identify, and possess cybersecurity threat information transiting or stored on their systems. It permits private companies to conduct precisely the same kinds of scans the government currently obligates telecoms to do under upstream 702, covering both data transiting their systems (which for the telecoms would be transiting their backbone) and data stored on their systems (so cloud storage). To be sure, big telecom and Internet companies do that anyway for their own protection, though this bill may extend the authority into cloud servers and competing tech company content that transits the telecom backbone. And it specifically does so in anticipation of sharing the results with the government, with very limited requirement to scrub the data beforehand.
Thus, CISA permits the telecoms to conduct the same kinds of scans they currently do for foreign intelligence purposes, but for cybersecurity purposes and, unlike the upstream 702 usage we know about, without any required foreign nexus. The people currently scanning the backbone can keep doing so, only now the results can be turned over to and used by the government without consideration of whether the signature has a foreign tie or not. Unlike FISA, CISA permits the government to collect entirely domestic data.
Of course, there’s no requirement that the telecoms scan for every signature the government shares with them, or that they share the results with the government. But both Verizon and AT&T have a significant chunk of federal business (which just got put out for rebid on a contract that will amount to $50 billion), and they surely would be asked to scan the networks supporting federal traffic for those signatures (remember, this entire model of scanning domestic backbone traffic got implicated in Qwest losing a federal bid, which led to Joe Nacchio’s prosecution). So they’ll be scanning some part of the networks they operate with the signatures. CISA just makes clear they can also scan their non-federal backbone if they want to. And the telecoms are outspoken supporters of CISA, so we should presume they plan to share promiscuously under this bill.
Assuming they do so, CISA offers several more improvements over FISA.
First — perhaps most important for the government — there are no pesky judges. The FISC gets a lot of shit for being a rubber stamp, but for years its judges have tried to keep the government operating in the vicinity of the Fourth Amendment through their role in reviewing minimization procedures. Even John Bates, who was largely a pushover for the IC, succeeded in getting the government to agree that it can’t disseminate domestic data that it intentionally collected. And if I’m right that the FISC gave the government a cyber certificate but sharply limited how it could use that data, then it did so on precisely this issue. Significantly, CISA continues a trend we already saw in USA F-ReDux, wherein the Attorney General gets to decide whether privacy procedures (no longer named minimization procedures!) are adequate, rather than a judge. Equally significant, while CISA permits the use of CISA-collected data for a range of prosecutions, unlike FISA, it requires no notice to defendants of where the government obtained that data.
In lieu of judges, CISA envisions PCLOB and Inspectors General conducting the oversight (as well as audits being possible though not mandated). As I’ll show in a follow-up post, there are some telling things left out of those reviews. Plus, the history of DOJ’s Inspector General’s efforts to exercise oversight over such activities offers little hope these entities, no matter how well-intentioned, will be able to restrain any problematic practices. After all, DOJ’s IG called out the FBI in 2008 for not complying with a 2006 PATRIOT Act Reauthorization requirement to have minimization procedures specific to Section 215, but it took until 2013, with three years of intercession from FISC and leaks from Edward Snowden, before FBI finally complied with that 2006 mandate. And that came before FBI’s current practice of withholding data from its IG and even some information in IG reports from Congress.
In short, given what we know of the IC’s behavior when there was a judge with some leverage over its actions, there is absolutely zero reason to believe that any abuses would be stopped under a system without any judicial oversight. The Executive Branch cannot police itself.
Finally, there’s the question of what happens at DHS. No matter what you think about NSA’s minimization procedures (and they do have flaws), they do ensure that data that comes in through NSA doesn’t get broadly circulated in a way that identifies US persons. The IC has increasingly bypassed this control since 2007 by putting FBI at the front of data collection, which means data can be shared broadly even outside of the government. But FISC never permitted the IC to do this with upstream collection. So any content (metadata was different) on US persons collected under upstream collection would be subjected to minimization procedures.
This CISA model eliminates that control too. After all, CISA, as written, would let FBI and NSA veto any scrub (including of content) at DHS. And incoming data (again, probably including content) would be shared immediately not only with FBI (which has been the vehicle for sharing NSA data broadly) but also Treasury and ODNI, which are both veritable black holes from a due process perspective. And what few protections there are for US persons are tied to a relevance standard that would be met simply by virtue of a tie to the selector. Thus, CISA would permit the immediate sharing, with virtually no minimization, of US person content across the government (and from there to the private sector and local governments).
I welcome corrections to this model — I presume I’ve overstated how much of an improvement over FISA this program would be. But if this analysis is correct, then CISA would give the IC everything it would have wanted from a cybersecurity certificate under Section 702, with none of the limits, inadequate as they were, that such a certificate would have had (and may in fact have). CISA would provide an administrative way to spy on US person (domestic) content, all without any judicial review.
All of which brings me back to why the IC wants this so badly. In at least one case, the IC did manage to use a combination of upstream and PRISM collection to stop an attempt to steal large amounts of data from a defense contractor. That doesn’t mean it’ll be able to do it at scale, but if by offering various kinds of immunity it can get all backbone providers to play along, it might be able to improve on that performance.
But CISA isn’t so much a cybersecurity bill as it is an Internet domestic spying bill, with permission to spy on a range of nefarious activities in cyberspace, including kiddie porn and IP theft. This bill, because it permits the spying on US person content, may be far more useful for that purpose than preventing actual hacks. That is, it won’t fix the hacking problem (it may make it worse by gutting Federal authority to regulate corporate cyber hygiene). But it will help police other kinds of activity.
If I’m right, the IC’s insistence that it needs CISA — in the name of, but not necessarily intending to accomplish, cybersecurity — makes more sense.
Update: This post has been tweaked for clarity.
Update, November 5: I should have written this post before I wrote this one. In it, I point to language in the August 26, 2014 Thomas Hogan opinion reflecting earlier approval, at least in the FBI minimization procedures, to share cyber signatures with private entities. The first approval was on September 20, 2012. The FISC approved the version still active in 2014 on August 30, 2013. (See footnote 19.) That certainly suggests FISC approved cyber sharing more broadly than the 2011 opinion might have suggested, though I suspect it still included more restrictions than CISA would. Moreover, if the language only got approved for the FBI minimization procedures, it would apply just to PRISM production, given that the FBI does not (or at least didn’t use to) get unminimized upstream production.
In the series of letters purporting to speak for “the judiciary,” Director of the Administrative Office of US Courts John Bates and, after he replaced Bates in that role, James Duff expressed concern about how a FISC amicus would affect the timeliness of proceedings before the court. Bates worried that any involvement of an amicus would require even more lead time than the current one-week requirement for FISC applications. He also worried that the presumption that an amicus (and potentially tech experts) would have access to information might set off disputes with the Executive over whether they could really have it. Duff apparently worried that the perception that an amicus would oppose the government would lead the government to delay in handing over materials to the FISC.
Which is why I’m interested in the briefing order Chief FISC Judge Thomas Hogan, signing for Michael Mosman, issued on Wednesday (see below for a timeline).
Back on September 17, Mosman appointed spook lawyer Preston Burton amicus. As part of that order, he gave the government 4 days to refuse to share information with Burton, but otherwise required Burton receive the application and primary order in this docket.
Pursuant to 50 U.S.C. § 1803(i)(6)(A)(i), the Court has determined that the government’s application (including exhibits and attachments) and the full, unredacted Primary Order in this docket are relevant to the duties of the amicus. By September 22, 2015, or after receiving confirmation from SEPS that the amicus has received the appropriate clearances and access approvals for such materials, whichever is later, the Clerk of the Court shall make these materials available to the amicus.
Yet even after the almost month-long delay in deciding to appoint someone and deciding that someone would be Burton, it still took Mosman two weeks after the date when Burton was supposed to have received the relevant information on this issue before setting deadlines. And in setting his deadlines, Mosman has basically left himself only 2 weeks during which he will have to decide the issue and the government will have to prepare to keep or destroy the data in question (in past data destruction efforts this has taken a fairly long time). That could be particularly problematic if Mosman ends up requiring the government to pull the data from EFF’s clients from the data retained under their protection order.
On November 28, the order authorizing the retention of this data expires.
To be fair, Mosman is definitely making a more concerted effort to comply with the appearance, if not the intent, of USA F-ReDux’s amicus provision than, say, Dennis Saylor (who blew it off entirely). And there may be aspects of this process, and of FISC’s presumed effort to have a panel of amici in place by November 29, that take more time now than they will down the road.
Still, it’s hard to understand the almost 3 week delay in setting a briefing schedule.
Unless the government slow-walked the process: first the approval for even a spook lawyer (one not explicitly ordered to represent the interests of privacy) to receive the materials, and then the delivery of the packet of documents for him to review.
I suspect this represents a stall by the government, not FISC (though again, the month long delay in deciding to appoint an amicus didn’t help things, and FISC’s thus far 4 month delay in picking amici likely doesn’t help either). But whatever the cause of the delay, it may indicate a reluctance on someone’s part to use the amicus as intended.
July 27: ODNI declares that “NSA has determined” that “NSA will allow technical personnel to continue to have access to the historical metadata for an additional three months”
By August 20: Government asks for permission to retain data past November 28 (the government must submit major FISA orders at least a week in advance)
August 27: Mosman approves dragnet order, defers decision on data retention
September 17: Mosman appoints Burton and orders the government to cough up its application and the full order
September 21: Last date by which government can complain about sharing information with Burton
September 22: Date by which Burton must receive application and order
October 7: Mosman sets deadlines
October 29: Deadline for Burton’s first brief
November 6: Deadline for Government response
November 10: Deadline for Burton reply, if any
November 28: Expiration of authorization to retain data