There’s a paper that has been making waves, claiming to have found a formula for debunking conspiracy theories based on the likelihood that, if they were real, they would already have been leaked. Never mind that people have already found fault with the math; the study has another glaring flaw. It treats the PRISM program — and not, say, the phone dragnet — as one of its “true” unknown conspiracies.
PRISM — one part of the surveillance program authorized by Section 702 of the FISA Amendments Act — was remarkable in that it was legislated in public. There are certainly parts of Section 702 that were not widely known, such as the details of the “upstream” collection from telecom switches, but even that got explained to us back in 2006 by Mark Klein. There are, admittedly, details of how the PRISM collection worked — its reliance on network mapping, the full list of participants — that remain unknown. And there are details that were exposed, such as the fact that the government was doing back door searches on content collected under it, but even those were logical guesses based on the public record of the legislative debates.
Which is why it is so remarkable that — as I noted here and here — House Judiciary Committee Chair Bob Goodlatte has scheduled a classified hearing to cover the program that has been the subject of open hearings going back to at least 2008.
The hearing is taking place as we speak with the following witnesses.
This suggests there is either something about the program we don’t already know, or that the government is asking for changes to the program that would extend beyond the basic concept of spying on foreigners with the help of providers in the US.
I guess we’re stuck wildarseguessing what those big new secrets are, given the Intelligence Community’s newfound secrecy about this program.
Some observations about the witnesses. First, between Litt and Evans, these are the lawyers who would oversee the yearly certification applications to the FISC. That suggests the government may, in fact, be asking for new authorities or new interpretations of existing ones.
Darby would be in charge of the technical side of this program. Since PRISM as it currently exists is technologically simple, the new secrets may involve a new application of what the government will request from providers. This might be an expansion of upstream, possibly to bring it closer to the XKeyscore deployment overseas, possibly to better exploit Tor. Remember, too, that under the USA Freedom Act, Congress authorized the use of data collected improperly, provided it adheres to the new minimization procedures imposed by the FISC. That provision almost certainly concerned another upstream collection, which means there’s likely some exotic new upstream application that has caused the government problems of late.
Note that the sole FBI witness oversees counterterrorism, not cybersecurity. That’s interesting because it would support my suspicion that the government is achieving its cybersecurity collection via other means now. But it also suggests any new programs may fall under the counterterrorism function. Remember, the NatSec bosses, including Jim Comey, just went to Silicon Valley to ask for help applying algorithms to identify terrorism content. Remember, too, that such applications would have been useless in preventing the San Bernardino attack if focused only on public social media content. So it may be that NSA and FBI want to apply algorithms identifying radicalizers to private content.
Finally, and critically, remember the Apple debate. In a public court case, Apple and the FBI are fighting over whether Apple can be required to decrypt its customers’ smart device communications. The government has argued this is within the legal notion of “assistance to law enforcement.” Apple disagrees. I think it quite possible that the FBI would try to ask for decryption help to be included under the definition of “assistance” under Section 702. Significantly, these witnesses are generally those (including Bob Litt and FBI counterterrorism) who would champion such an interpretation.
Last year, House Homeland Security Chair Michael McCaul offered up his rear-end to be handed back to him in negotiations leading to the passage of OmniCISA on last year’s omnibus. McCaul was probably the only person who could have objected to such a legislative approach because it deprived him of weighing in as a conferee. While he made noise about doing so, ultimately he capitulated and let the bill go through — and be made less privacy protective — as part of the must-pass budget bill.
Which is why I was so amused by McCaul’s op-ed last week, including passage of OmniCISA among the things he has done to make the country more safe from hacks. Here was a guy, holding his rear-end in his hands, plaintively denying that, by claiming that OmniCISA reinforced his turf.
I was adamant that the recently-enacted Cybersecurity Act include key provisions of my legislation H.R. 1731, the National Cybersecurity Protection Advancement Act. With this law, we now have the ability to be more efficient while protecting both our nation’s public and private networks.
With these new cybersecurity authorities signed into law, the Department of Homeland Security (DHS) will become the sole portal for companies to voluntarily share information with the federal government, while preventing the military and NSA from taking on this role in the future.
With this strengthened information-sharing portal, it is critical that we provide incentives to private companies who voluntarily share known cyber threat indicators with DHS. This is why we included liability protections in the new law to ensure all participants are shielded from the reality of unfounded litigation.
While security is vital, privacy must always be a guiding principle. Before companies can share information with the government, the law requires them to review the information and remove any personally identifiable information (PII) unrelated to cyber threats. Furthermore, the law tasks DHS and the Department of Justice (DOJ) to jointly develop the privacy procedures, which will be informed by the robust existing DHS privacy protocols for information sharing.
Given DHS’ clearly defined lead role for cyber information sharing in the Cybersecurity Act of 2015, my Committee and others will hold regular oversight hearings to make certain there is effective implementation of these authorities and to ensure American’s privacy and civil liberties are properly protected.
It is true that under OmniCISA, DHS is currently (that is, as of February 1) the sole portal for cyber-sharing. It’s also true that OmniCISA added DHS, along with DOJ, to those in charge of developing privacy protocols. There are also other network defense measures OmniCISA tasked DHS with — though with the move of the clearances function to DOD earlier in January (along with the budget OPM had been requesting, and not getting, to do the job right), the government has apparently adopted a preference for moving its sensitive functions to networks DOD (that is, NSA) will guard rather than DHS. But McCaul’s bold claims really make me wonder about the bureaucratic battles that may well be going on as we speak.
Here’s how I view what actually happened with the passage of OmniCISA. My view is heavily influenced by these three Susan Hennessey posts, in which she tried to convince readers that DHS’ previously existing portal ensured privacy would be protected, though by the end she seemed to concede that’s not how things might work out.
Underlying the entire OmniCISA passage is a question: Why was it necessary? Boosters explained that corporations wouldn’t share willingly without all kinds of immunities, which is surely true, but the same boosters never explained why an info-sharing system was so important when experts were saying it was way down the list of things that could make us safer, and similar info-sharing has proven not to be a silver bullet. Similarly, boosters did not explain the value of a system that not only did nothing to require that cyber information shared with corporations be used to protect their networks, but, by giving them immunity (in final passage) even if they did nothing with the information and then got pwned, made it less likely they would use the data. Finally, boosters ignored the ways in which OmniCISA not only creates privacy risks, but also opens new potential vectors of attack or counterintelligence collection for our adversaries.
So why was it necessary, especially given the many obvious ways in which it was not optimally designed to encourage monitoring, sharing, and implementation from network owners? Why was it necessary, aside from the fact that our Congress has become completely unable to demand corporations do anything in the national interest and there was urgency to pass something, anything, no matter how stinky?
Indeed, why was legislation necessary at all, beyond creating some (but not all) of these immunities, if, as former NSA lawyer Hennessey claimed, the portal laid out in OmniCISA in fact got up and running on October 31, between the time CISA passed the Senate and the time it got weakened significantly and rammed through Congress on December 18?
At long last DHS has publically unveiled its new CISA-sanctioned, civil-liberties-intruding, all-your-personal-data-grabbing, information-sharing uber vacuum. Well, actually, it did so three months ago, right around the time these commentators were speculating about what the system would look like. Yet even as the cleverly-labeled OmniCISA passed into law last month, virtually none of the subsequent commentary took account of the small but important fact that the DHS information sharing portal has been up and running for months.
Hennessey appeared to think this argument was very clever, to suggest that “virtually no” privacy advocates (throughout her series she ignored that opposition came from privacy and security advocates) had talked about DHS’ existing portal. She must not have Googled that claim, because if she had, it would have become clear that privacy (and security) people had discussed DHS’ portal back in August, before the Senate finalized CISA.
Back in July, Al Franken took the comedic step of sending a letter to DHS basically asking, “Say, you’re already running the portal that is being legislated in CISA. What do you think of the legislation in its current form?” And DHS wrote back and noted that the portal being laid out in CISA (and the other sharing permitted under the bill) was different in several key ways from what it was already implementing.
Its concerns included:
As noted, that exchange took place in July (most responses to it appeared in August). While a number of amendments addressing DHS’ concerns were proposed in the Senate, I’m aware of only two that got integrated into the bill that passed: an Einstein (that is, federal network monitoring) related request, and DHS got added — along with the Attorney General — in the rules-making function. McCaul mentioned both of those things, along with hailing the “more efficient” sharing that may refer to the real-time versus almost real-time sharing, in his op-ed.
The Senate not only failed to respond to most of the concerns DHS raised; as I noted in another post on the portal, it also gave other agencies veto power over DHS’ scrub (this was sort of the quid pro quo of including DHS in the rule-making process, and it was how the Ranking Member on the Senate Homeland Security Committee, Tom Carper, got co-opted on the bill), which exacerbated the real-time versus almost-real-time sharing problem.
All that happened by October 27, days before the portal based on Obama’s executive order got fully rolled out. The Senate literally passed changes to the portal as DHS was running it days before it went into full operation.
Meanwhile, one more thing happened: as mandated by the Executive Order underlying the DHS portal, the Privacy and Civil Liberties Oversight Board helped DHS set up its privacy measures. This is, as I understand it, the report Hennessey cites when pointing to all the privacy protections that will make OmniCISA’s elimination of warrant requirements safe.
Helpfully, DHS has released its Privacy Impact Assessment of the AIS portal which provides important technical and structural context. To summarize, the AIS portal ingests and disseminates indicators using—acronym alert!—the Structured Threat Information eXchange (STIX) and Trusted Automated eXchange of Indicator Information (TAXII). Generally speaking, STIX is a standardized language for reporting threat information and TAXII is a standardized method of communicating that information. The technology has many interesting elements worth exploring, but the critical point for legal and privacy analysis is that by setting the STIX TAXII fields in the portal, DHS controls exactly which information can be submitted to the government. If an entity attempts to share information not within the designated portal fields, the data is automatically deleted before reaching DHS.
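Mechanically, the schema-gating Hennessey describes amounts to a whitelist filter at intake: whoever defines the accepted fields defines what can reach the government at all. Here is a minimal sketch of that idea; the field names and the drop-nonconforming-data behavior are hypothetical illustrations, not the actual STIX/TAXII validation AIS performs.

```python
# Illustrative only: these field names are hypothetical stand-ins, not the
# actual STIX/TAXII schema that AIS enforces.
ALLOWED_FIELDS = {"indicator_type", "observable", "valid_time", "tlp_marking"}

def ingest(submission: dict) -> dict:
    """Keep only schema-designated fields; anything else (stray PII, say)
    is deleted before the record ever reaches the government side."""
    return {k: v for k, v in submission.items() if k in ALLOWED_FIELDS}

record = ingest({
    "indicator_type": "ip-watchlist",
    "observable": "203.0.113.7",
    "customer_name": "Jane Doe",   # out-of-schema PII: silently dropped
})
# record retains only the schema fields; "customer_name" never arrives
```

The point Hennessey draws from this design is that whoever sets the allowed fields controls what the government can receive at all, which is precisely the leverage that, as discussed below, the veto provisions put at risk.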
In other words, the scenario is precisely the reverse of what Hennessey describes: DHS set up a portal, and then the Senate tried to change it in many ways that DHS said, before passage, would weaken the privacy protections in place.
Now, Hennessey does acknowledge some of the ways OmniCISA weakened privacy provisions that were in DHS’ portal. She notes, for example, that the Senate added a veto on DHS’ privacy scrubs, but suggests that, because DHS controls the technical parameters, it will be able to overcome this veto.
At first read, this language would appear to give other federal agencies, including DOD and ODNI, veto power over any privacy protections DHS is unable to automate in real-time. That may be true, but under the statute and in practice DHS controls AIS; specifically, it sets the STIX TAXXI fields. Therefore, DHS holds the ultimate trump card because if that agency believes additional privacy protections that delay real-time receipt are required and is unable to convince fellow federal entities, then DHS is empowered to simply refuse to take in the information in the first place. This operates as a rather elegant check and balance system. DHS cannot arbitrarily impose delays, because it must obtain the consent of other agencies, if other agencies are not reasonable DHS can cut off the information, but DHS must be judicious in exercising that option because it also loses the value of the data in question.
This seems to flip Youngstown on its head, suggesting the characteristics of the portal laid out in an executive order and changed in legislation take precedence over the legislation.
Moreover, while Hennessey does discuss the threat of the other portal — one of the features added in the OmniCISA round with no debate — she puts it in a different post from her discussion of DHS’ purported control over technical intake data (and somehow portrays it as having “emerged from conference with the new possibility of an alternative portal” even though no actual conference took place, which is why McCaul is stuck writing plaintive op-eds while holding his rear-end). This means that, after writing a post talking about how DHS would have the final say on protecting privacy by controlling intake, Hennessey wrote another post that suggested DHS would have to “get it right” or the President would order up a second portal without all the privacy protections that DHS’ portal had in the first place (and which it had already said would be weakened by CISA).
Such a portal would, of course, be subject to all statutory limitations and obligations, including codified privacy protections. But the devil is in the details here; specifically, the details coded into the sharing portal itself. CISA does not obligate that the technical specifications for a future portal be as protective as AIS. This means that it is not just the federal government and private companies who have a stake in DHS getting it right, but privacy advocates as well. The balance of CISA is indeed delicate.
Elsewhere, Hennessey admits that many in government think DHS is a basket-case agency (an opinion I’m not necessarily in disagreement with). So it’s unclear how DHS would retain any leverage over the veto given that exercising such leverage would result in DHS losing this portfolio altogether. There was a portal designed with privacy protections, CISA undermined those protections, and then OmniCISA created yet more bureaucratic leverage that would force DHS to eliminate its privacy protections to keep the overall portfolio.
Plus, OmniCISA did two more things. First, as noted, back in July DHS said it would need 180 days to fully tweak its existing portal to match the one ordered up in CISA. CISA and OmniCISA didn’t care: the bill and the law retained the 90-day turnaround. But in addition, OmniCISA required that DHS and the Attorney General develop their interim set of guidelines within 60 days (a period which, as it happened, included the Christmas holiday). That 60-day deadline falls around February 16. The President can’t declare the need for a second portal until after the DHS one gets certified, which has a 90-day deadline (so March 18), but he can give 30 days’ notice beforehand that that’s going to happen. In other words, the President can determine, after seeing what DHS and AG Lynch come up with in a few weeks, that their guidelines are too privacy-restrictive and tell Congress the FBI needs its own portal, something that did not and would not have passed under regular legislative order.
Finally, as I noted, PCLOB had been involved in setting up the privacy parameters for DHS’ portal, including the report that Hennessey points to as the basis for comfort about OmniCISA’s privacy risk. In final passage of OmniCISA, a PCLOB review of the privacy impact of OmniCISA, which had been included in every single version of the bill, got eliminated.
Hennessey’s seeming admission that this is the eventual likelihood emerges over the course of her posts as well. In her first post, she claims,
From a practical standpoint, the government does not want any information—PII or otherwise—that is not necessary to describe or identify a threat. Such information is operationally useless and costly to store and properly handle.
But in explaining the reason for a second portal, she notes that there is (at least) one agency included in OmniCISA sharing that does want more information: FBI.
[T]here are those who fear that awarding liability protection exclusively to sharing through DHS might result in the FBI not getting information critical to the investigation of computer crimes. The merits of the argument are contested but the overall intention of CISA is certainly not to result in the FBI getting less cyber threat information. Hence, the fix.
AIS is not configured to receive the full scope of cyber threat information that might be necessary to the investigation of a crime. And while CISA expressly permits sharing with law enforcement – consistent with all applicable laws – for the purposes of opening an investigation, the worry here is that companies that are the victims of hacks will share those threat indicators accepted by AIS, but not undertake additional efforts to lawfully share threat information with an FBI field office in order to actually investigate the crime.
That is, having decided that the existing portal wasn’t good enough because it didn’t offer enough immunities (and because it was too privacy protective), the handful of mostly Republican leaders negotiating OmniCISA outside of normal debate then created the possibility of extending those protections to a completely different kind of information sharing, that of content shared for law enforcement.
In her final post, Hennessey suggests some commentators (hi!!) who might be concerned about FBI’s ability to offer immunity for those who share domestically collected content willingly are “conspiracy-minded” even while she reverts to offering solace in the DHS portal protections that, her series demonstrates, are at great risk of bureaucratic bypass.
But these laws encompass a broad range of computer crimes, fraud, and economic espionage – most controversially the Computer Fraud and Abuse Act (CFAA). Here the technical constraints of the AIS system cut both ways. On one hand, the scope of cyber threat indicators shared through the portal significantly undercuts claims CISA is a mass surveillance bill. Bluntly stated, the information at issue is not of all that much use for the purposes certain privacy-minded – and conspiracy-minded, for that matter – critics allege. Still, the government presumably anticipates using this information in at least some investigations and prosecutions. And not only does CISA seek to move more information to the government – a specific and limited type of information, but more nonetheless – but it also authorizes at least some amount of new sharing.
That question ultimately resolves to which STIX TAXII fields DHS decides to open or shut in the portal. So as CISA moves towards implementation, the portal fields – and the privacy interests at stake in the actual information being shared – are where civil liberties talk should start.
Hennessey’s ultimate conclusion does point to one area where privacy (and security) advocates might weigh in: when the government provides Congress the interim guidelines sometime this month, advocates may have an opportunity to comment, if they get a copy of the guidelines. But only the final guidelines are required to be made public.
And by then, it would be too late. Through a series of legislative tactics, some involving actual debate but some of the most important simply slapped onto a must-pass legislation, Congress has authorized the President to let the FBI, effectively, obtain US person content pertaining to Internet-based crimes without a warrant. Even if President Obama chooses not to use that authorization (or obtains enough concessions from DHS not to have to directly), President Trump may not exercise that discretion.
Maybe I am being conspiratorial in watching the legislative changes made to a bill (and to an existing portal) and, absent any other logical explanation for them, concluding those changes are designed to do what they look like they’re designed to do. But it turns out privacy (and security) advocates weren’t conspiratorial enough to prevent this from happening before it was too late.
On May 18, 2011, 48 members of the House (mostly Republicans, but also including MI’s Hansen Clarke) attended a closed briefing given by FBI Director Robert Mueller and General Counsel Valerie Caproni on the USA PATRIOT Act authorities up for reauthorization. The briefing would serve as the sole opportunity for newly elected members to learn about the phone and Internet dragnets conducted under the PATRIOT Act, given Mike Rogers’ decision not to distribute the letter provided by DOJ to inform members about the secret dragnets they were about to reauthorize.
During the hearing, someone asked,
Russ Feingold said that Section 215 authorities have been abused. How does the FBI respond to that accusation?
One of the briefers — the summary released under FOIA does not say who — responded,
To the FBI’s knowledge, those authorities have not been abused.
As a reminder, hearing witness Robert Mueller had to write and sign a declaration for the FISC two years earlier to justify resuming full authorization for the phone dragnet because, as Judge Reggie Walton had discovered, the NSA had conducted “daily violations of the minimization procedures” for over two years. “The minimization procedures proposed by the government in each successive application and approved and adopted as binding by the orders of the FISC have been so frequently and systemically violated that it can fairly be said that this critical element of the overall BR regime has never functioned effectively,” Walton wrote in March 2009.
Now, I can imagine that whichever FBI witness claimed the FBI didn’t know about any “abuses” rationalized the answer to him or herself using the same claim the government has repeatedly made — that these were not willful abuses. But Walton stated then — and evidence released since has made clear he was right — that the government simply chose to subject the vast amount of US person data collected under the PATRIOT Act to EO 12333 standards, not the more stringent PATRIOT Act ones. That is, the NSA, operating under FBI authorizations, made a willful choice to ignore the minimization procedures imposed by the 2006 reauthorization of the Act.
Whoever answered that question in 2011 lied, and lied all the more egregiously given that the questioner had no way of phrasing it to get an honest answer about violations of minimization procedures.
Which is why the House Judiciary Committee should pointedly refuse to permit the Intelligence Community to conduct another such closed briefing, as it plans to do on Section 702 on February 2. Holding a hearing in secret permits the IC to lie to Congress, not to mention disinform some members in a venue where their colleagues cannot correct the record (as Feingold might have done in 2011 had he learned what the FBI witnesses said in that briefing).
I mean, maybe HJC Chair Bob Goodlatte wants to be lied to? Otherwise, there’s no sound explanation for scheduling this entire hearing in closed session.
In a superb column today, Alvaro Bedoya recalls the long, consistent history during which people of color and other minorities, including Martin Luther King, Jr., were targeted in the name of national security.
The FBI’s violations against King were undeniably tinged by what historian David Garrow has called “an organizational culture of like-minded white men.” But as Garrow and others have shown, the FBI’s initial wiretap requests—and then–Attorney General Robert Kennedy’s approval of them—were driven by a suspected tie between King and the Communist Party. It wasn’t just King; Cesar Chavez, the labor and civil rights leader, was tracked for years as a result of vague, confidential tips about “a communist background,” as were many others.
Many people know that during World War II, innocent Americans of Japanese descent were surveilled and detained in internment camps. Fewer people know that in the wake of World War I, President Woodrow Wilson openly feared that black servicemen returning from Europe would become “the greatest medium in conveying Bolshevism to America.” Around the same time, the Military Intelligence Division created a special “Negro Subversion” section devoted to spying on black Americans. Near the top of its list was W.E.B. DuBois, a “rank Socialist” whom they tracked in Paris for fear he would “attempt to introduce socialist tendencies at the Peace Conference.”
I think Bedoya, as many people do, gives FBI Director Jim Comey a big pass on surveillance due to the Director’s stunt of having agents-in-training study what the Bureau did to King. I have written about how Comey’s claims to caution in the face of the MLK example don’t hold up against the Bureau’s current, known activities.
Comey engages in similar obfuscation when he points to the FBI’s treatment of Martin Luther King Jr., which he holds up to FBI Agents as a warning. The FBI Director describes the unlimited surveillance the Bureau subjected King to based solely on the signatures of Hoover and the Attorney General: “Open-ended. No time limit. No space restriction. No review. No oversight.” While it is true that the FBI now gets court approval to track civil rights leaders, they do track them, especially in the Muslim community. And without oversight, the FBI can and does infiltrate houses of worship with informants, as it did with African-American churches during the Civil Rights movement. The FBI can obtain phone and Internet metadata records without judicial oversight using National Security Letters — which it still can’t count accurately enough to fulfill congressionally mandated reporting. The FBI has many tools that evade the kind of oversight Comey described, and because of technology many of them are far more powerful than the tools wielded against Dr. King.
But I’m particularly interested in Bedoya’s reminder that the government targeted African Americans for surveillance as subversives in the wake of World War I.
The government’s practice of targeting specific kinds of people, often people of color, as subversives continued, after all. It’s something J. Edgar Hoover continued throughout his life, keeping a list of people to be rounded up if anything happened.
I’ve been thinking about that practice as I’ve been trying to explain, even to civil liberties supporters, why the current 2-degree targeted dragnet is still too invasive of privacy. We’ve been having this discussion for 2.5 years, and yet still most people don’t care that completely innocent people 2 degrees — 3, until 2014 — away from someone the government has a traffic-stop level of suspicion over will be subjected to the NSA’s “full analytic tradecraft.”
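Mechanically, that 2-degree rule is just a bounded breadth-first search over a communications graph, which is why the swept-in population gets big so fast. A toy sketch (illustrative only; the graph structure and fan-out figures are assumptions, not any agency’s actual code or parameters):

```python
from collections import deque

# Toy sketch of hop-limited contact chaining: starting from one "seed"
# identifier, a bounded breadth-first search collects every identifier
# within `hops` communication hops of the seed.

def contact_chain(graph, seed, hops=2):
    """graph maps each identifier to the set of identifiers it contacted."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # don't chain past the hop limit
        for contact in graph.get(node, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen

# If everyone has ~100 contacts, 2 hops from a single seed sweeps in roughly
# 100 + 100*100 = 10,100 identifiers; at 3 hops (the pre-2014 rule), over a million.
```

Everyone in that swept-in set, not just the seed, is what the post means by people “2 degrees away” who can then be subjected to full analysis.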
The discussion of a Subversives List makes me think of this article from 2007 (which I first wrote about here and here). The story explains that the thing that really freaked out the hospital “heroes” in 2004 was not the Internet dragnet itself, but instead the deployment of Stellar Wind against Main Core, which appears to be another name for this Subversives List.
While Comey, who left the Department of Justice in 2005, has steadfastly refused to comment further on the matter, a number of former government employees and intelligence sources with independent knowledge of domestic surveillance operations claim the program that caused the flap between Comey and the White House was related to a database of Americans who might be considered potential threats in the event of a national emergency. Sources familiar with the program say that the government’s data gathering has been overzealous and probably conducted in violation of federal law and the protection from unreasonable search and seizure guaranteed by the Fourth Amendment.
According to a senior government official who served with high-level security clearances in five administrations, “There exists a database of Americans, who, often for the slightest and most trivial reason, are considered unfriendly, and who, in a time of panic, might be incarcerated. The database can identify and locate perceived ‘enemies of the state’ almost instantaneously.” He and other sources tell Radar that the database is sometimes referred to by the code name Main Core. One knowledgeable source claims that 8 million Americans are now listed in Main Core as potentially suspect. In the event of a national emergency, these people could be subject to everything from heightened surveillance and tracking to direct questioning and possibly even detention.
Another well-informed source—a former military operative regularly briefed by members of the intelligence community—says this particular program has roots going back at least to the 1980s and was set up with help from the Defense Intelligence Agency. He has been told that the program utilizes software that makes predictive judgments of targets’ behavior and tracks their circle of associations with “social network analysis” and artificial intelligence modeling tools.
“The more data you have on a particular target, the better [the software] can predict what the target will do, where the target will go, who it will turn to for help,” he says. “Main Core is the table of contents for all the illegal information that the U.S. government has [compiled] on specific targets.” An intelligence expert who has been briefed by high-level contacts in the Department of Homeland Security confirms that a database of this sort exists, but adds that “it is less a mega-database than a way to search numerous other agency databases at the same time.”
The following information seems to be fair game for collection without a warrant: the e-mail addresses you send to and receive from, and the subject lines of those messages; the phone numbers you dial, the numbers that dial in to your line, and the durations of the calls; the Internet sites you visit and the keywords in your Web searches; the destinations of the airline tickets you buy; the amounts and locations of your ATM withdrawals; and the goods and services you purchase on credit cards. All of this information is archived on government supercomputers and, according to sources, also fed into the Main Core database.
Main Core also allegedly draws on four smaller databases that, in turn, cull from federal, state, and local “intelligence” reports; print and broadcast media; financial records; “commercial databases”; and unidentified “private sector entities.” Additional information comes from a database known as the Terrorist Identities Datamart Environment, which generates watch lists from the Office of the Director of National Intelligence for use by airlines, law enforcement, and border posts. According to the Washington Post, the Terrorist Identities list has quadrupled in size between 2003 and 2007 to include about 435,000 names. The FBI’s Terrorist Screening Center border crossing list, which listed 755,000 persons as of fall 2007, grows by 200,000 names a year. A former NSA officer tells Radar that the Treasury Department’s Financial Crimes Enforcement Network, using an electronic-funds transfer surveillance program, also contributes data to Main Core, as does a Pentagon program that was created in 2002 to monitor anti-war protestors and environmental activists such as Greenpeace.
Given what we now know about the dragnet, this article is at once less shocking and more so. Much of the information included — phone records and emails — as well as the scale of the known lists — such as the No Fly List — are all known. Others, such as credit card purchases, aren't included in what we know about the dragnet, though we have long suspected they are there. The purported inclusion of peace protestors, in what appears to be a reference to CIFA, is something I'll return to.
Mostly, though, this article takes the generally now-known scope of the dragnet and claims that it serves the function that those Subversives lists from days past did. As such (and assuming it is true in general outline, and I have significant reason to believe it is) it does two things for our understanding. First, it illustrates, as I have tried to in the past, what it means to be exposed to the full complement of NSA's analytical tradecraft. But it also reframes our understanding of what 2 degrees of suspicion from a traffic stop means.
Whether or not this Main Core description is accurate, it invites us to think of this 2-degree dragnet as a nomination process to be on the Subversives list. Unlike in Hoover’s day, when someone had to keep up a deck of index cards, here it’s one interlocking set of data, all coded to serve both as a list and a profiling system for anyone on that list.
To the extent that this dragnet still exists (or has been magnified with the rollout of XKeyscore), and it absolutely does for Muslims 2 degrees from a terrorist suspect, this is what the dragnet is all about: getting you on that list, which serves as a magnet for all the rest of your information to be sucked in and retained, so that if the government ever feels it has to start cracking down on dissidents, it has that list, and a ton of demographic data, ready at hand.
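The 2-degree "contact chaining" this nomination process depends on is easy to sketch in the abstract. The following toy example (the graph, identifiers, and hop limit are all invented for illustration; nothing here reflects actual NSA tooling) shows how a breadth-first search pulls in everyone within two hops of a seed identifier:

```python
# Toy illustration of 2-degree "contact chaining": starting from a seed
# identifier, collect every identifier within two hops in a contact graph.
# The graph and identifiers here are invented for illustration only.
from collections import deque

contacts = [  # each pair is one observed contact (call, email, etc.)
    ("seed", "A"), ("seed", "B"),
    ("A", "C"), ("B", "D"), ("C", "E"),  # E is three hops out
]

# Build an adjacency list from the observed contacts
graph = {}
for x, y in contacts:
    graph.setdefault(x, set()).add(y)
    graph.setdefault(y, set()).add(x)

def within_hops(graph, seed, max_hops=2):
    """Breadth-first search out to max_hops from the seed."""
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand past the hop limit
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return {n for n in seen if n != seed}

print(sorted(within_hops(graph, "seed")))  # ['A', 'B', 'C', 'D'] -- E excluded
```

The point of the sketch is how fast the net widens: a seed with a few dozen contacts, each with a few dozen of their own, sweeps in thousands of people at 2 degrees.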
Update: See this Global Research post on COG programs.
“In these technical investigations, people think they are too good to do the stupid old-school stuff. But I’m like, ‘Well, that stuff still works.’ ”
The NYT got this and many other direct quotes from IRS agent Gary Alford for a complimentary profile of him that ran on Christmas day. According to the story, Alford IDed Ross Ulbricht as a possible suspect for the Dread Pirate Roberts — the operator of the Dark Web site Silk Road — in early June 2013, but it took until September for Alford to get the prosecutor and DEA and FBI Agents working the case to listen to him. The profile claims Alford’s tip was “crucial,” though a typo suggests NYT editors couldn’t decide whether it was the crucial tip or just crucial.
In his case, though, the information he had was the crucial [sic] to solving one of the most vexing criminal cases of the last few years.
On its face, the story (and Alford’s quote) suggests the FBI is so entranced with its hacking ability that it has neglected very, very basic investigative approaches like Google searches. Indeed, if the story is true, it serves as proof that encryption and anonymity don’t thwart FBI investigations as much as Jim Comey would like us to believe when he argues the Bureau needs to back door all our communications.
But I don’t think the story tells the complete truth about the Silk Road investigation. I say that, first of all, because of the timing of Alford’s efforts to get others to further investigate Ulbricht. As noted, the story describes Alford IDing Ulbricht as a potential suspect in early June 2013, after which he put Ulbricht’s name in a DEA database of potential suspects, which presumably should have alerted anyone else on the team that US citizen Ross Ulbricht was a potential suspect in the investigation.
Mr. Alford’s preferred tool was Google. He used the advanced search option to look for material posted within specific date ranges. That brought him, during the last weekend of May 2013, to a chat room posting made just before Silk Road had gone online, in early 2011, by someone with the screen name “altoid.”
“Has anyone seen Silk Road yet?” altoid asked. “It’s kind of like an anonymous Amazon.com.”
The early date of the posting suggested that altoid might have inside knowledge about Silk Road.
During the first weekend of June 2013, Mr. Alford went through everything altoid had written, the online equivalent of sifting through trash cans near the scene of a crime. Mr. Alford eventually turned up a message that altoid had apparently deleted — but that had been preserved in the response of another user.
In that post, altoid asked for some programming help and gave his email address: [email protected]. Doing a Google search for Ross Ulbricht, Mr. Alford found a young man from Texas who, just like Dread Pirate Roberts, admired the free-market economist Ludwig von Mises and the libertarian politician Ron Paul — the first of many striking parallels Mr. Alford discovered that weekend.
When Mr. Alford took his findings to his supervisors and failed to generate any interest, he initially assumed that other agents had already found Mr. Ulbricht and ruled him out.
But he continued accumulating evidence, which emboldened Mr. Alford to put Mr. Ulbricht’s name on the D.E.A. database of potential suspects, next to the aliases altoid and Dread Pirate Roberts.
At the same time, though, Mr. Alford realized that he was not being told by the prosecutors about other significant developments in the case — a reminder, to Mr. Alford, of the lower status that the I.R.S. had in the eyes of other agencies. And when Mr. Alford tried to get more resources to track down Mr. Ulbricht, he wasn’t able to get the surveillance and the subpoenas he wanted.
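The date-bounded Google searches Alford relied on can be reproduced programmatically. The sketch below builds such a query URL using the `tbs=cdr` custom-date-range parameters that Google's own search UI encodes (an assumption based on observed behavior, not an official API); the query string and dates are illustrative, chosen to match the early-2011 window the profile describes:

```python
# Sketch: build a date-bounded Google search URL of the kind Alford used.
# The tbs=cdr parameters mirror how Google's "custom range" search UI
# encodes date limits -- an assumption, not a documented public API.
from urllib.parse import urlencode

def dated_search_url(query, cd_min, cd_max):
    """Return a Google search URL restricted to a date range (M/D/YYYY)."""
    params = {
        "q": query,
        "tbs": f"cdr:1,cd_min:{cd_min},cd_max:{cd_max}",
    }
    return "https://www.google.com/search?" + urlencode(params)

# Look for early mentions of Silk Road from around its early-2011 launch
url = dated_search_url('"silk road" marketplace', "1/1/2011", "3/1/2011")
print(url)
```

Nothing exotic, in other words: exactly the "stupid old-school stuff" Alford describes.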
Alford went to the FBI and DOJ with Ulbricht’s ID in June 2013, but FBI and DOJ refused to issue even subpoenas, much less surveil Ulbricht.
But over the subsequent months, Alford continued to investigate. In “early September” he had a colleague do another search on Ulbricht, which revealed he had been interviewed by Homeland Security in July 2013 for obtaining fake IDs.
In early September, he asked a colleague to run another background check on Mr. Ulbricht, in case he had missed something.
The colleague typed in the name and immediately looked up from her computer: “Hey, there is a case on this guy from July.”
Agents with Homeland Security had seized a package with nine fake IDs at the Canadian border, addressed to Mr. Ulbricht’s apartment in San Francisco. When the agents visited the apartment in mid-July, Mr. Ulbricht answered the door, and the agents identified him as the face on the IDs, without having any idea of his potential links to Silk Road.
When Alford told prosecutor Serrin Turner of the connection (again, this is September 2013), the AUSA finally did his own search in yet another database, the story claims, only to discover Ulbricht lived in the immediate vicinity of where Dread Pirate Roberts was accessing Silk Road. And that led the Feds to bust Ulbricht.
I find the story — the claim that without Alford’s Google searches, FBI did not and would not have IDed Ulbricht — suspect for two reasons.
First, early June is the date that FBI Agent Christopher Tarbell’s declaration showed (but did not claim) FBI first hacked Silk Road. That early June date was itself suspect because Tarbell’s declaration really showed data from as early as February 2013 (which is, incidentally, when Alford was first assigned to the team). In other words, while it still seems likely FBI was always lying about when it hacked into Silk Road, the coincidence between when Alford says he went to DOJ and the FBI with Ulbricht’s ID and when the evidence they were willing to share with the defense claimed to have first gotten a lead on Silk Road is of interest. All the more so given that the FBI claimed it could legally hack the server because it did not yet know the server was run by an American, and so it treated the Iceland-based server as a foreigner for surveillance purposes.
One thing that means is that DOJ may not have wanted to file paperwork to surveil Ulbricht because admitting they had probable cause to suspect an American was running Silk Road would make their hack illegal (and/or would have required FBI to start treating Ulbricht as the primary target of the investigation; it seems FBI may have been trying to do something else with this investigation). By delaying the time when DOJ took notice of the fact that Silk Road was run by an American, they could continue to squat on Silk Road without explaining to a judge what they were doing there.
The other reason I find this so interesting is because several of the actions to which corrupt DEA agent Carl Force pled guilty — selling fake IDs and providing inside information — took place between June and September 2013, during the precise period when everyone was ignoring Alford’s evidence and the fact that he had entered Ulbricht’s name as a possible alias for the Dread Pirate Roberts into a DEA database. Of particular note, Force’s guilty plea only admitted to selling the fake IDs for 400 bitcoin, and provided comparatively few details about that action, but the original complaint against Force explained he had sold the IDs for 800 bitcoin but refunded Ulbricht 400 bitcoin because “the deal for the fraudulent identification documents allegedly fell through” [emphasis mine].
Were those fake IDs that Force sold Ulbricht the ones seized by Homeland Security and investigated in July 2013? Did the complaint say the deal “allegedly” fell through because it didn’t so much fall through as get thwarted? Did something — perhaps actions by Force — prevent other team members from tying that seizure to Ulbricht? Or did everyone know about it, but pretend not to, until Alford made them pay attention (perhaps with a communications trail that other Feds couldn’t suppress)? Was the ID sale part of the investigation, meant to ID Ulbricht’s identity and location, but Force covered it up?
In other words, given the record of Force’s actions, it seems more likely that at least some people on the investigative team already knew what Alford found in a Google search, but for both investigative (the illegal hack that FBI might have wanted to extend for other investigative reasons) and criminal (the money Force was making) reasons, no one wanted to admit that fact.
Now, I’m not questioning the truth of what Alford told the NYT. But even his story (which is corroborated by people “briefed on the investigation,” but only one person who actually attended any of the meetings for it; most of those people are silent about Alford’s claims) suggests there may be other explanations why no one acted on his tip, particularly given the fact that he appears to have been unable to do database searches himself and that they refused to do further investigation into Ulbricht. (I also wonder whether Alford’s role explains why the government had the IRS in San Francisco investigate Force and corrupt Secret Service Agent Shaun Bridges, rather than New York, where agents would have known these details.)
Indeed, I actually think this complimentary profile might have been a way for Alford to expose further cover-ups in the Silk Road investigation without seeming to do so for any but self-interested reasons. Bridges was sentenced on December 7. Ulbricht was originally supposed to have submitted his opening appellate brief — focusing on Fourth Amendment issues that may be implicated by these details — on December 11, but on December 2, the court extended that deadline until January 12.
I don’t know whether Ulbricht’s defense learned these details. I’m admittedly not familiar enough with the public record to know, though given the emphasis on Tarbell’s declaration as the explanation for how they discovered Ulbricht and the NYT’s assertion that Alford’s role and the delay were “largely left out of the documents and proceedings that led to Mr. Ulbricht’s conviction and life sentence this year,” I don’t think it is public. But if they didn’t, then the fact that the investigative team went out of their way to avoid confirming Ulbricht’s readily accessible identity until at least three and probably seven months after they started hacking Silk Road, even while key team members were stealing money from the investigation, might provide important new details about the government’s actions.
And if Alford gets delayed credit for doing simple Google searches as a result, all the better!
You may have heard that Juniper Networks announced what amounts to a backdoor in its virtual private networks products. Here’s Kim Zetter’s accessible intro of what security researchers have learned so far. And here’s some technical background from Matthew Green.
As Zetter summarizes, the short story is that someone used weaknesses allegedly encouraged by the NSA to backdoor the security product protecting a lot of American businesses.
They did this by exploiting weaknesses the NSA allegedly placed in a government-approved encryption algorithm known as Dual_EC, a pseudo-random number generator that Juniper uses to encrypt traffic passing through the VPN in its NetScreen firewalls. But in addition to these inherent weaknesses, the attackers also relied on a mistake Juniper apparently made in configuring the VPN encryption scheme in its NetScreen devices, according to Weinmann and other cryptographers who examined the issue. This made it possible for the culprits to pull off their attack.
As Green describes, the key events probably happened at least as early as 2007 and 2012 (contrary to the presumption of surveillance hawk Stewart Baker looking to scapegoat those calling for more security). Which means this can’t be a response to the Snowden document strongly suggesting the NSA had pushed those weaknesses in Dual_EC.
I find that particularly interesting, because it suggests whoever did this either used public discussions about the weakness of Dual_EC, dating to 2007, to identify and exploit this weakness, or figured out what (it is presumed) the NSA was up to. That suggests two likely culprits for what has been assumed to be a state actor behind this: Israel (because it knows so much about NSA from having partnered on things like StuxNet) or Russia (which was getting records on the FiveEyes’ SIGINT activities from its Canadian spy, Jeffrey Delisle). The UK would be another obvious guess, except an Intercept article describing how NSA helped UK backdoor Juniper suggests they used another method.
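The structure of the Dual_EC weakness is worth sketching, because it explains why knowing "what the NSA was up to" is enough to exploit it. The real generator works over an elliptic curve; the toy below substitutes modular multiplication, which preserves the trapdoor structure while remaining readable (all constants are invented, and this is a deliberately insecure simplification, not the actual algorithm). The generator's two public constants P and Q are secretly related, and whoever holds that relation can recover the internal state from a single output:

```python
# Deliberately insecure toy analogue of the Dual_EC trapdoor, using
# modular multiplication in place of elliptic-curve point multiplication.
# All constants are invented; this sketches the *structure* only.
p = 2**61 - 1          # a prime modulus, standing in for the curve
P = 123456789          # first public generator constant
e = 987654321          # the designer's secret trapdoor relation
Q = (e * P) % p        # second public constant: secretly Q = e*P

def step(state):
    """One generator step: emit one output and advance the state."""
    out = (state * Q) % p       # what the generator reveals
    nxt = (state * P) % p       # what it is supposed to keep secret
    return nxt, out

# The attacker who knows e can undo the Q multiplication:
# out = s*Q = s*e*P, so out * e^-1 = s*P -- which is the *next* state.
e_inv = pow(e, -1, p)           # modular inverse of the trapdoor

state = 42424242                # unknown to the attacker
next_state, output = step(state)
recovered = (output * e_inv) % p
print(recovered == next_state)  # True: one output betrays all future output
```

Once the internal state is known, every subsequent "random" value (the keys protecting the VPN traffic) is predictable. That is why whoever swapped in their own Q on Juniper's devices inherited the whole backdoor.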
Which leads me back to an interesting change I noted between CISA — the bill passed by the Senate back in October — and OmniCISA — the version passed last week as part of the omnibus funding bill. OmniCISA still required the Intelligence Community to provide a report on the most dangerous hacking threats, especially state actors, to the Intelligence Committees. But it eliminated a report for the Foreign Relations Committees on the same topic. I joked at the time that that was probably to protect Israel, because no one wants to admit that Israel spies and has greater ability to do so by hacking than other nation-states, especially because it surely learns our methods by partnering with us to hack Iran.
Whoever hacked Juniper, the whole incident offers a remarkable lesson in the dangers of backdoors. Even as FBI demands a backdoor into Apple’s products, it is investigating who used a prior US-sponsored backdoor to do their own spying.
NSA propagandist John Schindler has used the San Bernardino attack as an opportunity to blame Edward Snowden for the spy world’s diminished effectiveness, again.
Perhaps the most interesting detail in his column is his claim that 80% of thwarted attacks come from an NSA SIGINT hit.
Something like eighty percent of disrupted terrorism cases in the United States begin with a SIGINT “hit” by NSA.
That’s mighty curious, given that defendants in these cases aren’t getting notice of such SIGINT hits, as required by law, as ACLU’s Patrick Toomey reminded us just last week. Indeed, the claim is wholly inconsistent with the claims FBI made when it tried to argue the dragnet was effective after the Snowden leaks, and inconsistent with PCLOB’s finding that the FBI generally develops such intelligence on its own. Whatever. I’m sure the discrepancy is one Schindler will be able to explain to defense attorneys when they subpoena him to explain the claim.
Then there’s Schindler’s entirely illogical claim that the shut-down of the phone dragnet just days before the attack might have helped to prevent it.
The recent Congressionally-mandated halt on NSA holding phone call information, so-called metadata, has harmed counterterrorism, though to what extent remains unclear. FBI Director James Comey has stated, “We don’t know yet” whether the curtailing of NSA’s metadata program, which went into effect just days before the San Bernardino attack, would have made a difference. Anti-intelligence activists have predictably said it’s irrelevant, while some on the Right have made opposite claims. The latter have overstated their case but are closer to the truth.
As Mike Lee patiently got Jim Comey to admit last week, if the Section 215 phone dragnet (as opposed to the EO 12333 phone dragnet, which remains in place) was going to prevent this attack, it would have.
Schindler then made an error that obscures one of the many ways the new phone dragnet will be better suited to counterterrorism. Echoing a right wing complaint that the government doesn’t currently review social media accounts as part of the visa process, he claimed “Tashfeen Malik’s social media writings [supporting jihad] could have been easily found.” Yet at least according to ABC, it would not have been so easy. “Officials said that because Malik used a pseudonym in her online messages, it is not clear that her support for terror groups would have become known even if the U.S. conducted a full review of her online traffic.” [See update.] Indeed, authorities found the Facebook post where Malik claimed allegiance to ISIS by correlating her known email with her then unknown alias on Facebook. NSA’s new phone program, because it asks providers for “connections” as well as “contacts,” is far more likely to identify multiple identities that get linked by providers than the old program (though it is less likely to correlate burner identities via bulk analysis).
Really, though, whether or not the dragnet could have prevented San Bernardino, which, as far as is evident, was carried out with no international coordination, is a fairly meaningless measure of NSA’s spying. To suggest you’re going to get useful SIGINT about a couple who, after all, lived together and therefore didn’t need to use electronic communications devices to plot is silliness. A number of recent terrorist attacks have been planned by family members, including one cell of the Paris attack and the Charlie Hebdo attack, and you’re far less likely to get SIGINT from people who live together.
Which brings me to the most amazing part of Schindler’s piece. He argues that Americans have developed a sense of security in recent years (he of course ignores right wing terrorism and other gun violence) because “the NSA-FBI combination had a near-perfect track record of cutting short major jihadist attacks on Americans at home since late 2001.” Here’s how he makes that claim.
Making matters worse, most Americans felt reasonably safe from the threat of domestic jihadism in recent years, despite repeated warnings about the rise of the Islamic State and terrible attacks like the recent mass-casualty atrocity in Paris. Although the November 2009 Fort Hood massacre, perpetrated by Army Major Nidal Hasan, killed thirteen, it happened within the confines of a military base and did not involve the general public.
Two months before that, authorities rolled up a major jihadist cell in the New York City area that was plotting complex attacks that would have rivalled the 2005 London 7/7 atrocity in scope and lethality. That plot was backed by Al-Qa’ida Central in Pakistan and might have changed the debate on terrorism in the United States, but it was happily halted before execution – “left of boom” as counterterrorism professionals put it.
Jumping from the 2009 attacks (and skipping the 2009 Undiebomb and 2010 Faisal Shahzad attempts) to the Paris attack allows him to suggest any failure to find recent plots derives from Snowden’s leaks, which first started in June 2013.
However, the effectiveness of the NSA-FBI counterterrorism team has begun to erode in the last couple years, thanks in no small part to the work of such journalists-cum-activists. Since June 2013, when the former NSA IT contactor [sic] Edward Snowden defected to Moscow, leaking the biggest trove of classified material in all intelligence history, American SIGINT has been subjected to unprecedented criticism and scrutiny.
There is, of course, one enormous thing missing from Schindler’s narrative of NSA perfection: the Boston Marathon attack, committed months before the first Snowden disclosures became public. Indeed, even though the NSA was bizarrely not included in a post-Marathon Inspector General review of how the brothers got missed, it turns out NSA did have intelligence on them (Tamerlan Tsarnaev was in international contact with known extremists and also downloaded AQAP’s Inspire magazine repeatedly). Only, that intelligence got missed, even with the multiple warnings from FSB about Tamerlan.
Perhaps Schindler thinks that Snowden retroactively caused the NSA to overlook the intelligence on Tamerlan Tsarnaev? Perhaps Schindler doesn’t consider an attack that killed 3 and injured 260 people a “major jihadist attack”?
It’s very confusing, because I thought the Boston attack was a major terrorist attack, but I guess right wing propagandists trying to score points out of tragedy can ignore such things if it will spoil their tale of perfection.
Update: LAT reports that Malik’s Facebook posts were also private, on top of being written under a pseudonym. Oh, and also in Urdu, a language in which the NSA has too few translators. The NSA (but definitely not the State Department) does have the ability to 1) correlate IDs to identify pseudonyms, 2) require providers to turn over private messages — they could use PRISM — and 3) translate Urdu to English. But this would be very resource intensive, and as soon as State made it a visa requirement, anyone trying to evade it could probably thwart the correlation process.
the names, addresses, lengths of service and electronic communication transaction records [ECTR], to include existing transaction/activity logs and all e-mail header information (not to include message content and/or subject fields) for [the target]
The unsealing of the NSL confirmed what has been public since 2010: that the FBI used to (and may still) demand ECTRs from Internet companies using NSLs.
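To make concrete what the NSL language quoted above covers, the sketch below parses a raw email with Python's standard `email` module and keeps only the header metadata, dropping the body and the Subject line per the "not to include message content and/or subject fields" carve-out. The sample message is invented, and this is my illustration of the category of data, not any actual FBI process:

```python
# Sketch of the data an ECTR-style demand covers: email header metadata,
# minus message content and the subject field. The message is invented.
from email import message_from_string

raw = """\
From: alice@example.com
To: bob@example.com
Subject: dinner plans
Date: Mon, 21 Dec 2015 10:00:00 -0500
Message-ID: <abc123@example.com>

Let's meet at 7.
"""

msg = message_from_string(raw)

# Keep the transactional headers; drop the subject field (and never
# touch msg.get_payload(), i.e. the content)
ectr = {k: v for k, v in msg.items() if k.lower() != "subject"}
print(sorted(ectr))  # ['Date', 'From', 'Message-ID', 'To']
```

Even stripped of content, that residue — who wrote whom, when, from where — is exactly the kind of transactional record that maps a person's associations.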
On December 1, House Judiciary Committee held a hearing on a bill reforming ECPA that has over 300 co-sponsors in the House; on September 9, Senate Judiciary Committee had its own hearing, though some witnesses and members at it generally supported expanded access to stored records, as opposed to the new restrictions embraced by HJC.
Since then, a number of people are arguing FBI should be able to access ECTRs again, as they did in 2004, with no oversight. One of two changes to the version of Senator Tom Cotton’s surveillance bill introduced on December 2 over the version introduced on November 17 was the addition of ECTRs to NSLs (the other was making FAA permanent).
And yesterday, Chuck Grassley (who of course could shape any ECPA reform that went through SJC) invited Jim Comey to ask for ECTR authority to be added to NSLs.
Grassley: Are there any other tools that would help the FBI identify and monitor terrorists online? More specifically, can you explain what Electronic Communications Transactions Record [sic], or ECTR, I think that’s referred to, as acronym, are and how Congress accidentally limited the FBI’s ability to obtain them, with a, obtain them with a drafting error. Would fixing this problem be helpful for your counterterrorism investigations?
Comey: It’d be enormously helpful. There is essentially a typo in the law that was passed a number of years ago that requires us to get records, ordinary transaction records, that we can get in most contexts with a non-court order, because it doesn’t involve content of any kind, to go to the FISA Court to get a court order to get these records. Nobody intended that. Nobody that I’ve heard thinks that that’s necessary. It would save us a tremendous amount of work hours if we could fix that, without any compromise to anyone’s civil liberties or civil rights, everybody who has stared at this has said, “that’s actually a mistake, we should fix that.”
That’s actually an unmitigated load of bullshit on Comey’s part, and he should be ashamed to make these claims.
As a reminder, the “typo” at issue is not in fact a typo, but a 2008 interpretation from DOJ’s Office of Legal Counsel, which judged that FBI could only get what the law said it could get with NSLs. After that happened — a DOJ IG Report laid out in detail last year — a number (but not all) tech companies started refusing to comply with NSLs requesting ECTRs, starting in 2009.
The decision of these [redacted] Internet companies to discontinue producing electronic communication transactional records in response to NSLs followed public release of a legal opinion issued by the Department’s Office of Legal Counsel (OLC) regarding the application of ECPA Section 2709 to various types of information. The FBI General Counsel sought guidance from the OLC on, among other things, whether the four types of information listed in subsection (b) of Section 2709 — the subscriber’s name, address, length of service, and local and long distance toll billing records — are exhaustive or merely illustrative of the information that the FBI may request in an NSL. In a November 2008 opinion, the OLC concluded that the records identified in Section 2709(b) constitute the exclusive list of records that may be obtained through an ECPA NSL.
Although the OLC opinion did not focus on electronic communication transaction records specifically, according to the FBI, [redacted] took a legal position based on the opinion that if the records identified in Section 2709(b) constitute the exclusive list of records that may be obtained through an ECPA NSL, then the FBI does not have the authority to compel the production of electronic communication transactional records because that term does not appear in subsection (b).
Even before that, in 2007, FBI had developed a new definition of what it could get using NSLs. Then, in 2010, the Administration proposed adding ECTRs to NSLs. Contrary to Comey’s claim, plenty of people objected to such an addition, as this 2010 Julian Sanchez column, which he could re-release today verbatim, makes clear.
They’re calling it a tweak — a “technical clarification” — but make no mistake: The Obama administration and the FBI’s demand that Congress approve a huge expansion of their authority to obtain the sensitive Internet records of American citizens without a judge’s approval is a brazen attack on civil liberties.
Congress would be wise to specify in greater detail just what are the online equivalents of “toll billing records.” But a blanket power to demand “transactional information” without a court order would plainly expose a vast range of far more detailed and sensitive information than those old toll records ever provided.
Consider that the definition of “electronic communications service providers” doesn’t just include ISPs and phone companies like Verizon or Comcast. It covers a huge range of online services, from search engines and Webmail hosts like Google, to social-networking and dating sites like Facebook and Match.com to news and activism sites like RedState and Daily Kos to online vendors like Amazon and Ebay, and possibly even cafes like Starbucks that provide WiFi access to customers. And “transactional records” potentially covers a far broader range of data than logs of e-mail addresses or websites visited, arguably extending to highly granular records of the data packets sent and received by individual users.
As the Electronic Frontier Foundation has argued, such broad authority would not only raise enormous privacy concerns but have profound implications for First Amendment speech and association interests. Consider, for instance, the implications of a request for logs revealing every visitor to a political site such as Indymedia. The constitutionally protected right to anonymous speech would be gutted for all but the most technically savvy users if chat-forum participants and blog authors could be identified at the discretion of the FBI, without the involvement of a judge.
That legislative effort didn’t go anywhere, so instead (the IG report explained) FBI started to use Section 215 orders to obtain that data. That constituted a majority of 215 orders in 2010 and 2011 (and probably has since, creating the spike in numbers since that year, as noted in the table above).
Supervisors in the Operations Section of NSD, which submits Section 215 applications to the FISA Court, told us that the majority of Section 215 applications submitted to the FISA Court [redacted] in 2010 and [redacted] in 2011 — concerned requests for electronic communication transaction records.
The NSD supervisors told us that at first they intended the [3.5 lines redacted] They told us that when a legislative change no longer appeared imminent and [3 lines redacted] and by taking steps to better streamline the application process.
But the other reason Comey’s claim that getting this from NSLs would not pose “any compromise to anyone’s civil liberties or civil rights” is bullshit is that the migration of ECTR requests to Section 215 orders also appears to have led the FISA Court to finally force FBI to do what the 2006 reauthorization of the PATRIOT Act required it to do: minimize the data it obtains under 215 orders to protect Americans’ privacy.
By all appearances, the rubber-stamp FISC believed these ECTR requests represented a very significant compromise to people’s civil liberties and civil rights and so finally forced FBI to follow the law requiring them to minimize the data.
Which is probably what this apparently redoubled effort to let FBI obtain the online lives of Americans (remember, this must be US persons, otherwise the FBI could use PRISM to obtain the data) using secret requests that get no oversight is about: an attempt to bypass whatever minimization procedures — and the oversight that comes with them — the FISC imposed.
And remember: with the passage of USA Freedom Act, the FBI doesn’t have to wait to get these records (though they are probably prospective, just like the old phone dragnet was), they can obtain an emergency order and then fill out the paperwork after the fact.
For some reason — either the disclosure in Merrill’s suit that FBI believed it could do this (which has been public since 2010 or earlier), or the reality that ECPA will finally get reformed — the Intelligence Community is again asserting the bogus claims it tried to make in 2010. Yet there’s even more evidence than there was then that FBI wants to conduct intrusive spying without real oversight.
I’ve complained about Dianne Feinstein’s inconsistency on cybersecurity, specifically as it relates to Sony, before. The week before the attack on Paris, cybersecurity was the biggest threat, according to her. And Sony was one of the top targets, both of criminal identity theft and — if you believe the Administration — nation-states like North Korea. If you believe that, you believe that Sony should have the ability to use encryption to protect its business and users. But, in the wake of Paris and Belgian Interior Minister Jan Jambon’s claim that terrorists are using Playstations, Feinstein changed her tune, arguing serial hacking target Sony should not be able to encrypt its systems to protect users.
Her concerns took a bizarre new twist in an FBI oversight hearing today. Now, she’s concerned that if a predator decides to target her grandkids while they’re playing on a Playstation, that conversation will be encrypted.
I have concern about a Playstation which my grandchildren might use and a predator getting on the other end, talking to them, and it’s all encrypted.
Someone needs to explain to DiFi that her grandkids are probably at greater risk from predators hacking Sony to get personal information about them, then using that to abduct them or whatever.
Sony’s the perfect example of how security hawks like Feinstein need to choose: either her grandkids face risks because Sony doesn’t encrypt its systems, or they do because it does.
The former risk is likely the much greater risk.
Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.
It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:
Both the domestic Internet and phone dragnets limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in its pre-2009-shutdown incarnation), both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required someone to certify that was why information from the dragnet was being disseminated.
Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.
But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.
Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.
The primary new responsibility is the requirement:
- to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]
Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333 collected data and applied the protections for the latter authorities to both (particularly with regards to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.
That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, they made access rules to SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.
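The contact chaining at issue here — starting from a seed identifier, walking communications metadata out a hop at a time, gated only on an analyst entering a foreign intelligence justification — can be sketched roughly as follows. All identifiers and the justification string here are invented for illustration; this is a toy model of the technique, not NSA's implementation.

```python
from collections import deque

# Hypothetical communications metadata: identifier -> identifiers it contacted.
CONTACTS = {
    "seed@example.com": ["a@example.com", "b@example.com"],
    "a@example.com": ["c@example.com"],
    "b@example.com": ["a@example.com", "d@example.com"],
}

def contact_chain(seed, max_hops, justification):
    """Breadth-first walk of the contact graph, up to max_hops from the seed.

    As described in the rollout, the only gate on starting a chain is that
    the analyst enter a foreign intelligence (FI) justification; here we
    simply insist one is supplied.
    """
    if not justification:
        raise ValueError("a foreign intelligence justification is required")
    seen = {seed: 0}          # identifier -> hop distance from seed
    queue = deque([seed])
    while queue:
        ident = queue.popleft()
        if seen[ident] >= max_hops:
            continue
        for contact in CONTACTS.get(ident, []):
            if contact not in seen:
                seen[contact] = seen[ident] + 1
                queue.append(contact)
    return seen

chain = contact_chain("seed@example.com", 2, "FI: hypothetical justification")
```

Note what the sketch makes concrete: nothing in the chaining step itself distinguishes counterterrorism from counterproliferation or counter-narcotics — the purpose limitation lives entirely in that justification string.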
So one thing that happened with the Internet dragnet is that it had initial limits on purpose and who could access it. Along the way, NSA cheated those limits open, by arguing that people in different functional areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all NSA’s functional areas. A key part of the FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think they might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, they had not done so before.
The other thing that happened is that, starting in 2008, the government started putting FBI in a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.
In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.
FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.
Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.
In this same passage, the definition of metadata is curious.
For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.
I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).
Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.
So one big improvement of SPCMA (and, to a lesser degree, 702) over the old domestic Internet dragnet — an improvement from a dragnet-loving perspective, of course — is that the government can use it for any foreign intelligence purpose.
At several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regards to emergency searches), but it is, ostensibly at least, limited to counterterrorism.
One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.
Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of that Internet communication from providers, but also the kinds of things they would obtain with a subpoena (and probably far more). As I’ve shown, these are the kinds of things you’d almost certainly get from Google (because that’s what you get with a few subpoenas) under 702 that you’d have to correlate using algorithms under the old Internet dragnet.
Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.
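The correlation problem described above — recognizing that an email address, a phone number, and an IMSI all belong to one identity because provider records link them pairwise — is essentially finding connected components over identifiers. A minimal sketch, using union-find and invented identifiers (nothing here is from an actual record):

```python
# Hypothetical provider records, each linking two identifiers the
# provider says belong to the same account.
RECORDS = [
    ("user@gmail.example", "+1-555-0100"),
    ("+1-555-0100", "IMSI:310150123456789"),
    ("other@mail.example", "cookie:abc123"),
]

def correlate(records):
    """Union-find over identifier pairs: returns identifier -> group root."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in records:
        parent[find(a)] = find(b)     # merge the two groups
    return {ident: find(ident) for ident in parent}

groups = correlate(RECORDS)
# The email, phone number, and IMSI end up in one group; the other
# email and the cookie end up in a separate group.
```

The point of the sketch is the asymmetry the post describes: when a provider hands back linked identifiers directly, correlation is a trivial join like this; under the old dragnet, each of those links had to be inferred algorithmically from behavior, which good operational security can defeat.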
While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.
The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.
In addition, it’s clear that at least some of the experimental implementations on geolocation incorporated SPCMA data.
I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.
Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.
1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]
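The analytic described in the quoted passage — find other phones seen at the same network location (e.g., the same VLR) within a one-hour window of the target — reduces to a join on (location, time window). A toy sketch with invented events; the real CHALKFUN analytic surely differs in scale and detail:

```python
from datetime import datetime, timedelta

# Hypothetical location events: (phone_id, network location, timestamp).
EVENTS = [
    ("target",  "VLR-1", datetime(2015, 11, 1, 9, 0)),
    ("phone-A", "VLR-1", datetime(2015, 11, 1, 9, 40)),
    ("phone-B", "VLR-1", datetime(2015, 11, 1, 14, 0)),
    ("target",  "VLR-2", datetime(2015, 11, 1, 12, 0)),
    ("phone-B", "VLR-2", datetime(2015, 11, 1, 12, 30)),
]

def co_travelers(events, target, window=timedelta(hours=1)):
    """Phones seen at the same network location within `window` of the target."""
    target_sightings = [(loc, t) for pid, loc, t in events if pid == target]
    hits = set()
    for pid, loc, t in events:
        if pid == target:
            continue
        for tloc, tt in target_sightings:
            if loc == tloc and abs(t - tt) <= window:
                hits.add(pid)
    return hits
```

In this invented data, phone-A matches the target at VLR-1 and phone-B matches at VLR-2 (its VLR-1 sighting falls outside the hour window). The SPCMA footnote is what makes this notable: the same computation runs without regard to the nationality or location of the phones' owners.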
Now, aside from what this says about the dragnet database generally (because this makes it clear there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes it clear there is a way to geolocate US persons — because the entire point of SPCMA is to be able to analyze data including US persons, without even any limits on their location (meaning they could be in the US).
That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still permits) NSA to track who is traveling with whom using location data.
Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.
Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.
All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its final form (which is also true of the phone dragnet now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.
Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, but also the inclusion of all Americans.