Last week, Charlie Savage obtained additional disclosures from three IG reports he liberated last year: the 2007 NSL report, the 2009 Stellar Wind report, and a 2012 DOJ IG Section 702 report. With the NSL report, DOJ disclosed numbers that I believe were otherwise public or intuitable. With the Stellar Wind report, DOJ disclosed additional information on how the Department was dodging its obligation to notify defendants of the surveillance behind their cases; I hope to return to this issue.
By far the most important new disclosure, however, pertains to the FBI’s reporting on reports identifying US persons under Section 702 (see pages 17-18, highlighted by Savage here). Introducing the Executive Summary description of whether FBI was fulfilling reporting requirements, the report explained that the IG had adopted a fairly strict understanding of what constituted a US person dissemination.
Although the key passage is redacted (and the report body on this topic is almost entirely redacted), it’s clear that the IG considered that reports identifying a US person via something other than his or her name, without sharing the content of communications, constituted reports “with respect to” 702 acquisitions.
The FBI had been arguing about these definitions internally and with DOJ’s IG since at least 2006, when it failed to comply with the legally mandated requirement for new minimization procedures to go with Section 215. One way to understand an early version of the debate is whether, by retaining call records that don’t include a name but do include phone numbers that clearly belong to a specific person, the FBI was retaining US person identifying information. For obvious reasons — because if its minimization procedures treated a phone number as US person identifying information, it couldn’t retain 5 years of phone records — the FBI didn’t want to treat a person’s unique identifiers as person identifying information. The minimization procedures adopted in 2013 must mirror this problem, given that FBI and NSA kept those records for another two years.
It appears the IG found the FBI’s reporting lacking in several ways: it did not include Section 702 related reports that identify a US person if that person (by which I assume the IG means that person’s identity) was identified via other means, and the IG argued FBI should also count reports if the US person information in them was publicly available. In addition, the IG considered a metadata reference to also constitute a US person reference.
This suggests the FBI was, until 2012 at least, not including in its reports to Congress the sharing of an email, or even a report that identified the person tied to an email, if it found that email, but not that person’s identity, via Section 702. Imagine, for example, if FBI didn’t consider my emptywheel email personally identifying of me, emptywheel, until such time as it publicly tied that email address to me. It would be bullshit, but that seems to be the kind of game FBI was and probably still is playing.
I’m particularly interested in this because of a speech Dianne Feinstein made in December 2012 — presumably after FBI had made whatever response they might make to this IG report — that named a number of people as if they had been IDed using Section 702. But when several of them demanded notice of Section 702 surveillance, none of them got it, and Feinstein and the Senate’s lawyer insisted the defendants could not make anything of her insinuation that Section 702 had discovered them.
In other words, the two standards at issue here — the minimization procedures standard and the notice one — may be implicated in DOJ’s opaque notice guidelines. We don’t know whether they are or not, of course, but if they are, it would suggest that DOJ is limiting 702 notices based on what kinds of identifiers 702 produces.
1/13: Tweaked this post for clarity. In addition, note these letters from the Brennan Center which relate to this issue.
Back when I reviewed the goodies the House Intelligence Committee had given James Clapper in this year’s Intelligence Authorization, I noted the bill eliminated this report on potential conflicts in outside employment (see clause u).
The Director of National Intelligence shall annually submit to the congressional intelligence committees a report describing all outside employment for officers and employees of elements of the intelligence community that was authorized by the head of an element of the intelligence community during the preceding calendar year.
That change — which will make it harder for people to track the kinds of conflicts of interest a number of top NSA officials recently got caught with — survived in the Omnibus into which the Intelligence Authorization got integrated. Which probably means we’ll be seeing more spooks getting paid by contractors on the side.
Yesterday, WaPo described a reporting requirement that had been in the Senate Intelligence Authorization, but got watered down in the Omnibus: a report on promotions revealing whether those being promoted were “unfit or unqualified.”
Under a provision drafted by the Senate Intelligence Committee this year, intelligence agencies would have been required to regularly provide names of those being promoted to top positions and disclose any “significant and credible information to suggest that the individual is unfit or unqualified.”
More recently, a top CIA manager who had been removed from his job for abusive treatment of subordinates was reinstated this year as deputy chief for counterintelligence at the Counterterrorism Center.
U.S. officials offered multiple explanations for Clapper’s objections. Several said that his main concern was the bureaucratic workload that would be generated by legislation requiring so much detail about potentially hundreds of senior employees across the U.S. intelligence community.
But others said that U.S. spy chiefs chafed at the idea of subjecting their top officials to such congressional scrutiny and went so far as to warn that candidates for certain jobs would probably withdraw.
Lawmakers were told that “some intelligence personnel would be reluctant to seek promotions out of concern that information about them would be presented to the Hill,” said a U.S. official involved in the discussions.
So he balked and Congress watered down the requirement. Here’s what remains of the measure:
(a) DIRECTIVE REQUIRED.—The Director of National Intelligence shall issue a directive containing a written policy for the timely notification to the congressional intelligence committees of the identities of individuals occupying senior level positions within the intelligence community.
The fine print on the requirement probably provides ways for Clapper to squish out of it in many cases by invoking covert status (which, in turn, likely means CIA will expand its current practice of pretending top managers are covert to protect them from scrutiny) or otherwise claiming senior people are not sufficiently senior to require notice.
So rather than preventing the CIA and other agencies from promoting abusive incompetents, the measure will likely lead to them being hidden further behind CIA’s secrecy.
Which is interesting, especially given another Intel Authorization measure that survived in the Omnibus — one I earlier described as an effort to make sure spooks and those in sensitive positions aren’t joining EFF or similar organizations.
The committee description of this section explains it will require DNI to do more checks on spooks (actually spooks and “sensitive” positions, which isn’t full clearance).
Section 306 directs the Director of National Intelligence (DNI) to develop and implement a plan for eliminating the backlog of overdue periodic investigations, and further requires the DNI to direct each agency to implement a program to provide enhanced security review to individuals determined eligible for access to classified information or eligible to hold a sensitive position.
These enhanced personnel security programs will integrate information relevant and appropriate for determining an individual’s suitability for access to classified information; be conducted at least 2 times every 5 years; and commence not later than 5 years after the date of enactment of the Fiscal Year 2016 Intelligence Authorization Act, or the elimination of the backlog of overdue periodic investigations, whichever occurs first.
Among the things ODNI will use to investigate its spooks are social media, commercial data sources, and credit reports. Among the things it is supposed to track is “change in ideology.” I’m guessing they’ll do special checks for EFF stickers and hoodies, which Snowden is known to have worn without much notice from NSA.
Remember, one complaint Clapper had about the gutted requirement he identify the abusive incompetents being promoted at intelligence agencies is the added bureaucracy of tracking just those being promoted in management ranks. But he apparently had no problem with a requirement that ODNI track the social media of everyone at all agencies to make sure they’re going to keep secrets and don’t harbor any “ideology” changes like support for the Bill of Rights.
That is, Clapper’s perfectly willing to expand his bureaucracy to look for leakers, but not to weed out the dangerously incompetent people ordering potential leakers around.
Apparently, to James Clapper, people who might leak about those unfit for management are more dangerous insider threats than having entire centers run by people unfit for management.
I’ve complained about Dianne Feinstein’s inconsistency on cybersecurity, specifically as it relates to Sony, before. The week before the attack on Paris, cybersecurity was the biggest threat, according to her. And Sony was one of the top targets, both of criminal identity theft and — if you believe the Administration — nation-states like North Korea. If you believe that, you believe that Sony should have the ability to use encryption to protect its business and users. But, in the wake of Paris and Belgian Interior Minister Jan Jambon’s claim that terrorists are using Playstations, Feinstein changed her tune, arguing serial hacking target Sony should not be able to encrypt its systems to protect users.
Her concerns took a bizarre new twist in an FBI oversight hearing today. Now, she’s concerned that if a predator decides to target her grandkids while they’re playing on a Playstation, that will be encrypted.
I have concern about a Playstation which my grandchildren might use and a predator getting on the other end, talking to them, and it’s all encrypted.
Someone needs to explain to DiFi that her grandkids are probably at greater risk from predators hacking Sony to get personal information about them to then use that to abduct or whatever them.
Sony’s the perfect example of how security hawks like Feinstein need to choose: either her grandkids face risks because Sony doesn’t encrypt its systems, or they do because it does.
The former risk is likely the much greater risk.
Over at Salon, I’ve got a piece addressing the things we call terror in this country that mostly argues, “In the wake of the Planned Parenthood attack, both the right and the left should redouble our commitment to distinguishing speech from murder.” But I also start by laying out how various mass killings get labeled as terrorism.
Commentary on the deadly mass shootings over the past week — last Friday’s at a Planned Parenthood in Colorado, and yesterday’s in San Bernardino, Calif. — has thus far focused on whether the attacks were terroristic in nature.
Such a designation would suggest violence in support of political ends but also point to a set of potential criminal charges. In both cases, there were at least initial reports the perpetrators tried to set off an explosive device — in the Planned Parenthood shooter’s case a propane tank (though since initial reports, police have said nothing about whether this was his intent), in the alleged San Bernardino attackers’ case, several pipe bombs. If authorities do confirm these were bombs, both cases might be treated legally as domestic terrorism. Because of an asymmetry in our laws on terrorism and our collection of online communications, if the San Bernardino shooters can be shown to have been inspired by a foreign terrorist organization, like ISIS — as now appears to be the case — their attack would be treated as terrorism even without a bomb.
At Lawfare, former NSA attorney Susan Hennessey has a piece outlining at length much the same thing. If you want a detailed legal treatment of what I summarized in that Salon paragraph, written by an actual lawyer, hers is a decent piece to read.
But her piece is far more interesting as an artifact of a certain type of thinking, complete with some really important blind spots about how the law actually gets implemented. Those blind spots let Hennessey claim, falsely, that the different treatment of international and domestic terrorism does not result in disparate treatment for Muslims.
Hennessey lays out the law behind terrorism charges and argues (and I agree) that the distinction is mostly investigative.
The most consequential citation to the § 2331(5) domestic terrorism definition is in the Attorney General Guidelines for Domestic FBI Operations which authorizes the FBI to conduct “enterprise investigations” for the purpose of establishing the factual basis that reasonably indicates a group has or intends to commit an act of “domestic terrorism as defined in 18 U.S.C. § 2331(5) involving a violation of federal criminal law”:
As a consequence, labeling an act one of “domestic terrorism” is most important in the context of investigations, and not ultimately indictments.
She claims it’s okay to treat domestic “ideologically-motivated mass shootings” (which is a great term) as murder because states have the capacity to investigate them.
We don’t want to have a general federal murder statute, and the states are perfectly capable of prosecuting murders of American citizens within their borders, even those that are motivated by politics.
States have no lack of capacity to investigate shootings, no lack of authorities to prosecute them, and mass shooters have tended to be very local in the past.
Of course, interest in investigating is very different from capacity to. And for many forms of right wing terrorism — the targeting of minorities and health clinics — there has been local disinterest in investigating the networks behind them. That problem has been addressed in both cases, though not by making these crimes terrorism, but rather by creating “hate crime” and Freedom of Access to Clinic Entrances laws that can give the Feds jurisdiction. But that jurisdiction does not mean the crimes that require federal investigation or prosecution because localities are disinterested then get treated as terrorism crimes, especially not prospectively. That means the FBI will be bureaucratically less focused on and less rewarded for investigating them, and will more often intervene after an attack than before, when prevention is possible. That bureaucratic focus shows up in Congressional tracking of terrorism cases and White House focus on them, which is another way of saying FBI’s bosses and purse-strings pay closer attention to the stuff that gets charged as terrorism.
Hennessey claims this doesn’t result in any disparate treatment of Muslims. To prove that there is no disparity arising out of the limitation of domestic terrorism mostly to crimes involving bombs, she lays out a list of Muslims who killed using guns but did not get charged with terrorism. Here’s just part of her discussion (in the later part, she presumes attackers who died would not have been charged as terrorists).
By and large, violent extremists of all stripes who use bombs are prosecuted as terrorists, while violent extremists of all stripes who use guns get prosecuted as simple murderers. Consider Nidal Hassan, the Fort Hood shooter who professed an agenda of radical Islam, yet was prosecuted by the military for simple murder. Despite overwhelming calls to categorize the act as terrorism, the Pentagon treated it as an act of workplace violence. Shortly before the Fort Hood shooting in 2009, Abdulhakim Mujahid Muhammad killed two soldiers in front of a Little Rock, Arkansas recruiting station. Following the shooting, Muhammad expressed to investigators allegiance to al Qaeda in the Arabian Peninsula. Yet he was prosecuted by the state of Arkansas and ultimately pled guilty to capital murder charges, not terrorism. The most dramatic example may be that of Mir Aimal Kasi who, in 1993, shot two CIA employees dead outside the agency’s entrance in Langley, Virginia. Kasi’s stated motive was anger over the US treatment of people in the Middle East, particularly Palestinians. He fled to Pakistan, and following a four-year international manhunt and joint CIA-FBI capture operation in Pakistan, he was rendered back to the United States. How was he charged? Not with terrorism. Kasi was convicted by the state of Virginia on capital murder charges and executed in 2002.
But, even ignoring how she presumes certain charging decisions had some attackers not died, this is not enough to prove her claim. To prove it, she’d also have to prove that non-Muslims who use bombs in “ideologically-motivated” killings do get charged as terrorists, and that the ability to charge domestic crimes using bombs is not used by FBI to create terrorism prosecutions. With a few notable exceptions, those things aren’t true.
There are a number of cases of right wingers who could have gotten charged with a terrorist WMD charge but didn’t. Most notably, there’s Eric Rudolph — who not only serially bombed abortion clinics but bombed the Atlanta Olympics, then escaped across state lines. He was charged with explosives charges but not given a terrorism enhancement (he is serving multiple life sentences in any case). Indeed, his indictment — signed by current Deputy Attorney General Sally Quillian Yates when she was an AUSA — did not once call the series of bombings and threats Rudolph carried out terrorism, even though bombing the Olympics is a quintessential example of terrorism.
Then there’s another Sally Yates case (this time as US Attorney), the Waffle House plot, in which four geriatric right wingers plotted to use weapons and ricin dropped from a plane to overthrow the federal government. They actually bought what they thought was explosives from the FBI, but did not get charged with terrorism for either the ricin or the presumed explosives.
There’s Schaeffer Cox, who got busted for conspiring to kill federal authorities; he talked about using grenades but did not get charged with a WMD count. There’s Benjamin Kuzelka, the guy with Nazi propaganda who was trying to make TATP. There’s William Krar, the white supremacist caught with massive explosives who eventually pled to one chemical weapons charge, but without exposing what was presumed to be a broader network.
Meanwhile, there are just three cases I know of where non-Muslims did get charged with bomb-related terrorism charges — and to some degree, these exceptions prove the rule (I’m not treating AETA “animal enterprise terrorism” cases, which introduce another order of magnitude of absurdity into the issue).
There is the Spokane MLK bomber Kevin Harpham, whose sophisticated bomb got found before it went off. Harpham’s plea deal retained a terrorism WMD charge, but his sentence was lighter than those of similarly situated Muslim terrorists.
There is the Hutaree group charged on multiple counts of trying to overthrow the government, including with bombs. The terrorism related charges against the Hutaree were thrown out entirely (in part because they were charged badly), and most of the 9 of them went free.
The only case I know of that is parallel to the way many Muslims get treated is that of the Occupy Cleveland participants whose discussion of vandalism got inflamed — and focused on a target that might merit federal charges — by an informant who also plied them with jobs and other enticements. After pressing buttons they thought would detonate a bomb, they got charged as terrorists. The judge thought the punishments requested by the government “grotesque” and sentenced them much more lightly (though still to upwards of 6 years).
I say the Occupy Cleveland case is parallel because for the overwhelming number of cases charged as Islamic terrorism, the FBI supplies the bomb and often picks the target for a “wayward knucklehead” who then gets charged with terrorism (though judges almost never consider those charges “grotesque”). There were hundreds of them already by 2011. Often, the target would have not had the ability — in terms of money, experience, and other resources — to conduct the “bomb” plot by himself. So when Hennessey justifies charging bomb but not gun crimes as terrorism because “bombers tend to be more organized in interstate groups,” what she really means is that the FBI is an organized interstate group, because that’s the organizing force that provides the expertise in the overwhelming majority of terrorism cases.
Which brings me to the most alarming claim that Hennessey makes, in the midst of an argument that the civil liberties cost of treating domestic terrorism like international terrorism is too high: that what she calls “complex legal obligations” on using “incidental” collection reflects heightened privacy concerns.
The complex legal obligations generated by incidental or intentional focus on US persons reflects the heightened privacy and civil liberties concerns at stake when we use foreign intelligence tools domestically. And rightly so, as the process of investigating and prosecuting domestic terrorists and homegrown violent extremists risks infringing into areas of constitutionally protected speech, religion, and association.
To be fair, she was an NSA lawyer, not an FBI lawyer, which is why I consider this surprising claim a “blind spot.” The NSA does have to treat incidentally collected US person data carefully; they actually do very few back door searches of incidentally collected data.
But many (if not most) counterterrorism targets collected under Section 702 and all traditional FISA ones get shared directly with the FBI. And the FBI can access and use the incidentally collected data not only for formal investigations, but also for assessments, such as called-in tips, or even just to find stuff to use to coerce people to turn informant.
For incidentally collected US person data that resides in FBI’s databases, in other words, there are no complex legal obligations on incidental collection. None. It just sits there for 30 years at potential risk of contributing to a prosecution. And that’s a big source of the stings the FBI starts, when it throws an informant at some kid downloading Inspire or talking in a chat room to try to take them off the street by inventing a bomb plot.
Update: In her response to this piece, Hennessey makes it clear she believes this passage is wrong — and with respect to whether unreviewed data sits in FBI servers for 30 years, it is; with respect to how much CT data FBI gets directly, it may be. But as to its accessibility, per the PCLOB report on 702, it is not. So I’m replacing this paragraph with this language from PCLOB.
Because they are not identified as such in FBI systems, the FBI does not track the number of queries using U.S. person identifiers. The number of such queries, however, is substantial for two reasons.
First, the FBI stores electronic data obtained from traditional FISA electronic surveillance and physical searches, which often target U.S. persons, in the same repositories as the FBI stores Section 702–acquired data, which cannot be acquired through the intentional targeting of U.S. persons. As such, FBI agents and analysts who query data using the identifiers of their U.S. person traditional FISA targets will also simultaneously query Section 702–acquired data.
Second, whenever the FBI opens a new national security investigation or assessment, FBI personnel will query previously acquired information from a variety of sources, including Section 702, for information relevant to the investigation or assessment. With some frequency, FBI personnel will also query this data, including Section 702–acquired information, in the course of criminal investigations and assessments that are unrelated to national security efforts. In the case of an assessment, an assessment may be initiated “to detect, obtain information about, or prevent or protect against federal crimes or threats to the national security or to collect foreign intelligence information.”
Section 702–acquired communications that have not been reviewed must be aged off FBI systems no later than five years after the expiration of the Section 702 certifications under which the data was acquired.
So if conducting network investigations of “domestic terrorists and homegrown violent extremists risks infringing into areas of constitutionally protected speech, religion, and association” — and I absolutely agree it does — then it does for Muslims as well, except that because we’ve made the terrorism Muslims might engage in a different category of collection and thrown billions of dollars at it, they’re not accorded that protection.
Finally, there’s one other problem with the assumption that international terrorism requires enterprise investigations but domestic terrorism doesn’t (that’s not actually what happens; FBI does do enterprise investigations of domestic terrorism, just with a different focus and different SIGINT tools). People get killed as a result.
Consider Kevin Harpham’s case, the MLK bomber. The government used the correspondence Harpham had while in jail with known white supremacist Frazier Glenn Miller (who was, I believe, then in North Carolina but would move to Kansas) to call for an enhanced sentence. Miller’s offer to raise money for Harpham might have been evidence of an interstate network worth tracking. But the FBI appears not to have done so, though, given that Miller went on to murder three people he believed (wrongly) to be Jewish two years later. Miller got charged at the state level and will be executed.
Similarly, supporters of the militant anti-choice group Army of God have corresponded with people who had been previously convicted for attacks before attacking others (in addition to publishing Rudolph’s memoir), and George Tiller’s murderer, Scott Roeder, has issued threats while talking with Army of God supporters from prison as recently as two years ago. These things have happened across state boundaries, so would be tougher to investigate at the local level. Like ISIS or AQAP, Army of God makes how-to materials available to its supporters.
Indeed, the way in which Army of God fans have networked is particularly important given this claim from Hennessey:
[With the Planned Parenthood attack], there is no apparent evidence that the perpetrator was acting as part of a larger group, and thus no need for the federal government to pursue an enterprise investigation.
I presume she isn’t privy to the evidence discovered so far, so in fact has no basis to say this. But even the public reporting poses good reason to look for such connections. Six years ago, Dear considered the Army of God to be heroes for their actions.
In 2009, said the person, who spoke on the condition of anonymity out of concerns for the privacy of the family, Mr. Dear described as “heroes” members of the Army of God, a loosely organized group of anti-abortion extremists that has claimed responsibility for a number of killings and bombings.
As ISIS did with the San Bernardino attack, the Army of God hailed the Planned Parenthood attack.
Robert Lewis Dear aside, Planned Parenthood murders helpless preborn children. These murderous pigs at Planned Parenthood are babykillers and they reap what they sow. In this case, Planned Parenthood selling of aborted baby parts came back to bite them.
Dear was very active online, so it is not unreasonable to wonder whether he had reached out in the interim period to the group or consulted their how-to resources. But you’re not going to find those ties unless you look for them, and a series of localized murder trials is far less likely to do that than an FBI enterprise investigation.
The FBI doesn’t entirely ignore attacks on reproductive health clinics. Indeed, it issued a threat assessment predicting increased targeting of clinics in September. Would a more focused enterprise investigation into Army of God before the Planned Parenthood attack have prevented it?
Frankly, as Hennessey says, there’s a balancing of civil liberties that goes on. And it may be that the number of deaths we suffer from non-Islamic “ideologically-motivated mass shootings” hits that sweet spot of the number of deaths we’ll tolerate given the risks to civil liberties (or — as I argued at Salon — it may be that because we suffer so many non-Islamic “ideologically-motivated mass shootings” and non-ideological mass shootings, we need to develop another approach to combat them).
But under the current system, the victims of Islamic “ideologically-motivated mass shootings” are treated as more important deaths than all the others (which almost certainly inflates the import of them and thereby feeds more terror). All American mass deaths, ideologically-motivated, Islamic or not, deserve the same access to justice (or chance of prevention). And all Americans, whether they worship in a church or a mosque or a library, deserve the same protection for their First Amendment rights.
Update: As I noted above, Hennessey has replied to my piece. She expands on this sentence:
It is also the case that Muslim populations have been disproportionately impacted by foreign-specific material support laws.
She does so to make it far, far more clear that she recognizes there is a difference.
In fact, I actually do believe that Muslims are disparately impacted by terrorism laws. Indeed, in my piece I make this point expressly with respect to material support laws. Furthermore, whatever the legal distinctions between homegrown violent terrorists and domestic terrorists—domestic actors with no contact with foreign groups who may or may not be inspired by foreign terrorist ideology—the law certainly applies dramatically different consequences to foreign terrorist organizations and international terrorists who commit crimes in coordination with those organizations. The FBI can pose as Al Qaeda or ISIS operatives and trick a homegrown violent extremist into becoming an international terrorist based on contact with wholly fictitious terrorists. Walk that out to include crimes of attempt and material support, as Wheeler notes, and the disparate application is reflected in the prosecution numbers.
She then shows the results of her research to find several more white people charged as terrorists (notably McVeigh; I don’t contest that if we go back far enough in time before 9/11 we could find loads of white people charged as terrorists, and rightly so).
But her treatment of Rudolph reinforces my point.
Rudolph is a puzzling case, because the government declined to even indict on terrorism charges that would seem to have been clearly available. But while Rudolph was not charged as a terrorists, federal authorities had long publically referred to him as just that. In a statement following Rudolph’s arrest then Attorney General John Ashcroft called Rudolph’s crimes “terrorist attacks” outright.
First, the fact that Ashcroft calls Rudolph’s attacks terrorist attacks, but does not call him a terrorist, precisely stops short of calling a white man a terrorist. More importantly, Hennessey has spent two articles talking about terrorism being a legal distinction, specifically backing off what people get called.
There is an element of truth to this as a matter of media vocabulary, and certainly there are those in right-wing corners of the media who are quick to call terrorism any act of violence perpetrated by someone from an Arab or Muslim country.
But if we’re going to measure what people get called, then her gun/bomb distinction breaks down, because many of the Muslims attacking with guns get called terrorists by the Feds (though they generally did not call Nidal Hasan one, a case which adds the element of military targeting).
And all of this comes back to her initial point, with which I agree: this is about investigation. And the reality is, regardless of what it called him, the government treated Rudolph (and Harpham) as a lone wolf, not as a person in the network that he was in. One reason fewer white ideological terrorists get charged with terrorism is that until you do that investigation, you may not find the network, especially since the chances it will be sitting in an FBI server are much lower because of the different standards for collecting data. And, in the case of Frazier Glenn Miller, you may not prevent deaths you otherwise might have.
The FBI has dedicated 400 people to investigating what motivated the San Bernardino attackers because it is clear they were radicalized but their actual ties to foreign terrorists are not yet clear. That’s a focus on identifying foreign and US-based networks that rarely happens with white ideological violence, and as a result it doesn’t get approached systematically.
Democrats and Republicans do not agree that the waterboarding of captured terrorists was a crime, but many do agree it was a blunder.
That’s the central wisdom offered by Eli Lake, in a piece arguing against a Human Rights Watch report calling on renewed accountability for torture based on the evidence presented in the Senate Torture Report.
It’s a bit of a muddle. Obviously, Lake’s reference to waterboarding invokes the understanding of torture prior to the SSCI Report, which revealed far more than waterboarding, including anal rape masquerading as rectal feeding. If there’s a consensus he’s defending, it’s a consensus about waterboarding and “rectal feeding.”
By the end of his piece, he argues both that his claimed consensus is breaking down, and that it still holds — though here, again, he’s focusing on waterboarding, not the anal rape that’s also at issue.
At the end of the Obama administration, that bipartisan consensus is beginning to erode. In 2008, both the Democratic (Obama) and Republican (Senator John McCain) candidates opposed torture and favored closing Guantanamo. In 2015 Donald Trump has come out enthusiastically for waterboarding, pledging to authorize its use again if elected president. Carly Fiorina has defended waterboarding, saying it yielded valuable intelligence, and Jeb Bush has said he is open to repealing the ban on torture imposed by Obama.
Nonetheless other Republicans have held a firmer line. Both Ted Cruz and Rand Paul voted for the anti-torture amendment this summer. Many progressives hope this bipartisan opposition to torture can hold together after Obama leaves office. But this consensus will break apart if a foreign court prosecutes George W. Bush for a crime Barack Obama has long considered a blunder.
Key to understanding Lake’s call to hold off on investigating the torturers, though, is that “anti-torture amendment” that Cruz and Paul support but Carly and Trump might not. Here’s how HRW describes the amendment — which is a call to adhere to the Army Field Manual — in its report.
On June 16, 2015, the US Senate passed an amendment proposed by senators John McCain and Dianne Feinstein to a defense spending bill (the National Defense Authorization Act for Fiscal Year 2016) that if it becomes law, could codify much of what is in Obama’s executive order 13491. The amendment passed in the Senate by a vote of 78-21. The entire bill was then vetoed by Obama over other issues, but a similar provision remained in the compromise version of the bill which, as of this writing, was expected to be signed into law by the President. It provides that any individual detained by the US in an armed conflict can only be interrogated in ways outlined by the US Army Field Manual on Intelligence Interrogations. It also requires review and updating of the manual within three years to ensure that it reflects current best practice and complies with all US legal obligations and requires that the International Committee of the Red Cross get “notification of, and prompt” access to, all prisoners held by the US in any armed conflict. It is already clear under US law that torture and other ill-treatment is illegal but this requirement would help to more specifically restrain the physical action certain US interrogators could take. However, it is also impossible to know for sure how future administrations will interpret its obligations under the provisions. Additionally, an exemption for the FBI, the Department of Homeland Security, and other federal “law enforcement entities” was added to the compromise version of the bill.
That is, the amendment actually defers the review of techniques in the AFM to the next Administration, potentially a Cruz or Paul one, and doesn’t apply to the FBI.
As I and, especially, Jeff Kaye have pointed out, however, so long as the AFM has Appendix M in it, it can’t be considered a reliable guard against torture. Here’s part of what Kaye had to say about the watered down form in which the amendment was passed.
In what Democratic Senator Dianne Feinstein called a “minor” change to the National Defense Authorization Act (NDAA), a mandated review of the Army Field Manual (AFM) on interrogation was moved from one year to three years from now.
According to a “Q&A” at Human Rights First last June, the mandated review of the AFM was part of the McCain-Feinstein amendment to the NDAA, and was meant “to ensure that its interrogation approaches are lawful, humane, and based on the most up-to-date science.”
The fact there was any “review” at all was really a response to criticism from the United Nation’s Committee Against Torture, which demanded a review of the AFM’s Appendix M, which has been long criticized as allowing abusive interrogation techniques, including isolation, sleep deprivation, and sensory deprivation.
While it is a good thing that waterboarding and other SERE-derived forms of torture are not to be allowed anymore — and they were part of an experimental program in any case — long-standing forms of torture are now protected by law because they are part of the Army Field Manual itself.
When the pre-veto version of the NDAA was passed — the version that made the Army Field Manual on interrogation literally the law of the land — all the liberals and human rights groups stood up and applauded. None of them mentioned that only months before the UN had criticized the document for use of abusive techniques, and in particular the use of isolation, and sleep and sensory deprivation noted above. Not one.
So what we have now — what Lake would like to uphold — is a deferral of the issue to a potential Republican Administration. That’s not actually a consensus preventing torture at all.
Along the way to Lake’s conclusion showing any consensus against torture isn’t really a consensus against torture, he does cite some people — Jack Goldsmith (prior to the report, though I suspect he’d still say the same, even though I’m not sure Americans would be as supportive of “rectal feeding” as of a whitewashed description of waterboarding), Glenn Carle, Raha Wala — who oppose reopening the torture question inside the United States. Yet Lake keeps dodging DOJ’s approach to it.
Part of the problem for Human Rights Watch is that the Justice Department has already investigated cases where CIA officers went beyond the legal guidelines, and ended this probe in 2012 without pursuing prosecutions. Pitter pointed out that the federal prosecutor in this case, John Durham, has acknowledged that there were limitations on the evidence available to his team. Nonetheless, the Justice Department has not taken up the issue again.
DOJ has not taken up the issue again because it has refused to open the Torture Report. DOJ can’t very well consider the additional evidence in the report (on top of talking to victims, as HRW did for its own report) so long as it doesn’t open it.
Which actually supports HRW’s point: there’s a conspiracy to cover up this torture, and given that it won’t be investigated here, other countries have an obligation to do so.
I actually think Lake misses a way to make his muddled argument much stronger. For one, I think there might be more consensus, blindly defending the US, if a foreign court started prosecuting the US for torture. If HRW gets its way — and foreign governments investigate torture — you’ll see a lot more agreement that the US shouldn’t have to submit to the review of other countries.
But I actually think the fact the anti-prosecution consensus is now defending anal rape and not just waterboarding is key. If we discussed the anal rape as such — as HRW does — it becomes a lot harder to defend (though there is admittedly far too much public tolerance of rape in criminal prisons in this country, to say nothing of Gitmo, to believe more candid discussion that this was really always about rape would sway the public).
The CIA also used “rectal rehydration” or “rectal feeding” which, as described in the Senate Summary, would amount to sexual assault, on at least five different detainees. The practice, not known to have been authorized by the OLC, involved inserting pureed food or liquid nutrients into the detainee’s rectum through a tube, presumably without his consent. The CIA claims this was a medically necessary procedure and not an “enhanced interrogation technique.” The Senate Summary, however, states the procedure was done “without evidence of medical necessity.” Medical experts report that use of this type of procedure without evidence of medical necessity is “a form of sexual assault masquerading as medical treatment.” At least three other detainees were threatened with “rectal rehydration.” Allegations of excessive force used on two detainees during rectal exams do not appear to have been properly investigated. One of those two detainees, Mustafa al-Hawsawi, was later diagnosed with chronic hemorrhoids, an anal fissure, and symptomatic rectal prolapse. Some CIA detainees have also reported having suppositories forced into their anus, and other detainees have reported CIA operatives sticking fingers in their anus.
But once you defend anal rape in the terms CIA and its supporters do — that obviously bogus claim that it served as feeding or rehydration — you quickly get to an ongoing practice that is often contraindicated by medical necessity but used for coercion: forced feeding at Gitmo. Excruciating nasal feeding, rather than excruciating rectal feeding.
Here’s what documents submitted in Abu Wa’el Dhiab’s bid last year to halt his own force-feeding revealed.
[T]hese documents reveal that back on May 7, one of the government’s primary rebuttals to claims about the conditions under which Dhiab was force fed last year was not to refute those claims, but rather to claim he had no standing to complain because he was not — at that point — being force fed. Only 6 days later Gitmo cleared Dhiab to be force fed.
Underlying this discussion is Dhiab’s claim that the government has made the standards for force feeding arbitrary so as to be able to subject those detainees leading force feeding campaigns to painful treatment to get them to stop.
To substantiate that argument, the memorandum unsealed on Friday lays out the changes made to Gitmo’s force feeding protocol in November and December. Those changes include:
- Deletion of limits on the speed at which detainees could be force fed
- Elimination of guidelines on responding to complaints about speed of force feeding
- Change of weight monitoring from daily to weekly
- Deletion of chair restraint guidelines (DOD made a special SOP to cover the restraint chair, which they have thus far refused to turn over)
- Expansion of scenarios in which prisoners can be force fed, including those at 85% of ideal body weight (IBW)
- Deletion of provisions against on-off force feeding
- Discontinuation of use of Reglan (this has to do with potentially permanent side effects from the drug)
- Replacement of phrase “hunger strike” with phrase “medical management of detainees with weight loss”
In response, the government argued (at a time Dhiab was not eating but before they put him on the force feeding list) that he didn’t have standing because he had not been force fed for 2 months.
That is, Dhiab argued compellingly that force-feeding as it sometimes occurs at Gitmo is about coercion through pain, not about medical necessity.
Particularly during periods of broad hunger striking in Gitmo, it hasn’t been (primarily) about feeding prisoners who don’t want to eat. It has been about breaking resistance.
Along with Appendix M, the force-feeding practices at Gitmo are another thing the UN objected to last year.
And while Dhiab has been released, the 75-pound Tariq Ba Odah remains on hunger strike, though the Obama Administration still claims the authority to detain him (Odah has been cleared for release since 2010) and force-feed him, even though years of the process have created severe medical problems.
On this issue — the use of torturous techniques to coerce submission — I absolutely agree with Lake there is consensus. While some — including Dianne Feinstein and Gladys Kessler (who has seen videos of the process) — oppose it, we’re not seeing any legislation to stop the practice, and the Executive continues to insist it has absolute discretion in treatment of detainees at Gitmo so long as it is willing to claim it’s doing so for their own good, however dubious those claims may appear. That’s true, in part, because Democrats don’t want to discomfit their president.
And so, in the end, I agree with Lake that there is a consensus in DC. I’d even argue it’s nowhere near as fragile as he suggests by the end of his piece.
But I’d also argue the consensus that it is okay to nasally or rectally “feed” human beings — in some cases, for years — so long as you can excuse the obviously coerced submission involved with a claim of medical necessity is precisely why others should intervene. Lake may be right that there’s a consensus saying “rectal feeding” shouldn’t be prosecuted, but that doesn’t mean that consensus is defensible.
For days now, surveillance hawks have been complaining that terrorists probably used encryption in their attack on Paris last Friday. That, in spite of the news that authorities used a phone one of the attackers threw in a trash can to identify a hideout in St. Denis (this phone in fact might have been encrypted and brute force decrypted, but given the absence of such a claim and the quick turnaround on it, most people have assumed both it and the pre-attack chats on it were not encrypted).
I suspect we’ll learn the attackers did use encryption (and a great deal of operational security that has nothing to do with encryption) at some point in planning their attack — though the entire network appears to have been visible through metadata and other intelligence. Thus far, however, there’s only one way we know of that the terrorists used encryption leading up to the attack: when one of them paid for things like a hotel online, the processing of his credit card (which was in his own name) presumably took place over HTTPS (hat tip to William Ockham for first making that observation). So if we’re going to blindly demand we prohibit the encryption the attackers used, we’re going to commit ourselves to far, far more hacking of online financial transactions.
I’m more interested in the concerns about terrorists’ claimed use of PlayStation 4. Three days before the attack, Belgium’s Interior Minister said all countries were having problems with PlayStation 4s, which led to a frenzy mistakenly claiming the Paris terrorists had used it (there’s far more reason to believe they used Telegram).
One of those alternatives was highlighted on Nov. 11, when Belgium’s federal home affairs minister, Jan Jambon, said that a PlayStation 4 (PS4) console could be used by ISIS to communicate with their operatives abroad.
“PlayStation 4 is even more difficult to keep track of than WhatsApp,” said Jambon, referring to the secure messaging platform.
Earlier this year, Reuters reported that a 14-year-old boy from Austria was sentenced to a two-year jail term after he downloaded instructions on bomb-building onto his Playstation games console, and was in contact with ISIS.
It remains unclear, however, how ISIS would have used PS4s, though options range from the relatively direct methods of sending messages to players or voice-chatting, to more elaborate methods cooked up by those who play games regularly. Players, for instance, can use their weapons during a game to send a spray of bullets onto a wall, spelling out whole sentences to each other.
This has DiFi complaining that Playstation is encrypted.
Even Playstation is encrypted. It’s very hard to get the data you need because it’s encrypted
Thus far, it’s not actually clear most communications on Playstation are encrypted (though players may be able to pass encrypted objects about); most people I’ve asked think the communications are not encrypted, though Sony isn’t telling. What is likely is that there’s no easy way to collect metadata tracking the communications within games, which would make it hard to determine whether some parts of the communications data are encrypted.
But at least one kind of data on Playstations — probably two — is encrypted: Credit cards and (probably) user data. That’s because 4 years ago, Playstation got badly hacked.
“The entire credit card table was encrypted and we have no evidence that credit card data was taken,” said Sony.
This is the slimmest amount of good news for PlayStation Network users, but it alone raises very serious concerns, since Sony has yet to provide any details on what sort of encryption has been used to protect that credit card information.
As a result, PlayStation Network users have absolutely no idea how safe their credit card information may be.
But the bad news keeps rolling in:
“The personal data table, which is a separate data set, was not encrypted,” Sony notes, “but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”
A very sophisticated security system that ultimately failed, making it useless.
Why Sony failed to encrypt user account data is a question that security experts have already begun to ask. Along with politicians both in the United States and abroad.
Chances are Sony’s not going to have an answer that’s going to please anyone.
That was one in a series of really embarrassing hacks, and I assume Sony has locked things down since. Three years after that Playstation hack, of course, Sony’s movie studio would be declared critical infrastructure after it also got hacked.
Here’s the thing: Sony is the kind of serially negligent company that we need to embrace good security if the US is going to keep itself secure. We should be saying, “Encrypt away, Sony! Please keep yourself safe because hackers love to hack you and they’ve had spectacular success doing so! Jolly good!”
But we can’t, at the same time, be complaining that Sony offers some level of encryption as if that makes the company a material supporter of terrorism. Sony is a perfect example of how you can’t have it both ways, secure against hackers but not against wiretappers.
Amid the uproar about terrorists maybe using encryption, the ways they may have — to secure online financial transactions and game player data — should be a warning about condemning encryption broadly.
Because next week, when hackers attack us, we’ll be wishing our companies had better encryption to keep us safe.
Update: Thought I’d put up a list of Senators people should thank for voting against CISA.
GOP: Crapo, Daines, Heller, Lee, Risch, and Sullivan. (Paul voted against cloture but did not vote today.)
Dems: Baldwin, Booker, Brown, Cardin, Coons, Franken, Leahy, Markey, Menendez, Merkley, Sanders, Tester, Udall, Warren, Wyden
Just now, the Senate voted to pass the Cybersecurity Information Sharing Act by a vote of 74 to 21. While 7 more people voted against the bill than had voted against cloture last week (Update: the new votes were Cardin and Tester, plus Crapo, Daines, Heller, Lee, Risch, and Sullivan, with Paul not voting), this is still a resounding vote for a bill that will authorize domestic spying with no court review in this country.
The amendment voting process was interesting of its own accord. Most appallingly, just after Patrick Leahy cast his 15,000th vote on another amendment — which led to a break to talk about what a wonderful person he is, as well as a speech from him about how the Senate is the conscience of the country — Leahy’s colleagues voted 57 to 39 against his amendment that would have stopped the creation of a new FOIA exemption for CISA. So right after honoring Leahy, his colleagues kicked one of his key issues, FOIA, in the ass.
More telling, though, were the votes on the Wyden and Heller amendments, the first two that came up today.
Wyden’s amendment would have required more stringent scrubbing of personal data before sharing it with the federal government. The amendment failed by a vote of 55-41 — still a big margin, but enough to sustain a filibuster. Particularly given that Harry Reid switched votes at the last minute, I believe that vote was designed to show enough support for a better bill to strengthen the hand of those pushing for that in conference (the House bills are better on this point). The amendment had the support of a number of Republicans — Crapo, Daines, Gardner, Heller, Lee, Murkowski, and Sullivan — some of whom would vote against passage. Most of the Democrats who voted against Wyden’s amendment — Carper, Feinstein, Heitkamp, Kaine, King, Manchin, McCaskill, Mikulski, Nelson, Warner, Whitehouse — consistently voted against any amendment that would improve the bill (and Whitehouse even voted for Tom Cotton’s bad amendment).
The vote on Heller’s amendment looked almost nothing like Wyden’s. Sure, the amendment would have changed just two words in the bill, requiring the government to have a higher standard for information it shared internally. But it got a very different crowd supporting it, with a range of authoritarian Republicans — Barrasso, Cassidy, Enzi, Ernst, and Hoeven — voting in favor. That made the vote on the bill much closer. So Reid, along with at least 7 other Democrats who voted for Wyden’s amendment, including Brown, Klobuchar, Murphy, Schatz, Schumer, Shaheen, and Stabenow, voted against Heller’s weaker amendment. While some of these Democrats — Klobuchar, Schumer, and probably Shaheen and Stabenow — are affirmatively pro-unconstitutional spying anyway, the swing, especially from Sherrod Brown, who voted against the bill as a whole, makes it clear that these are opportunistic votes to achieve an outcome. Heller’s amendment fell just short, 49-47, and would have passed had some of those Dems voted in favor (the GOP Presidential candidates were not present, but that probably would have been at best a wash and possibly a one vote net against, since Cruz voted for cloture last week). Ultimately, I think Reid and these other Dems are moving to try to deliver something closer to what the White House wants, which is still unconstitutional domestic spying.
Richard Burr seemed certain that this will go to conference, which means people like him, DiFi, and Tom Carper will try to make this worse, even as people from the House point out that there are far more people who oppose this kind of unfettered spying in the House. We shall see.
For now, however, the Senate has embraced a truly awful bill.
Update, all amendment roll calls
Cotton amendment: 22-73-5
Final passage: 74-21-5
As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get the FISA Court to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.
So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.
The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.
Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.
Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.
Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way they currently refuse to. It would also create a record of investigative sourcing that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.
In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.
Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.
Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” The “necessary to describe or mitigate” is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).
But Tester’s standard is different than the standard for sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected along with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.
Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got notices — notices that will only go out if the government affirmatively learns that data wasn’t related to a threat but got shared anyway. Those notices are almost never going to happen. So the number will be close to zero, instead of the tens of thousands, at least, that would have shown up under Tester’s measure.
So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.
Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data that had personal data that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown, at each stage of the data sharing, how much personal data was getting shared.
Burr and DiFi stripped that entirely.
Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.
Burr and DiFi stripped that really critical measure.
Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.
Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.
That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.
Most importantly, though, they want to keep this information from the full Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, just be shared with the Intelligence Committee unless someone asks nicely for it).
But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.
I’ve been wracking my brain to understand why the Intel Community has been pushing CISA so aggressively.
I get why the Chamber of Commerce is pushing it: because it sets up a regime under which businesses will get broad regulatory immunity in exchange for voluntarily sharing their customers’ data, even if they’re utterly negligent from a security standpoint, while also making it less likely that information their customers could use to sue them would become public. For the companies, it’s about sharply curtailing the risk of (charitably) having imperfect network security or (more realistically, in some cases) being outright negligent. CISA will minimize some of the business costs of operating in an insecure environment.
But why — given that it makes it more likely businesses will wallow in negligence — is the IC so determined to have it, especially when generalized sharing of cyber threat signatures has proven ineffective in preventing attacks, and when there are far more urgent things the IC should be doing to protect themselves and the country?
Richard Burr and Dianne Feinstein’s move the other day, which in the guise of ensuring DHS gets to continue scrubbing data on intake instead gives the rest of the IC veto power over that scrub (and almost certainly means the bill is substantially a means of eliminating the privacy role DHS currently plays), leads me to believe the IC plans to use CISA as it might have used (or might be using) a cyber certification under upstream 702.
Since NYT and ProPublica caught up to my much earlier reporting on the use of upstream 702 for cyber, people have long assumed that CISA would work with upstream 702 authority to magnify the way upstream 702 works. Jonathan Mayer described how this might work.
This understanding of the NSA’s domestic cybersecurity authority leads to, in my view, a more persuasive set of privacy objections. Information sharing legislation would create a concerning surveillance dividend for the agency.
Because this flow of information is indirect, it prevents businesses from acting as privacy gatekeepers. Even if firms carefully screen personal information out of their threat reports, the NSA can nevertheless intercept that information on the Internet backbone.
Note that Mayer’s model assumes the Googles and Verizons of the world make an effort to strip private information; NSA would then use the signature turned over to the government under CISA to go get the very private information just stripped out. But Mayer’s model — and the ProPublica/NYT story — never considered how the 2011 John Bates ruling on upstream collection might hinder that model, particularly as it pertains to domestically collected data.
As I laid out back in June, NSA’s optimistic predictions they’d soon get an upstream 702 certificate for cyber came in the wake of John Bates’ October 3, 2011 ruling that the NSA had illegally collected US person data. Of crucial importance, Bates judged that data obtained in response to a particular selector was intentionally, not incidentally, collected (even though the IC and its overseers like to falsely claim otherwise), even data that just happened to be collected in the same transaction. Crucially, pointing back to his July 2010 opinion on the Internet dragnet, Bates said that disclosing such information, even just to the court or internally, would be a violation of 50 USC 1809(a), which he used as leverage to make the government identify and protect any US person data collected using upstream collection before otherwise using the data. I believe this decision established a precedent for upstream 702 that would make it very difficult for FISC to permit the use of cyber signatures that happened to be collected domestically (which would count as intentional domestic collection) without rigorous minimization procedures.
The government, at a time when it badly wanted a cyber certificate, considered appealing his decision, but ultimately did not. Instead, they destroyed the data they had illegally collected and — in what was almost certainly a related decision — destroyed all the PATRIOT-authorized Internet dragnet data at the same time, December 2011. Bates did permit the government to keep collecting upstream data, but only under more restrictive minimization procedures.
Neither ProPublica/NYT nor Mayer claimed NSA had obtained an upstream cyber certificate (though many other people have assumed it did). We actually don’t know, and the evidence is mixed.
Even as the government was scrambling to implement new upstream minimization procedures to satisfy Bates’ order, NSA had another upstream violation. That might reflect the government informing Bates, for the first time, that it had been using upstream to collect on cyber signatures (there’s no sign they informed him during the 2011 discussion, though the 2011 minimization procedures may reflect that they already had), or it might represent some other kind of illegal upstream collection. When the government got Congress to reauthorize FAA in 2012, it did not inform them it was using or intended to use upstream collection to collect cyber signatures. Significantly, even as Congress debated FAA, it considered but rejected the first of the predecessor bills to CISA.
My guess is that the FISC did approve cyber collection, but did so with some significant limitations, akin to, or perhaps even more restrictive than, the restrictions on multiple communication transactions (MCTs) required in 2011. I say that, in part, because of language in USA F-ReDux (section 301) permitting the government to use information improperly collected under Section 702 if the FISA Court imposed new minimization procedures. While that might have just referred back to the hypothetical 2011 example (in which the government had to destroy all the data), I think it as likely that Congress was trying to permit the government to retain data questioned later.
I also say that because of this new language in the NSA minimization procedures approved in 2014:
Additionally, nothing in these procedures shall restrict NSA’s ability to conduct vulnerability or network assessments using information acquired pursuant to section 702 of the Act in order to ensure that NSA systems are not or have not been compromised. Notwithstanding any other section in these procedures, information used by NSA to conduct vulnerability or network assessments may be retained for one year solely for that limited purpose. Any information retained for this purpose may be disseminated only in accordance with the applicable provisions of these procedures.
That is, the FISC approved new procedures that permit the retention of vulnerability information for use domestically, but it placed even more restrictions on it (retention for just one year, retention solely for the defense of that agency’s network, which presumably prohibits its use for criminal prosecution, not to mention its dissemination to other agencies, other governments, and corporations) than it had on MCTs in 2011.
To be sure, there is language in both 2011 and 2014 NSA MPs that permits the agency to retain and disseminate domestic communications if it is necessary to understand a communications security vulnerability:
the communication is reasonably believed to contain technical data base information, as defined in Section 2(i), or information necessary to understand or assess a communications security vulnerability. Such communication may be provided to the FBI and/or disseminated to other elements of the United States Government. Such communications may be retained for a period sufficient to allow a thorough exploitation and to permit access to data that are, or are reasonably believed likely to become, relevant to a current or future foreign intelligence requirement. Sufficient duration may vary with the nature of the exploitation.
But at least on its face, that language is about retaining information to exploit (offensively) a communications vulnerability. Whereas the more recent language — which is far more restrictive — appears to address retention and use of data for defensive purposes.
The 2011 ruling strongly suggested that FISC would interpret Section 702 to prohibit much of what Mayer envisioned in his model. And the addition to the 2014 minimization procedures leads me to believe FISC did approve very limited use of Section 702 for cyber security, but with such significant limitations on it (again, presumably stemming from 50 USC 1809(a)’s prohibition on disclosing data intentionally collected domestically) that the IC wanted to find another way. In other words, I suspect NSA (and FBI, which was working closely with NSA to get such a certificate in 2012) got their cyber certificate, only to discover it didn’t legally permit them to do what they wanted to do.
And while I’m not certain, I believe that in ensuring that DHS’ scrubs get dismantled, CISA gives the IC a way to do what it would have liked to with a FISA 702 cyber certificate.
Let’s go back to Mayer’s model of what the IC would probably like to do: a private company finds a threat and removes private data, leaving just a selector, after which NSA deploys the selector on backbone traffic, which sweeps the private data right back in, presumably on whatever parts of the Internet backbone NSA has access to via its upstream collection (which is understood to be infrastructure owned by the telecoms).
But in fact, Step 4 of Mayer’s model — NSA deploys the signature as a selector on the Internet backbone — is not done by the NSA. It is done by the telecoms (that’s the Section 702 cooperation part). So his model would really be private business > DHS > NSA > private business > NSA > treatment under NSA’s minimization procedures if the data were handled under upstream 702. Ultimately, the backbone operator is still going to be the one scanning the Internet for more instances of that selector; the question is just how much data gets sucked in with it and what the government can do once it gets it.
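To make that chain concrete, here is a minimal sketch of the flow just described; every name and data structure is a hypothetical illustration, not any real system’s interface:

```python
# Hypothetical sketch of the flow described above. Every name and data
# structure is an illustrative assumption, not any real system's API.

def company_report(incident: dict) -> str:
    """A firm finds a threat and strips private data, keeping only a
    selector (here, a malicious domain) to share with the government."""
    return incident["signature"]

def backbone_scan(packets: list, selector: str) -> list:
    """The backbone operator scans transiting traffic for the selector,
    sweeping in the full content of any match, including the very
    private data the reporting firm stripped out."""
    return [pkt for pkt in packets if selector in pkt["payload"]]

packets = [
    {"payload": "GET /weather HTTP/1.1", "user": "alice"},
    {"payload": "beacon to evil.example/c2", "user": "bob"},
]
selector = company_report({"signature": "evil.example"})
matches = backbone_scan(packets, selector)
# matches now carries bob's full traffic; what happens to it next depends
# on the applicable minimization (or, under CISA, "privacy") procedures.
```

The point of the sketch is that the private data comes back into government hands regardless of how carefully the reporting firm scrubbed its initial report; the only question is what rules govern the data once collected.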
And that’s important because CISA codifies private companies’ authority to do that scan.
For all the discussion of CISA and its definitions, there has been little discussion of what might happen at the private entities. But the bill affirmatively authorizes private entities to monitor their systems, broadly defined, for cybersecurity purposes.
(a) AUTHORIZATION FOR MONITORING.—
(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—
(A) an information system of such private entity;
(B) an information system of another entity, upon the authorization and written consent of such other entity;
(C) an information system of a Federal entity, upon the authorization and written consent of an authorized representative of the Federal entity; and
(D) information that is stored on, processed by, or transiting an information system monitored by the private entity under this paragraph.
(2) CONSTRUCTION.—Nothing in this subsection shall be construed—
(A) to authorize the monitoring of an information system, or the use of any information obtained through such monitoring, other than as provided in this title; or
(B) to limit otherwise lawful activity.
Defining “monitor” this way:
(14) MONITOR.—The term ‘‘monitor’’ means to acquire, identify, or scan, or to possess, information that is stored on, processed by, or transiting an information system.
That is, CISA affirmatively permits private companies to scan, identify, and possess cybersecurity threat information transiting or stored on their systems. It permits private companies to conduct precisely the same kinds of scans the government currently obligates telecoms to do under upstream 702, covering data both transiting their systems (which for the telecoms would mean transiting their backbone) and stored on their systems (so cloud storage). To be sure, big telecom and Internet companies do that anyway for their own protection, though this bill may extend the authority into cloud servers and competing tech company content that transits the telecom backbone. And it specifically does so in anticipation of sharing the results with the government, with very limited requirements to scrub the data beforehand.
Thus, CISA permits the telecoms to do, for cybersecurity purposes, the kinds of scans they currently do for foreign intelligence purposes, in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to keep doing so, only now the results can be turned over to and used by the government without regard to whether the signature has a foreign tie. Unlike FISA, CISA permits the government to collect entirely domestic data.
Of course, there’s no requirement that the telecoms scan for every signature the government shares with them and share the results with the government. But both Verizon and AT&T have a significant chunk of federal business — which just got put out for rebid on a contract that will amount to $50 billion — and they surely would be asked to scan the networks supporting federal traffic for those signatures (remember, this entire model of scanning domestic backbone traffic got implicated in Qwest losing a federal bid, which led to Joe Nacchio’s prosecution). So they’ll be scanning some part of the networks they operate with the signatures. CISA just makes it clear they can also scan their non-federal backbone if they want to. And the telecoms are outspoken supporters of CISA, so we should presume they plan to share promiscuously under this bill.
Assuming they do so, CISA offers several more improvements over FISA.
First — perhaps most important for the government — there are no pesky judges. The FISC gets a lot of shit for being a rubber stamp, but for years its judges have tried to keep the government operating in the vicinity of the Fourth Amendment through their review of minimization procedures. Even John Bates, who was largely a pushover for the IC, succeeded in getting the government to agree that it can’t disseminate domestic data it intentionally collected. And if I’m right that the FISC gave the government a cyber certificate but sharply limited how it could use that data, then it did so on precisely this issue. Significantly, CISA continues a trend we already saw in USA F-ReDux, wherein the Attorney General, rather than a judge, gets to decide whether privacy procedures (no longer even named minimization procedures!) are adequate. Equally significant, while CISA permits the use of CISA-collected data for a range of prosecutions, unlike FISA it requires no notice to defendants of where the government obtained that data.
In lieu of judges, CISA envisions PCLOB and Inspectors General conducting the oversight (as well as audits being possible though not mandated). As I’ll show in a follow-up post, there are some telling things left out of those reviews. Plus, the history of DOJ’s Inspector General’s efforts to exercise oversight over such activities offers little hope these entities, no matter how well-intentioned, will be able to restrain any problematic practices. After all, DOJ’s IG called out the FBI in 2008 for not complying with a 2006 PATRIOT Act Reauthorization requirement to have minimization procedures specific to Section 215, but it took until 2013, with three years of intercession from FISC and leaks from Edward Snowden, before FBI finally complied with that 2006 mandate. And that came before FBI’s current practice of withholding data from its IG and even some information in IG reports from Congress.
In short, given what we know of the IC’s behavior when there was a judge with some leverage over its actions, there is absolutely zero reason to believe that any abuses would be stopped under a system without any judicial oversight. The Executive Branch cannot police itself.
Finally, there’s the question of what happens at DHS. No matter what you think about NSA’s minimization procedures (and they do have flaws), they do ensure that data that comes in through NSA doesn’t get broadly circulated in a way that identifies US persons. The IC has increasingly bypassed this control since 2007 by putting FBI at the front of data collection, which means data can be shared broadly even outside of the government. But FISC never permitted the IC to do this with upstream collection. So any content (metadata was different) on US persons collected under upstream collection would be subjected to minimization procedures.
This CISA model eliminates that control too. After all, CISA, as written, would let FBI and NSA veto any scrub (including of content) at DHS. And incoming data (again, probably including content) would be shared immediately not only with FBI (which has been the vehicle for sharing NSA data broadly) but also Treasury and ODNI, which are both veritable black holes from a due process perspective. And what few protections for US persons are tied to a relevance standard that would be met by virtue of a tie to that selector. Thus, CISA would permit the immediate sharing, with virtually no minimization, of US person content across the government (and from there to private sector and local governments).
I welcome corrections to this model — I presume I’ve overstated how much of an improvement over FISA this program would be. But if this analysis is correct, then CISA would give the IC everything it would have wanted from a cybersecurity certificate under Section 702, with none of the limits, inadequate as they might be, that such a certificate would have had and may in fact have. CISA would provide an administrative way to spy on US person (domestic) content, all without any judicial review.
All of which brings me back to why the IC wants this this much. In at least one case, the IC did manage to use a combination of upstream and PRISM collection to stop an attempt to steal large amounts of data from a defense contractor. That doesn’t mean it’ll be able to do it at scale, but if by offering various kinds of immunity it can get all backbone providers to play along, it might be able to improve on that performance.
But CISA isn’t so much a cybersecurity bill as it is an Internet domestic spying bill, with permission to spy on a range of nefarious activities in cyberspace, including kiddie porn and IP theft. This bill, because it permits the spying on US person content, may be far more useful for that purpose than preventing actual hacks. That is, it won’t fix the hacking problem (it may make it worse by gutting Federal authority to regulate corporate cyber hygiene). But it will help police other kinds of activity.
If I’m right, the IC’s insistence that it needs CISA in the name of cybersecurity, even though cybersecurity is not necessarily what it intends to accomplish, makes more sense.
Update: This post has been tweaked for clarity.
Update, November 5: I should have written this post before I wrote this one. In it, I point to language in the August 26, 2014 Thomas Hogan opinion reflecting earlier approval, at least in the FBI minimization procedures, to share cyber signatures with private entities. The first approval was on September 20, 2012. The FISC approved the version still active in 2014 on August 30, 2013. (See footnote 19.) That certainly suggests FISC approved cyber sharing more broadly than the 2011 opinion might have suggested, though I suspect it still included more restrictions than CISA would. Moreover, if the language only got approved for the FBI minimization procedures, it would apply just to PRISM production, given that the FBI does not (or at least didn’t used to) get unminimized upstream production.
A key change — one Burr and Feinstein have highlighted in their comments on the floor — is the integration of DHS even more centrally into the data intake process. Just as one example, the MA adds the Secretary of Homeland Security to the process of setting up the procedures for information sharing.
Not later than 60 days after the date of the enactment of this Act, the Attorney General and the Secretary of Homeland Security shall, in coordination with the heads of the appropriate Federal entities, develop and submit to Congress interim policies and procedures relating to the receipt of cyber threat indicators and defensive measures by the Federal Government. [my emphasis]
That change is applied throughout.
But there’s one area where adding more DHS involvement appears to be just a show: where the bill permits DHS to conduct a scrub of the data on intake (as Feinstein described, this was an attempt to integrate Tom Carper’s and Chris Coons’ amendments doing just that).
This is also an issue DHS raised in response to Al Franken’s concerns about how CISA would affect their current intake procedure.
To require sharing in “real time” and “not subject to any delay [or] modification” raises concerns relating to operational analysis and privacy.
First, it is important for the NCCIC to be able to apply a privacy scrub to incoming data, to ensure that personally identifiable information unrelated to a cyber threat has not been included. If DHS distributes information that is not scrubbed for privacy concerns, DHS would fail to mitigate and in fact would contribute to the compromise of personally identifiable information by spreading it further. While DHS aims to conduct a privacy scrub quickly so that data can be shared in close to real time, the language as currently written would complicate efforts to do so. DHS needs to apply business rules, workflows and data labeling (potentially masking data depending on the receiver) to avoid this problem.
Second, customers may receive more information than they are capable of handling, and are likely to receive large amounts of unnecessary information. If there is no layer of screening for accuracy, DHS’ customers may receive large amounts of information with dubious value, and may not have the capability to meaningfully digest that information.
While the current Cybersecurity Information Sharing Act recognizes the need for policies and procedures governing automatic information sharing, those policies and procedures would not effectively mitigate these issues if the requirement to share “not subject to any delay [or] modification” remains.
To ensure automated information sharing works in practice, DHS recommends requiring cyber threat information received by DHS to be provided to other federal agencies in “as close to real time as practicable” and “in accordance with applicable policies and procedures.”
Effectively, DHS explained that if it was required to share data in real time, it would be unable to scrub out unnecessary and potentially burdensome data, and suggested that the “real time” requirement be changed to “as close to real time as practicable.”
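The tension DHS describes can be sketched in a few lines: a privacy scrub is itself a “modification” and a “delay,” so a literal real-time, no-modification mandate precludes it. The field names and regex here are illustrative assumptions, not anything from DHS’s actual intake system.

```python
# A sketch of the tension DHS describes: a privacy scrub is itself a
# "modification" and a "delay," so a literal real-time, no-modification
# mandate precludes it. Field names and the regex are illustrative
# assumptions, not DHS's actual intake rules.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(indicator: dict) -> dict:
    """DHS's preferred intake step: mask personally identifiable
    information unrelated to the threat before sharing onward."""
    cleaned = dict(indicator)
    cleaned["context"] = EMAIL.sub("[REDACTED]", cleaned["context"])
    return cleaned

def share_real_time(indicator: dict) -> dict:
    """What a strict 'no delay or modification' mandate requires:
    pass the indicator through untouched."""
    return indicator

raw = {"signature": "evil.example", "context": "victim reported by alice@corp.com"}
scrubbed = scrub(raw)             # PII masked, at the cost of a processing delay
forwarded = share_real_time(raw)  # PII travels onward with the indicator
```

The two functions can’t be reconciled: any intake rule that masks data takes time and changes the record, which is exactly what the “not subject to any delay [or] modification” language forbids.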
But compare DHS’s concerns with the actual language added to the description of the information-sharing portal (the new language is in italics).
(3) REQUIREMENTS CONCERNING POLICIES AND PROCEDURES.—Consistent with the guidelines required by subsection (b), the policies and procedures developed and promulgated under this subsection shall—
(A) ensure that cyber threat indicators shared with the Federal Government by any entity pursuant to section 104(c) through the real-time process described in subsection (c) of this section—
(i) are shared in an automated manner with all of the appropriate Federal entities;
(ii) are only subject to a delay, modification, or other action due to controls established for such real-time process that could impede real-time receipt by all of the appropriate Federal entities when the delay, modification, or other action is due to controls—
(I) agreed upon unanimously by all of the heads of the appropriate Federal entities;
(II) carried out before any of the appropriate Federal entities retains or uses the cyber threat indicators or defensive measures; and
(III) uniformly applied such that each of the appropriate Federal entities is subject to the same delay, modification, or other action; and
This section permits any one of the “appropriate Federal entities” to veto such a scrub. Presumably, the language only exists in the bill because one of them has already vetoed it. NSA (in the guise of “appropriate Federal entity” DOD) would be the one that would scare people, but such a veto would be equally likely to come from FBI (in the guise of “appropriate Federal entity” DOJ), and given Tom Cotton’s efforts to send this data even more quickly to FBI, that’s probably who vetoed it.
If you had any doubts that the Intelligence Community is ordering up what it wants in this bill, the language permitting it a veto over privacy protections should put them to rest.
On top of NSA and FBI’s veto authority, there’s an intentional logical problem here. DHS is one of the “appropriate Federal entities,” but DHS is also the entity that would presumably do the scrub. Yet because any such control must be carried out before any appropriate Federal entity (including DHS itself) retains or uses the data, it’s not clear how DHS could hold the data long enough to scrub it.
In short, this seems designed to lead people to believe there might be a scrub (or rather, that under CISA, DHS would continue the privacy scrub it currently performs, though it is just beginning to do so automatically) when, for several reasons, the bill seems to rule one out. And ruled out because one “appropriate Federal entity” (like I said, I suspect FBI) plans to veto such a plan.
So it has taken this Manager’s Amendment to explain why we need CISA: to make sure that DHS doesn’t do the privacy scrubs it is currently doing.
I’ll explain in a follow-up post why it would be so important to eliminate DHS’ current scrub on incoming data.