I got distracted reading two pieces this morning. This great Andrew O’Hehir piece, on how those attacking Edward Snowden and Glenn Greenwald ought to consider the lesson of Justice Louis Brandeis’ dissent in Olmstead.
In the famous wiretapping case Olmstead v. United States, argued before the Supreme Court in 1928, Justice Louis Brandeis wrote one of the most influential dissenting opinions in the history of American jurisprudence. Those who are currently engaged in what might be called the Establishment counterattack against Glenn Greenwald and Edward Snowden, including the eminent liberal journalists Michael Kinsley and George Packer, might benefit from giving it a close reading and a good, long think.
Brandeis’ understanding of the problems posed by a government that could spy on its own citizens without any practical limits was so far-sighted as to seem uncanny. (We’ll get to that.) But it was his conclusion that produced a flight of memorable rhetoric from one of the most eloquent stylists ever to sit on the federal bench. Government and its officers, Brandeis argued, must be held to the same rules and laws that command individual citizens. Once you start making special rules for the rulers and their police – for instance, the near-total impunity and thick scrim of secrecy behind which government espionage has operated for more than 60 years – you undermine the rule of law and the principles of democracy.
“Our Government is the potent, the omnipresent teacher,” Brandeis concluded. “For good or for ill, it teaches the whole people by its example. Crime is contagious. If the Government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy. To declare that in the administration of the criminal law the end justifies the means — to declare that the Government may commit crimes in order to secure the conviction of a private criminal — would bring terrible retribution.”
And this more problematic Eben Moglen piece talking about how Snowden revealed a threat to democracy we must now respond to.
So [Snowden] did what it takes great courage to do in the presence of what you believe to be radical injustice. He wasn’t first, he won’t be last, but he sacrificed his life as he knew it to tell us things we needed to know. Snowden committed espionage on behalf of the human race. He knew the price, he knew the reason. But as he said, only the American people could decide, by their response, whether sacrificing his life was worth it.
So our most important effort is to understand the message: to understand its context, purpose, and meaning, and to experience the consequences of having received the communication.
Even once we have understood, it will be difficult to judge Snowden, because there is always much to say on both sides when someone is greatly right too soon.
I raise them in tandem here because both address the threat of spying to something called democracy. And the second piece raises it amid the context of American Empire (he compares the US to the Roman decline into slavery).
I raise them here for two reasons.
First, because neither directly notes that Snowden claimed he leaked the documents to give us a choice, the “chance to determine if it should change itself.”
“For me, in terms of personal satisfaction, the mission’s already accomplished,” he said. “I already won. As soon as the journalists were able to work, everything that I had been trying to do was validated. Because, remember, I didn’t want to change society. I wanted to give society a chance to determine if it should change itself.”
“All I wanted was for the public to be able to have a say in how they are governed,” he said. “That is a milestone we left a long time ago. Right now, all we are looking at are stretch goals.”
Snowden, at least, claims to have contemplated the possibility that, given a choice, we won’t change how we’re governed.
And neither O’Hehir nor Moglen contemplates the state we’re currently in, in which what we call democracy is choosing to expand surveillance in response to Snowden’s disclosures.
Admittedly, the response to Snowden is not limited to HR 3361. I have long thought a more effective response might (or might not!) be found in the courts — that is, if the legal process does not get pre-empted by legislation. I have long thought the pressure on Internet companies would be one of the most powerful engines of change, not our failed democratic process.
But as far as Congress is concerned, our stunted legislative process has started down the road of expanding surveillance in response to Edward Snowden.
And that’s where I find Moglen useful but also problematic.
He notes that the surveillance before us is not just part of domestic control (indeed, he actually pays less attention to the victims of domestic surveillance than I might have, but his is ultimately a technical argument), but also of Empire.
While I don’t think it’s the primary reason driving the democratic response to Snowden to increase surveillance (I think that also stems from the Deep State’s power and the influence of money on Congress, though many of the surveillance supporters in Congress are also supporting a certain model of US power), I think far too many people act on surveillance out of either explicit or implicit beliefs about the role of US hegemony.
There are some very rational self-interested reasons for Americans to embrace surveillance.
For the average American, there’s the pride that comes from living in the most powerful country in history, all the more so now that that power is under attack, and perhaps the belief that “Us” have a duty to take it to “Them” who currently threaten our power. And while most won’t acknowledge it, even the declining American standard of living still relies on our position atop the world power structure. We get cheap goods because America is the hegemonic power.
To the extent that spying on the rest of the world serves to shore up our hegemonic position then, the average American might well have reason to embrace the spying, because it keeps them in flat screen TVs.
But that privilege is enjoyed by just some in America. Moglen, tellingly, talks a lot about slavery but says nothing about Jim Crow or the other instruments of domestic oppression that have long used authoritarian measures against targeted populations to protect white male power. Looked at not against the history of a slavery that is past, but rather against the continuity of a history in which some people — usually poor and brown and/or female — don't participate in the American “liberty” and “privacy” Moglen celebrates, our spying on the rest of the world is more of the same: a difference in reach but not in kind. Our war on drugs and war on terror spying domestically is of a piece with our dragnet internationally, if thus far more circumscribed by law (but that law is expanding, and that will serve existing structures of power!).
But there’s another reason Americans — those of the Michael Kinsley and George Packer class — might embrace surveillance. That’s the notion that American hegemony is, for all its warts, the least bad power out there. I suspect Kinsley and (to a lesser extent) Packer would go further, saying that American power is affirmatively good for the rest of the world. And so we must use whatever it takes to sustain that power.
It sounds stupid when I say it that way. I’m definitely oversimplifying the thought process involved. Still, it is a good faith claim: that if the US curtails its omnipresent dragnet and China instead becomes the dominant world power (or, just as likely, global order dissolves into chaos), we’ll all be worse off.
I do think there’s something to this belief, though it suppresses the other alternative — that the US could use this moment to improve the basis from which the US exercises its hegemony rather than accept the increasingly coercive exercise of our power — or better yet use the twilight of our hegemony to embrace something more fair (and also something more likely to adequately respond to the global threat of climate change). But I do believe those who claim US hegemony serves the rest of the world believe it fairly uncritically.
One more thing. Those who believe that American power is affirmatively benign may be inclined to think the old ways of ensuring that power — which include a docile press — are justified. As much as journalism embraced an adversarial self-image after Watergate, the fundamentally complicit role of journalism really didn’t change for most. Thus, there remains a culture of journalism in which it is justified to tell stories to the American people — and the rest of the world — to sustain American power.
One of those stories, for example, is the narrative of freedom that Moglen embraces.
That is, for those who believe it is worth doing whatever it takes to sustain the purportedly benign American hegemon, it would be consistent to also believe that journalists must do whatever it takes to sustain the purportedly benign system of (white male) power domestically, which we call democracy but which doesn’t actually serve the needs of average Americans.
And for better or worse, for those who embrace that power structure, whether domestically and/or internationally, expanding surveillance is rational, so long as you ignore the collateral damage.
Update: I’ve tempered my critique of Packer because I agree he doesn’t embrace this role of journalist as narrative-teller as much.
DOJ just announced the indictment of 5 Chinese People’s Liberation Army hackers (complete with Most Wanted posters) for breaking into a bunch of companies — and the United Steel Workers — in Pittsburgh.
I’ll have more to say about the indictment later, but for now there are two parts of it I find to be particularly interesting.
The indictment was brought by the US Attorney for Western PA, David Hickton, not EDVA (the Defense Industry) or SDNY (Wall Street) where the US complains more loudly about hacking. The victims include Pittsburgh’s most important companies — US Steel, Westinghouse, and Alcoa. After watching the presser, I would be shocked if Hickton is not planning on running for higher office in Western PA.
But there’s another detail about Western PA that may be of interest. In addition to these blue chip industrial companies, Pittsburgh is also home to Carnegie Mellon’s CERT, a public-private venture on fighting cyberthreats. That is, I suspect this indictment came out of Pittsburgh because it has the facilities to investigate such crimes.
But the other interesting aspect of this indictment coming out of Pittsburgh is that — at least judging from the charged crimes — there is far less of the straight out IP theft we always complain about with China.
In fact, much of the charged activity involves stealing information about trade disputes — the same thing NSA engages in all the time. Here are the charged crimes committed against US Steel and the United Steelworkers, for example.
In 2010, U.S. Steel was participating in trade cases with Chinese steel companies, including one particular state-owned enterprise (SOE-2). Shortly before the scheduled release of a preliminary determination in one such litigation, Sun sent spearphishing e-mails to U.S. Steel employees, some of whom were in a division associated with the litigation. Some of these e-mails resulted in the installation of malware on U.S. Steel computers. Three days later, Wang stole hostnames and descriptions of U.S. Steel computers (including those that controlled physical access to company facilities and mobile device access to company networks). Wang thereafter took steps to identify and exploit vulnerable servers on that list.
In 2012, USW was involved in public disputes over Chinese trade practices in at least two industries. At or about the time USW issued public statements regarding those trade disputes and related legislative proposals, Wen stole e-mails from senior USW employees containing sensitive, non-public, and deliberative information about USW strategies, including strategies related to pending trade disputes. USW’s computers continued to beacon to the conspiracy’s infrastructure until at least early 2013.
This is solidly within the ambit of what NSA does in other countries. (Recall, for example, how we partnered with the Australians to obtain information to help us in a clove cigarette trade dispute.)
I in no way mean to minimize the impact of this spying on USS and USW. I also suspect they were targeted because the two organizations partner on an increasingly successful manufacturing organization. Which would still constitute a fair spying target, but also one against which China has acute interests.
But that still doesn’t make it different from what the US does when it engages in spearphishing — or worse — to steal information to help us in trade negotiations or disputes.
We’ve just criminalized something the NSA does all the time.
Update: Adding, one other reason they’re probably bringing this indictment with industrials as victims is that their information is not as sensitive as that of Defense Contractor or Wall Street victims.
Update: These guys are named in Mandiant’s most recent report on China’s hacking. So that’s a lot of what they used for the indictment, presumably. But they indicted with companies that aren’t as sensitive as some of Mandiant’s other clients.
Update: Correction: only Wang Dong was on Mandiant’s list, meaning 2 of their ID’ed people were not indicted.
The New Yorker has a weird interview with Keith Alexander. The weirdness stems from Alexander’s wandering answers, which may, in turn, stem from the fact that the interview was not done by an NSA beat reporter. Such interviews seem to flummox NSA insiders.
But beyond all the rambling about Jeopardy and “free vowels” and disingenuous claims (and silences) about past terrorist events, ultimately Keith Alexander wants us to know that we are at greater risk as he steps down after more than 8 years of protecting us.
His logic for that is not that terrorists struck the Boston Marathon last year, in spite of NSA apparently collecting on them but not reviewing the collection — he doesn’t even mention that.
Rather, it’s that the number of terrorist attacks is going up globally. The US has thus far avoided such attacks (ignoring hate crimes and the Marathon attack), which he points to as proof our spying is working. But he also points to it as proof that we’re due.
There are people on one side saying that these N.S.A. programs could have stopped these plots. And then there are people who dispute that.
We know we didn’t stop 9/11. People were trying, but they didn’t have the tools. This tool, we believed, would help them. Let’s look at what’s happening right now. You ought to get this from the START Program at the University of Maryland. They have the statistics on terrorist attacks. 2012 and 2013. The number of terrorist attacks in 2012—do you know how many there were globally?
Six thousand seven hundred and seventy-one. Over ten thousand people killed. In 2013, it would grow to over ten thousand terrorist attacks and over twenty thousand people killed. Now, how did we do in the United States and Europe? How do you feel here? Safe, right? I feel pretty safe.
So think about how secure our nation has been since 9/11. We take great pride in it. It’s not because of me. It’s because of those people who are working, not just at N.S.A. but in the rest of the intelligence community, the military, and law enforcement, all to keep this country safe. But they have to have tools. With the number of attacks that are coming, the probability, it’s growing—
I’m sorry, could you say that once more?
The probability of an attack getting through to the United States, just based on the sheer numbers, from 2012 to 2013, that I gave you—look at the statistics. If you go from just eleven thousand to twenty thousand, what does that tell you? That’s more. That’s fair, right?
I don’t know. I think it depends what the twenty thousand—
—deaths. People killed. From terrorist attacks. These aren’t my stats. The University of Maryland does it for the State Department.
I’ll look at them. I will. So you’re saying that the probability of an attack is growing.
The probability is growing. What I saw at N.S.A. is that there is a lot more coming our way. Just as someone is revealing all the tools and the capabilities we have. What that tells me is we’re at greater risk. I can’t measure it. You can’t say, Well, is that enough to get through? I don’t know. It means that the intel community, the military community, and law enforcement are going to work harder.
Since Alexander invited us, let’s see what the START data say, shall we? Here’s what they tell us:
According to the annex, the 10 countries that experienced the most terrorist attacks in 2013 are the same as those that experienced the most terrorist attacks in 2012.
Although terrorist attacks occurred in 93 different countries, they were heavily concentrated geographically. More than half of all attacks (57%), fatalities (66%), and injuries (73%) occurred in Iraq, Pakistan and Afghanistan. By a wide margin, the highest number of fatalities (6,378), attacks (2,495) and injuries (14,956) took place in Iraq. The average lethality of attacks in Iraq was 40 percent higher than the global average and 33 percent higher than the 2012 average in Iraq.
The US hasn’t been attacked. But attacks are mushrooming in Iraq, Pakistan, and Afghanistan. These are not only the places where we’ve been fighting the war on terror the longest and most directly (places where Alexander has been at the forefront of the fight, even before he took over at NSA); they are also the places overseas that the NSA uses to legitimize its global reach.
Yet 13 or 11 years of concentrated spying — of collect it all — in those places has not eliminated terrorism. On the contrary, terrorism is now getting worse.
And now they serve as both the proof that spying is working and that spying is more necessary than ever.
Rather than evidence that the War on Terror is failing.
We shouldn’t be surprised that we’re losing a war in which Alexander was one of the longest-tenured generals (though I don’t think he bears primary responsibility for the policy decisions that have led to this state). After all, last year, Alexander said that, also under his watch, we had been plundered like a colony via cyberattacks. He seems to think he lost both the war on terror and the war on cyberattacks.
Which, if you’re invested in Wall Street, ought to alarm you. Because that’s where Keith Alexander is headed to wage war next.
Man, I knew Keith Alexander was going to cash in after he retired. And I probably would have placed all my chips on him profiting off his cyber fearmongering.
Former National Security Agency chief Gen. Keith Alexander is launching a consulting firm for financial institutions looking to address cybersecurity threats, POLITICO has learned.
Less than two months since his retirement from the embattled agency at the center of the Edward Snowden leak storm, the retired four-star general is setting up a Washington-based operation that will try to attract clients based on his four decades of experience in the military and intelligence — and the continued levels of access to senior decision-makers that affords.
But the part of this story that even I couldn’t have predicted — but makes so much sense it brings tears to my eyes — is that he’s shacking up with Promontory Financial Group, the revolving-door regulator-for-hire that has been caught underestimating its clients’ crimes for big money.
Alexander will lease office space from the global consulting firm Promontory Financial Group, which confirmed in a statement on Thursday that it plans to partner with him on cybersecurity matters.
“He and a firm he’s forming will work on the technical aspects of these issues, and we on the risk-management compliance and governance elements,” said Promontory spokesman Chris Winans.
I’m impressed, Lying Keith: You’ve done my very low expectations even one better!
The White House Cybersecurity Coordinator, Michael Daniel, has a post purporting to lay out “established principles” on when the Administration would and would not disclose software and hardware vulnerabilities.
I’ve got a more thorough read below the rule, but I want to focus on one particular line. Daniel describes the downside of disclosing vulnerabilities as losing intelligence.
Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack [sic] stop the theft of our nation’s intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks.
That is, Daniel lays out three threats — terrorism, “hackers or other adversaries,” and IP thieves — that we purportedly need to exploit vulnerabilities to combat.
The inclusion of terrorism is not a surprise. That’s the excuse NSA has been using since last June to justify its work.
Cybersecurity (“hackers or other [presumably far more threatening] adversaries”) is the threat that NSA was focused on until such time as it needed to chant terror terror terror to get people to buy into the dragnet. Not only is it not a surprise, but it’s probably the most urgent reason to use vulnerabilities (even if the threat in question is really far more serious than hackers).
But IP thieves?
To be fair, by this Daniel may mean Lockheed-Martin’s intellectual property, by which he really means the intellectual property that we fetishize as private property but that is really national security. (I’ve got a question in with the White House on this point.) But stated as he does, it could as easily mean Monsanto and Pfizer and even Disney.
In fact, he may well mean that. As I noted, in its original statement, the Administration made quite clear they would use Zero Days for law enforcement as well as national security purposes. Moreover, as I have also noted, NSA rewrote the legally mandated minimization standards in its secret procedures to equate threats to property with threats to life and body, thereby permitting itself to keep data that reveals threats to property that are not otherwise evidence of crime indefinitely (with DIRNSA approval).
And all that’s assuming only NSA will exploit Zero Days. There’s no reason to assume that the FBI (and other law enforcement agencies, including DEA) aren’t using them.
I’m not sure that’s a bad thing either. Several great security experts recently endorsed using hacks for law enforcement, though insisted that overall security must not be compromised.
That’s the point though: how low is the bar for exploiting vulnerabilities? And if they are going to be used for law enforcement purposes — to chase IP thieves rather than threats to our nation — why isn’t it more public?
I have a piece over at The Week on the unusually credible denial the government issued on Friday, claiming they did not know of the Heartbleed vulnerability until earlier this month. In it, I note that Obama adopted a much lower bar for using software vulnerabilities than his hand-picked Review Group recommended in December. Most troubling, Obama admits he will use exploits for law enforcement, in addition to national security.
But the announcement’s discussion of the interagency review also made clear that the process will, sometimes, approve such a use — which means that the next Heartbleed could be exploited by the NSA. Furthermore, the standard the administration claims to have adopted — “a clear national security or law enforcement need” (italics mine) — is lower than the “urgent and significant national security priority” recommended by the Review Group.
In other words, in very clear language, the government has confessed that it does and will continue to keep secret Heartbleed-style vulnerabilities not just for national security purposes, but also for mere law enforcement.
The idea that the government might hack in the name of law enforcement is not new.
As WSJ reported last month, DOJ is trying to get the Judicial Conference to approve language allowing it to get warrants to hack in multiple districts at once.
The government’s push for rule changes sheds light on law enforcement’s use of remote hacking techniques, which are being deployed more frequently but have been protected behind a veil of secrecy for years.
In documents submitted by the government to the judicial system’s rule-making body this year, the government discussed using software to find suspected child pornographers who visited a U.S. site and concealed their identity using a strong anonymization tool called Tor.
The government’s hacking tools—such as sending an email embedded with code that installs spying software — resemble those used by criminal hackers. The government doesn’t describe these methods as hacking, preferring instead to use terms like “remote access” and “network investigative techniques.”
Right now, investigators who want to search property, including computers, generally need to get a warrant from a judge in the district where the property is located, according to federal court rules.
In a computer investigation, that might not be possible, because criminals can hide behind anonymizing technologies. In cases involving botnets—groups of hijacked computers—investigators might also want to search many machines at once without getting that many warrants.
Some judges have already granted warrants in cases when authorities don’t know where the machine is. But at least one judge has denied an application in part because of the current rules. The department also wants warrants to be allowed for multiple computers at the same time, as well as for searches of many related storage, email and social media accounts at once, as long as those accounts are accessed by the computer being searched.
I especially applaud the way WSJ highlighted DOJ’s complaints about Orin Kerr calling what they do hacking.
Even more timely, a team of computer security experts — Steve Bellovin, Matt Blaze, Sandy Clark, and Susan Landau — just published a paper arguing that legal hacking is a better means to conduct law enforcement collection than a CALEA-type solution. But they argue that the government can and must achieve this law enforcement objective without compromising the security of the network.
¶162 As we alluded to earlier, this is a clash of competing social goods between the security obtained by patching as quickly as possible and the security obtained by downloading the exploit to enable the wiretap to convict the criminal. Although there are no easy answers, we believe the answer is clear. In a world of great cybersecurity risk, where each day brings a new headline of the potential for attacks on critical infrastructure,239 where the Deputy Secretary of Defense says that thefts of intellectual property “may be the most significant cyberthreat that the United States will face over the long term,”240 public safety and national security are too critical to take risks and leave vulnerabilities unreported and unpatched. We believe that law enforcement should always err on the side of caution in deciding whether to refrain from informing a vendor of a vulnerability. Any policy short of full and immediate reporting is simply inadequate. “Report immediately” is the policy that any crime-prevention agency should have, even though such an approach will occasionally hamper an investigation.241
¶163 Note that a report immediately policy does not foreclose exploitation of the reported vulnerability by law enforcement. Vulnerabilities reported to vendors do not result in immediate patches; the time to patch varies with each vendor’s patch release schedule (once per month, or once every six weeks is common), but, since vendors often delay patches,242 the lifetime of a vulnerability is often much longer. Research shows that the average lifetime of a zero-day exploit is 312 days.243 Furthermore, users frequently do not patch their systems promptly, even when critical updates are available.24
¶164 Immediate reporting to the vendor of vulnerabilities considered critical will result in a shortened lifetime for particular operationalized exploits, but it will not prevent the use of operationalized exploits. Instead, it will create a situation in which law enforcement is both performing criminal investigations using the wiretaps enabled through the exploits, and crime prevention through reporting the exploits to the vendor. This is clearly a win/win situation.
¶166 The tension between exploitation and reporting can be resolved if the government follows both paths, actively reporting and working to fix even those vulnerabilities that it uses to support wiretaps. As we noted, the reporting of vulnerabilities (to vendors and/or to the public) does not preclude exploiting them.247 Once a vulnerability is reported, there is always a lead time before a “patch” can be engineered, and a further lead time before this patch is deployed to and installed by future wiretap targets. Because there is an effectively infinite supply of vulnerabilities in software platforms,248 provided new vulnerabilities are found at a rate that exceeds the rate at which they are repaired, reporting vulnerabilities need not compromise the government’s ability to conduct exploits. By always reporting, the government investigative mission is not placed in conflict with its crime prevention mission. In fact, such a policy has the almost paradoxical effect that the more active the law enforcement exploitation activity becomes, the more zero-day vulnerabilities are reported to and repaired by vendors.
They go on to propose a legal regime that can provide clear guidance on which vulnerabilities should be reported, even analogizing the emergency period in which an agency can wiretap before getting a warrant.
But here’s the thing: NSA’s Bull Run program got reported in September, and since then the government has remained coy about whether it uses or even seeds vulnerabilities in software, even though anyone paying attention knew it does. It took claims that the government had been using the Heartbleed vulnerability for two years for the Administration to admit, tacitly, the earlier reports were correct.
The kind of legal regime Bellovin et al recommend requires that this law enforcement function operate within a legal — and therefore publicly acknowledged — framework, rather than piggybacking on the NSA’s executive authorities in secret.
While Friday’s admission is a start, and while it may be true that hacking presents a better solution to law enforcement needs than CALEA, these questions need to be openly discussed.
Otherwise, DOJ is not only hacking — in the dictionary definition Orin Kerr applied — but hacking in the reckless manner that DOJ itself prosecutes.
Yesterday, I noted that ODNI is withholding a supplemental opinion approved on August 20, 2008 that almost certainly approved the tracking of “correlations” among the phone dragnet (though this surely extends to the Internet dragnet as well).
I pointed out that documents released by Edward Snowden suggest the use of correlations extends well beyond the search for “burner” phones.
At almost precisely the same time, Snowden was testifying to the EU. The first question he answered served to clarify what “fingerprints” are and how XKeyscore uses them to track a range of innocent activities. (This starts after 11:16, transcription mine.)
It has been reported that the NSA’s XKeyscore for interacting with the raw signals intercepted by mass surveillance programs allow for the creation of something that is called “fingerprints.”
I’d like to explain what that really means. The answer will be somewhat technical for a parliamentary setting, but these fingerprints can be used to construct a kind of unique signature for any individual or group’s communications which are often comprised of a collection of “selectors” such as email addresses, phone numbers, or user names.
This allows State Security Bureaus to instantly identify, out of all the communications they intercept in the world, the movements and activities associated with that particular fingerprint: you, your computers or other devices, your personal Internet accounts, or even key words or other uncommon strings that indicate an individual or group. Much like a fingerprint that you would leave on the handle of your door or the steering wheel of your car, and so on.
However, though that has been reported, that is the smallest part of the NSA’s fingerprinting capability. You must first understand that any kind of Internet traffic that passes before these mass surveillance sensors can be analyzed in a protocol-agnostic manner — metadata and content, both. And it can be, today, right now, searched not only with very little effort via a complex regular expression, which is a type of shorthand programming, but also via any algorithm an analyst can implement in popular high-level programming languages. Now, this is very common for technicians. It’s not a significant workload, it’s quite easy.
This provides a capability for analysts to do things like associate unique identifiers assigned to untargeted individuals via unencrypted commercial advertising networks, through cookies or other trackers — common tracking means used by businesses every day on the Internet — with personal details, such as individuals’ precise identity, their geographic location, their political affiliations, their place of work, their computer operating system and other technical details, their sexual orientation, their personal interests, and so on and so forth. There are very few practical limitations to the kind of analysis that can be technically performed in this manner, short of the actual imagination of the analysts themselves.
And this kind of complex analysis is in fact performed today using these systems. I can say, with authority, that the US government’s claims that “keyword filters,” searches, or “about” analysis have not been performed by its intelligence agencies are, in fact, false. I know this because I have personally executed such searches with the explicit authorization of US government officials. And I can personally attest that these kinds of searches may scrutinize communications of both American and European Union citizens without involvement of any judicial warrants or other prior legal review.
What this means in non-technical terms, more generally, is that I, an analyst working at NSA — or, more concerningly, an analyst working for a more authoritarian government elsewhere — can, without the issue of any warrant, create an algorithm that for any given time period, with or without human involvement, sets aside the communications of not only targeted individuals but even a class of individuals, based on mere indications of an activity — or even just indications of an activity that I as the analyst don’t approve of — something that I consider to be nefarious, or to indicate nefarious thoughts, or pre-criminal activity, even if there’s no evidence or indication that that is in fact what’s happening, that it’s not innocent behavior.
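To make the testimony above concrete, here is a minimal sketch of the “fingerprint” idea Snowden describes: a rule that combines a set of selectors with an arbitrary regular expression run over intercepted records. All names, field layouts, and patterns here are hypothetical illustrations, not actual XKeyscore syntax.

```python
import re

# Hypothetical fingerprint: a set of selectors plus a regex.
# The selector values and the ad-network cookie pattern are invented
# for illustration only.
SELECTORS = {"alice@example.com", "+15551230000"}
PATTERN = re.compile(r"tracking_id=[0-9a-f]{16}")  # e.g. an ad-network cookie

def matches_fingerprint(record: dict) -> bool:
    """Return True if a traffic record hits any selector or the regex."""
    if record.get("sender") in SELECTORS or record.get("recipient") in SELECTORS:
        return True
    return bool(PATTERN.search(record.get("payload", "")))

# Run the rule over a batch of intercepted records.
hits = [r for r in [
    {"sender": "bob@example.org", "payload": "tracking_id=deadbeef00112233"},
    {"sender": "carol@example.org", "payload": "hello"},
] if matches_fingerprint(r)]
print(len(hits))  # 1
```

The point of the sketch is Snowden’s own: once analysts can attach arbitrary code to the intercept stream, the regex is interchangeable with any algorithm a high-level language can express.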
In the NYT, David Sanger describes US efforts to develop some common understanding with China over cyberattacks by briefing it on what our escalation process would be. Unsurprisingly, China (which hasn’t had a massive data leak as an excuse to admit to information now in the public domain) has not reciprocated.
And while Sanger makes it clear the US is still not admitting to Stuxnet, his US sources are coming to understand that the rationalizations we use to excuse our spying aren’t really as meaningful as we like to tell ourselves.
Mr. Obama told the Chinese president that the United States, unlike China, did not use its technological powers to steal corporate data and give it to its own companies; its spying, one of Mr. Obama’s aides later told reporters, is solely for “national security priorities.” But to the Chinese, for whom national and economic security are one, that argument carries little weight.
“We clearly don’t occupy the moral high ground that we once thought we did,” said one senior administration official.
I especially love the spectacle of an SAO coming to grips with this, but doing so anonymously.
Yet this anonymous admission will not stop the US from imposing such double standards. On Friday, the US Trade Representative issued its yearly report on barriers to trade in telecom and related industries. (Reuters reported on the report here.) None of these complaints are explicitly about the NSA. And some of USTR’s demands — that Turkey stop shutting down services like Twitter — would make it harder for other countries to spy on their own citizens.
But many of the USTR’s complaints single out measures that are either deliberately meant to undermine NSA’s spying advantages, or would have the effect of doing so. So these complaints also amount to whining that other countries are making NSA’s job harder.
Consider some of the complaints against China, whose top equipment manufacturer, Huawei, the US has excluded not only from the US but also from Korea and Australia.
It complains about China’s limits on telecom providers — and pretends this is exclusively a trade issue, not a national security issue.
Moreover, the Chinese Government still owns and controls the three major basic telecom operators in the telecommunications industry, and appears to see these entities as important tools in broader industrial policy goals, such as promoting indigenous standards for network equipment.
USTR criticizes China’s categorization of businesses that can be used for spying — such as cloud computing firms — as telecoms subject to licensing restrictions.
China’s equity restrictions on foreign participation constitute a major impediment to market access in China. These restrictions are compounded by China’s broad interpretation of services requiring a telecommunications license (and thus subject to equity caps) and narrow interpretation of the specific services foreign firms can offer in these sub-sectors.
Several VAS definitions in the draft Catalog also raise trade restriction concerns. First, the draft Catalog created a new category of “Internet Resource Collaboration Services” that appears to cover all aspects of cloud computing. (Cloud computing is a computer service or software delivery model, and should not be misclassified as a telecommunications service.) MIIT’s approach to cloud computing generally raises a host of broad concerns. Second, the draft Catalog significantly expanded the definition of “Information Services” to include software application stores, software delivery platforms, social networking websites, blogs, podcasts, computer security products, and a number of other Internet and computing services. These services simply use the Internet as a platform for providing business and information to customers, and thus should not be considered as telecommunications services.
USTR complains about Chinese requirements for encryption both for information systems tied to critical infrastructure.
Starting in 2012, both bilaterally and during meetings of the WTO’s Committee on Technical Barriers to Trade, the United States raised its concerns with China about framework regulations for information security in critical infrastructure known as the Multi-Level Protection Scheme (MLPS), first issued in June 2007 by the Ministry of Public Security (MPS) and the Ministry of Industry and Information Technology (MIIT). The MLPS regulations put in place guidelines to categorize information systems according to the extent of damage a breach in the system could pose to social order, public interest, and national security. The MLPS regulations also appear to require buyers to comply with certain information security technical regulations and encryption regulations that are referenced within the MLPS regulations. If China issues implementing rules for the MLPS regulations and applies the rules broadly to commercial sector networks and IT infrastructure, they could adversely affect sales by U.S. information security technology providers in China.
And for providers on its 4G network.
At the end of 2011 and into 2012, China released a Chinese government-developed 4G Long-Term Evolution (LTE) encryption algorithm known as the ZUC standard. The European Telecommunication Standards Institute (ETSI) 3rd Generation Partnership Project (3GPP) had approved ZUC as a voluntary LTE encryption standard in September 2011. According to U.S. industry reports, MIIT, in concert with the State Encryption Management Bureau (SEMB), informally announced in early 2012 that only domestically developed encryption algorithms, such as ZUC, would be allowed for the network equipment and mobile devices comprising 4G TD-LTE networks in China. It also appeared that burdensome and invasive testing procedures threatening companies’ sensitive intellectual property could be required.
In response to U.S. industry concerns, USTR urged China not to mandate any particular encryption standard for 4G LTE telecommunications equipment, in line with its bilateral commitments and the global practice of allowing commercial telecommunications services providers to work with equipment vendors to determine which security standards to incorporate into their networks.
Finally, USTR dubs China’s limits on outside VoIP services a trade restriction.
Restrictions on VoIP services imposed by certain countries, such as prohibiting VoIP services, requiring a VoIP provider to partner with a domestic supplier, or imposing onerous licensing requirements have the effect of restricting legitimate trade or creating a preference for local suppliers, typically former monopoly suppliers.
All of these complaints, of course, can be viewed narrowly as a trade problem. But the underlying motivation on China’s part is almost certainly about keeping the US out of its telecom networks, both to prevent spying and to sustain speech restraints behind the Great Firewall.
It’s not just China about which USTR complains. It issues similar dual-purpose (trade and spying) complaints against India and Colombia, among others.
And of course, it finds European plans to require intra-EU transit limits — a plan adopted largely to combat US spying — a “draconian” trade restriction.
In particular, Deutsche Telekom AG (DTAG), Germany’s biggest phone company, is publicly advocating for EU-wide statutory requirements that electronic transmissions between EU residents stay within the territory of the EU, in the name of stronger privacy protection. Specifically, DTAG has called for statutory requirements that all data generated within the EU not be unnecessarily routed outside of the EU;
The United States and the EU share common interests in protecting their citizens’ privacy, but the draconian approach proposed by DTAG and others appears to be a means of providing protectionist advantage to EU-based ICT suppliers.
Meanwhile, even as I was writing this, one of the EU’s top data privacy figures, Paul Nemitz, floated the reverse accusation against America: that its NSA spying is a trade impediment to European businesses trying to do business in the US.
[Update at end of article.—Rayne 6:45 pm EST]
Between 1030 and 0400 UTC last night or early morning, most of Russia’s GLONASS satellites reported “illegal” or “failure” status. As of this post, they do not appear to be back online.
GLONASS is Russia’s equivalent of GPS, an alternative global navigation satellite system (GNSS) launched and operated by the Russian Aerospace Defense Forces (RADF). It is the only GNSS besides GPS with global capability.
It’s possible that the outage is related either to a new M-class solar storm — the start of which was reported about 48 hours ago — or to the recent X-class solar flare on March 29 at approximately 1700 UTC. The latter event caused a short-term radio blackout about one hour after the flare erupted.
But there is conjecture that GLONASS’ outage is human in origin and possibly deliberate. The absence of any reported outage news regarding GPS and other active satellite systems suggests this is quite possible, given the unlikelihood that technology used in GLONASS differs dramatically from that used in other satellite systems.
At least one observer mentioned that a monitoring system tripped at 21:00 UTC — 00:00 GLONASS system time. The odds of a natural event like a solar storm tripping a system at exactly the top of the hour are ridiculously slim, especially since radiation ejected from the new M-class storm may not reach its peak effect on earth for another 24-48 hours.
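As a back-of-envelope check on that claim: if the trigger were a natural event arriving at a uniformly random moment, the chance it lands within a short window of the top of the hour is tiny. The window sizes below are hypothetical choices for illustration.

```python
# Back-of-envelope: chance that a uniformly random moment within an
# hour falls inside a small window around the top of the hour.
def prob_top_of_hour(window_seconds: float) -> float:
    """Probability a uniformly random moment in an hour lands in the window."""
    return window_seconds / 3600.0

print(round(prob_top_of_hour(1), 6))   # within one second of 21:00:00 -> 0.000278
print(round(prob_top_of_hour(60), 6))  # within one minute -> 0.016667
```

Even a generous one-minute window gives under a 2% chance per hour, which is why an exact top-of-the-hour trip reads more like a scheduled job or system-time rollover than a solar event.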
It’s not clear whether the new GLONASS-M satellite launched March 24th may factor into this situation. There are no English language reports indicating the new satellite was anything but successful upon its release, making it unlikely its integration into the GLONASS network caused today’s outage.
If the outage is based in human activity, the problem may have been caused by:
— an accidental disabling here on earth, though RADF most likely has redundancies to prevent such a large outage;
— deliberate tampering here on earth, though with RADF as operator this seems quite unlikely; or
— deliberate tampering in space, either through scripts sent from earth, or technology installed with inherent flaws.
The last is most likely; and of the two scenarios — scripts sent from earth or flawed installed technology — the former is more likely to cause a widespread outage.
However, if many or all the core operating systems on board the GLONASS satellites had been updated within the last four years – after the discovery of Stuxnet in the wild – it’s not impossible that both hardware and software were compromised with an infection. Nor is it impossible that the same infection was triggered into aggressive action from earth.
Which raises the question: are we in the middle of a cyberwar in space?
UPDATE — 6:45 PM EST—
Sources report the GLONASS satellite network was back online noon-ish Russian time (UTC+4); the outage lasted approximately 11 hours. Unnamed source(s) said the outage was due to the upload of bad ephemeris data — the orbital position data each satellite broadcasts so receivers can locate it in space. An alleged system-wide update with bad data suggests RADF has serious problems with change management, though.
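For illustration, the change-management gate a system-wide bad upload suggests was missing might look like the staged sanity check below: hold new ephemeris data before pushing, and reject the upload if any satellite’s stated position jumps implausibly far from its last known orbit. All identifiers, units, and thresholds here are hypothetical, not anything from RADF’s actual systems.

```python
# Hypothetical staged-upload check: compare each satellite's new stated
# position (km, ECEF-style coordinates) against its last known position
# and fail closed on any implausible jump. Purely illustrative.
def safe_to_push(current: dict, staged: dict, max_delta_km: float = 50.0) -> bool:
    """Return False if any staged position diverges wildly from current orbits."""
    for sat_id, new_pos in staged.items():
        old_pos = current.get(sat_id)
        if old_pos is None:
            return False  # unknown satellite: fail closed
        delta = sum((a - b) ** 2 for a, b in zip(new_pos, old_pos)) ** 0.5
        if delta > max_delta_km:
            return False  # implausible jump: fail closed
    return True

current = {"R01": (19100.0, 0.0, 0.0)}
print(safe_to_push(current, {"R01": (19101.0, 2.0, 0.0)}))  # True
print(safe_to_push(current, {"R01": (0.0, 0.0, 0.0)}))      # False
```

The design point is the fail-closed default: a bad batch gets rejected before it reaches the constellation, rather than being discovered eleven hours into an outage.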
There is speculation the M-class solar storm, summarized at 1452 UTC as an “X-ray Event exceeded M5,” may have impacted GLONASS. However early feedback about radiation ejected by an M-class storm indicated the effects would not reach earth for 24-48 hours after the storm’s eruption.
This post is going to be a general review of the contents of the actual records collection part of the RuppRoge Fake Dragnet Fix, which starts on page 15, though I confess I’m particularly interested in what other uses — besides the phone dragnet — it will be put to.
First, note that this bill applies to “electronic communication service providers,” not telecoms. In addition, it uses neither the language of Toll Records from National Security Letters nor Dialing, Addressing, Routing, or Signalling from Pen Registers. Instead, it uses “records created as a result of communications of an individual or facility.” Also remember that FISC has, in the past, interpreted “facility” to mean “entire telecom switch.” This language might permit a lot of things, but I suspect that one of them is another attempt to end run content collection restrictions on Internet metadata — the same problem behind the hospital confrontation and the Internet dragnet shutdown in 2009. I look forward to legal analysis on whether this successfully provides an out.
The facility language is also troubling in association with the foreign power language of the bill (which already is a vast expansion beyond the terrorism-only targeting of the phone dragnet). Because you could have a telecom switch in contact with a suspected agent of a foreign power and still get a great deal of data, much of it on innocent people. The limitation (at b1B) to querying with “specific identifiers or selection terms” then becomes far less meaningful.
Then add two details from section h, covering the directives the government gives the providers. The government requires the data in the format they want. Section 215 required existing business records, which may have provided providers a way to be obstinate about how they delivered the data (and this may have led to the government’s problems with the cell phone data). But it also says this (in the paragraph providing for compensation I wrote about here):
The Government may provide any information, facilities, or assistance necessary to aid an electronic communications service provider in complying with a directive
Remember, one month ago, Keith Alexander said he’d be willing to trade a phone dragnet fix for what amounts to the ability to partner with industry on cybersecurity. The limits on this bill to electronic communication service providers means it’s not precisely what Alexander wanted (I understand him to want that kind of broad partnership across industries). Still, the endorsement of the government basically going to camp out at a provider makes me wonder if there isn’t some of that. Note, that also may answer my question about when and where NSA would conduct the pizza joint analysis, which would mean there’d still be NSA techs (or contractors) rifling through raw data, but they’d be doing it at the telecoms’ location.
The First Amendment restriction appears more limited than it is in the Section 215 context, though I suspect RuppRoge simply reflects the reality of what NSA is doing now. Both say you can’t investigate an American solely for First Amendment views, but RuppRoge says you can’t get the information for an investigation of an American. Given that RuppRoge eliminates any requirement that this collection be tied to an investigation, it would make it very easy to query a US person selector based on First Amendment issues in the guise of collecting information for another reason. But again, I suspect that’s what the NSA is doing in practice in any case.
Note, too, that RuppRoge borrows the “significant purpose” language from FISA, meaning the government can have a domestic law enforcement goal to getting these records.
RuppRoge then lays out an elaborate certification/directive system that is (as I guessed) modeled on the FISA Amendments Act, but written to be even more Byzantine. It works the same, though: the Attorney General and the Director of National Intelligence submit broad certifications to the FISC, which reviews whether they comply with the general requirements in the bill. The government can also get emergency orders (though for some reason here, as elsewhere, RuppRoge have decided to invent new words in place of the standard ones), though the language is less about emergency and more about timely acquisition of data. Ultimately, there is judicial review, after the fact, except that like FAA, the review is programmatic, not identifier specific.

Significantly, the records the government has to keep only need to comply with selection procedures (which are the new name for targeting procedures) “at the time the directive was issued,” which would seem to eliminate any need to detask over a year if you discover the target isn’t actually in contact with an agent of a foreign power. Also, in the clause permitting the FISC to order data be destroyed if the directives were improper, the description talks about halting production of “records,” but destruction of “information.” That might be more protective (including the destruction of reports based on data) or it might not (requiring only the finished reports be destroyed). Interestingly, this section includes no language affirmatively permitting alert systems, though RuppRoge have made it clear that’s what they intend with the year-long certifications. In addition, those year-long certifications might be used in conjunction with a year-long PRISM order to first search a provider for metadata, then immediately task on content (which would be useful in a cybersecurity context).
The bill also changes the language of minimization procedures, which it calls “civil liberties and privacy protection procedures.” Interestingly, the procedures differ from the standard in Section 215, including both a generalized privacy protection and one limiting receipt and dissemination of “records associated with a specific person.” These might actually be more protective than those in Section 215, or they might not, given that the identifying information (at b1D) excludes things like phone number or email, which clearly identify a specific person but get no protection (this identifying information hearkens back, at least in part, to debates about whether the dragnet minimization procedures complied with the requirement for them in law on this point). In other words, it may provide people more protection, but given the NSA’s claim that it can’t get identity from a phone number, it likely doesn’t consider that data to be protected at all.
I can’t help believing much of this bill was written with cases like Lavabit and the presumed Credo NSL challenges in mind, as it uses language disdainful of legal challenges.
If the judge determines that such petition consists of claims, defenses, or other legal contentions that are not warranted by existing law or consists of a frivolous argument for extending, modifying, or reversing existing law or for establishing new law, the judge shall immediately deny such petition and affirm the directive or any part of the directive that is the subject of such petition and order the recipient to comply with the directive or any part of it.
This seems to completely rule out any constitutional challenge to this law from providers. The bill even allows for emergency acquisition while FISC is reviewing a certification, suggesting RuppRoge don’t want the FISC to conduct any thorough review either. So if this bill were to pass, you can be sure it will remain in place indefinitely.