USA Freedom Act Scofflaw Rosemary Collyer Claims She Can’t Find a Tech Expert

I say this a lot: for a privacy person, I’m actually pretty willing to defend the work of the so-called rubber stamp FISA Court. I’ve reported on some areas — such as location data — where FISC does, or at least used to, require a higher standard of legal process than criminal courts. And I’ve described the diligent efforts various judges — Reggie Walton, especially, but also Colleen Kollar-Kotelly, Thomas Hogan, and John Bates — have made to get NSA to follow the law. That doesn’t mean the court is the way the US should oversee programmatic spying, but it does a better job than it’s usually given credit for.

Not so Rosemary Collyer, whom I predicted would be an awful presiding judge before she got the position. That prediction was proven right in this year’s shitty 702 reauthorization. I laid out at more length here how, in that opinion, Collyer failed to use the levers Bates had created for the court to ensure the NSA follows the law.

But on top of failing to use the tools her predecessors put in place to ensure that FISA (and her court) remains the exclusive means to conduct domestic foreign intelligence surveillance, Collyer did something even more troubling. She failed to consult an amicus — or explain why she didn’t need to — in the process of approving back door searches to be used with collection she knew to include domestic communications. By failing to do that, I have argued, she broke the law, failing to fulfill the requirements of amicus review or explanation mandated by the USA Freedom Act.

I laid all that out here, too, in a post reporting on the request from a bunch of Senators that FISC appoint a technical amicus. As I noted, if Collyer isn’t going to consult amici, then having a tech amicus available isn’t going to help (and had she consulted the most obvious amicus earlier this year, Marc Zwillinger, he likely would have raised the import of the technical questions she seemed not to understand).

I didn’t realize it, but Collyer responded late last month. (h/t Cryptome) She made a remarkably lame excuse for not appointing any tech amici.

We are now actively seeking technical experts who can also act as amici curiae. However, it has not proved to be a simple matter to find appropriate technical expertise. In considering technical advisors we must assess their abilities and qualifications, including their eligibility for security clearances and willingness to abide by attendant obligations regarding reporting of foreign contacts and pre-publication review (which is concerning to some potential candidates). As a result, we expect the process of finding a pool of appropriate technical amici to take some time to complete. Nonetheless, please be assured that this matter is very much on our minds and the court is engaged in continuing outreach.

As I pointed out in my first post on this, Steve Bellovin — who had been selected (and I believe cleared) to serve as technical advisor to PCLOB — would be available given the effective demise of that body. Bellovin co-authored an important paper on precisely the issue Collyer dodged in her upstream opinion: where metadata ends and content begins in a packet.
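To see why that boundary is a genuinely technical question, consider what splitting a packet into “metadata” and “content” actually involves. This is a minimal sketch, not anything from Bellovin’s paper: it parses a fabricated IPv4/TCP packet and separates the routing headers (arguably metadata) from the payload (unambiguously content). Real traffic — with IP options, fragmentation, tunneling, and application-layer headers — is exactly where the line gets contested.

```python
import struct

def split_ipv4_tcp(packet: bytes):
    """Split a raw IPv4/TCP packet into (ip_header, tcp_header, payload).

    Illustrative only: ignores IP options edge cases, fragmentation,
    IPv6, and everything else that makes the real boundary question hard.
    """
    # IPv4: low nibble of the first byte is header length in 32-bit words.
    ihl = (packet[0] & 0x0F) * 4
    ip_header = packet[:ihl]
    # TCP: data offset lives in the high nibble of byte 12 of the TCP header.
    tcp_start = ihl
    data_offset = (packet[tcp_start + 12] >> 4) * 4
    tcp_header = packet[tcp_start:tcp_start + data_offset]
    payload = packet[tcp_start + data_offset:]
    return ip_header, tcp_header, payload

# A fabricated packet: 20-byte IP header, 20-byte TCP header, 16-byte payload.
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 56, 0, 0, 64, 6, 0,
                 b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
tcp = struct.pack("!HHIIBBHHH", 1234, 80, 0, 0, 0x50, 0x02, 1024, 0, 0)
pkt = ip + tcp + b"GET / HTTP/1.1\r\n"
hdr_ip, hdr_tcp, body = split_ipv4_tcp(pkt)
```

Even in this toy case, note that the TCP payload here is an HTTP request line — which itself contains both addressing information and content, which is precisely the ambiguity at stake.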

So I’m pretty unsympathetic with Collyer’s claims the FISC simply can’t find appropriate technical experts, or couldn’t here.

Of course, had she not broken the law — had she at least appointed an amicus for April’s opinion — one of them might have offered up Bellovin’s name or a number of other cleared experts.

So it’s nice she’s paying lip service to the kind of technical expertise that might have helped her avoid the problems in this year’s 702 reauthorization.

But given her other actions, it’s hard to believe it is anything but lip service.


Facebook Doesn’t Need a Probable Cause Search Warrant to Turn Over Ad Data to Robert Mueller

People are shooting off their baby cannons in excitement with the news that Facebook turned over information to Robert Mueller that they didn’t turn over to Congress. The excitement comes, apparently, from the perception that if Mueller got more stuff than Congress, he must have gotten a probable cause search warrant, something implied — but not at all stated affirmatively — in this WSJ article.

Facebook Inc.  has handed over to special counsel Robert Mueller detailed records about the Russian ad purchases on its platform that go beyond what it shared with Congress last week, according to people familiar with the matter.

The information Facebook shared with Mr. Mueller included copies of the ads and details about the accounts that bought them and the targeting criteria they used, the people familiar with the matter said. Facebook policy dictates that it would only turn over “the stored contents of any account,” including messages and location information, in response to a search warrant, some of them said.

A search warrant from Mr. Mueller would mean the special counsel now has a powerful tool in his arsenal to probe the details of how social media was used as part of a campaign of Russian meddling in the U.S. presidential election. Facebook hasn’t shared the same information with Congress in part because of concerns about disrupting the Mueller probe, and possibly running afoul of U.S. privacy laws, people familiar with the matter said.

CNN similarly asserts that Mueller would need a warrant, without actually reporting any confirmation from Facebook that that’s what has happened.

Facebook gave Mueller and his team copies of ads and related information it discovered on its site linked to a Russian troll farm, as well as detailed information about the accounts that bought the ads and the way the ads were targeted at American Facebook users, a source with knowledge of the matter told CNN.

The disclosure, first reported by the Wall Street Journal, may give Mueller’s office a fuller picture of who was behind the ad buys and how the ads may have influenced voter sentiment during the 2016 election.

Facebook did not give copies of the ads to members of the Senate and House intelligence committees when it met with them last week on the grounds that doing so would violate their privacy policy, sources with knowledge of the briefings said. Facebook’s policy states that, in accordance with the federal Stored Communications Act, it can only turn over the stored contents of an account in response to a search warrant.

“We continue to work with the appropriate investigative authorities,” Facebook said in a statement to CNN.

Even in the criminal context, it’s not at all clear Mueller would need a probable cause search warrant. Here’s what WSJ and CNN said Facebook gave Mueller:

  • Copies of ads (which according to some reports, Facebook showed, but did not leave, with Congress)
  • Details about the accounts that bought them
  • Targeting criteria used to buy them

Both WSJ and CNN take from these details that Facebook treats these things — which are what the Internet Research Agency and other fake subscribers included in their communications with Facebook while conducting an advertising transaction — as “stored contents of an account” or “messages and location information.”

Given that these are communications with Facebook, not with the fake subscribers’ fake friends, it’s not at all clear that this would count as content. Here’s what Facebook gets asked for (and presumably delivers) in response to a 2703(d) order on an average real American, like Reality Winner.

A. The following information about the customers or subscribers of the Account:
1. Names (including subscriber names, user names, and screen names);
2. Addresses (including mailing addresses, residential addresses, business addresses, and e-mail addresses);
3. Local and long distance telephone connection records;
4. Records of session times and durations, and the temporarily assigned network addresses (such as Internet Protocol (“IP”) addresses) associated with those sessions;
5. Length of service (including start date) and types of service utilized;
6. Telephone or instrument numbers (including MAC addresses);
7. Other subscriber numbers or identities (including temporarily assigned network addresses and registration Internet Protocol (“IP”) addresses (including carrier grade natting addresses or ports)); and
8. Means and source of payment for such service (including any credit card or bank account number) and billing records.

B. All records and other information (not including the contents of communications) relating to the Account, including:
1. Records of user activity for each connection made to or from the Account, including log files; messaging logs; the date, time, length, and method of connections; data transfer volume; user names; and source and destination Internet Protocol addresses;
2. Information about each communication sent or received by the Account, including the date and time of the communication, the method of communication, and the source and destination of the communication (such as source and destination email addresses, IP addresses, and telephone numbers). Records of any accounts registered with the same email address, phone number(s), method(s) of payment, or IP address as either of the accounts listed in Part I; and
3. Records of any accounts that are linked to either of the accounts listed in Part I by machine cookies (meaning all Facebook/Instagram user IDs that logged into any Facebook/Instagram account by the same machine as either of the accounts in Part I).

What would “all records and other information” relating to the account entail for an ad purchaser? After all, the fake account is not posting the ad, Facebook is. The fake account is using Facebook targeting criteria — again, communicating with Facebook, not its fake friends.

And if this is how Mueller got the Facebook data, it would be available with approval from a grand jury (and we know he’s got several grand juries lying around), with a relevance — not a probable cause — standard.

And that’s only if you’re talking criminal context. WSJ and CNN refer to Facebook’s privacy policy, which for legal reasons doesn’t cite all the ways they turn over data. In assuming that Mueller had to use a search warrant, both outlets are ignoring another obvious authority: Section 702.

We’re talking accounts believed (by both Facebook and the government) to be run by the Internet Research Agency. The Intelligence Community Assessment on Russian tampering states, even in the unclassified version, that they believe IRA has ties to Russian intelligence.

  • The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close Putin ally with ties to Russian intelligence.

But even without that, we’re talking a foreign corporation engaging in activity that everyone involved agrees has foreign intelligence value, with most people claiming that they knowingly took part in an intelligence influence operation run by Russian spooks.

That’s solidly in the realm of what gets tasked, all the time, under Section 702’s Foreign Government certificate. Hell, using 702, Mueller could get the contents of the messages sent by the fake accounts to their fake friends, as well as anything else private in their accounts (and a whole lot more).

And the standard for 702 is not probable cause; it is a foreigner (including a foreign corporation) located overseas, targeted for a foreign intelligence purpose.

I know everyone badly wants to assume Mueller has indictments in his back pocket, and so is seeing criminal probable cause where there may be none (and where none is required). But both of these articles make certain assumptions about how Facebook treats ad transactions and, making those assumptions, rule out the 2703(d) order. And both of these articles are ignoring the availability of everything in IRA’s accounts — content or no — under Section 702.

Update: I believe these misleading leaks are coming from Congress, rather than from Facebook or Mueller. Note, for example, this WSJ explanation for why Facebook gave Mueller more than they gave Congress:

Facebook hasn’t shared the same information with Congress in part because of concerns about disrupting the Mueller probe, and possibly running afoul of U.S. privacy laws, people familiar with the matter said.

The concern about disrupting the Mueller probe would not be Facebook’s. It’d be Mueller’s and Congress’s.

With that in mind, consider this article, from Bloomberg, which I also found sketchy. It claims that Mueller’s investigation has a “red-hot” focus on social media.

Russia’s effort to influence U.S. voters through Facebook and other social media is a “red-hot” focus of special counsel Robert Mueller’s investigation into the 2016 election and possible links to President Donald Trump’s associates, according to U.S. officials familiar with the matter.

Mueller’s team of prosecutors and FBI agents is zeroing in on how Russia spread fake and damaging information through social media and are seeking additional evidence from companies like Facebook and Twitter about what happened on their networks, said one of the officials, who asked not to be identified discussing the ongoing investigation.

It relies on two US officials, a common moniker for members of Congress or their staffers. And the article goes on to quote both Richard Burr and Mark Warner.

Intelligence Committee Chairman Richard Burr, a North Carolina Republican, said Tuesday that it’s “probably more a question of when” than if there will be a hearing with Facebook officials as part of his panel’s probe. Mark Warner, the committee’s top Democrat and a former telecommunications company founder, said Facebook’s revelation appears to be “the tip of the iceberg. I think there’s going to be much more.”

“This is the Wild, Wild West,” Warner said.

Warner has made no secret, for weeks, he wants more focus on the social media side of this. But Burr, here, seems to be reflecting the same considerations he does elsewhere: timing, which for him has been driven by ensuring the committee collects enough evidence to prepare before speaking to witnesses, and deference to Mueller’s investigation.

But consider the rest of the article, which suggests that Mueller’s investigation is going full steam after social media.

That’s pretty hard to square with the fact that Twitter hadn’t even considered doing a report until Facebook delivered theirs, which was provided voluntarily. And Google has done nothing yet, in spite of concerns about Russians exploiting YouTube.

Twitter Inc. is also expected to speak to congressional investigators in the coming weeks about Russian activity on its platform, said Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee last week. A spokeswoman for Twitter declined to comment on whether the company had received any warrants or handed anything over related to possible Russian ad buys.

Alphabet Inc.’s Google unit said in a statement, “We’re always monitoring for abuse or violations of our policies and we’ve seen no evidence this type of ad campaign was run on our platforms.” A person familiar with the matter said the company hasn’t been called to testify on the topic.

In other words, if Mueller is interested in social media, that interest is no more than 10 days old, and did not drive Facebook’s reporting (though Mueller would have intelligence from the intelligence community, on top of whatever Facebook provided).

I think Warner wants Burr’s agreement to subpoena these providers now, which would permit SSCI to obtain the same stuff Mueller did. And if, in an effort to apply that pressure, Warner or his minions are telling journalists that Mueller got more because he used legal process, it would leave it to journalists to interpret what kind of (legally gagged, probably) process Mueller used. Which might result in precisely the kind of story we got: journalists reporting it involved a warrant based on their interpretation of how Facebook treats ad purchases.


Coats v. Wyden, the Orwellian Reclassification Edition

Back on June 7, Ron Wyden asked a question similar to the one he asked James Clapper in 2013: “Can the government use FISA 702 to collect communications it knows are entirely domestic?” As Clapper did four years before, Coats denied that it could. “Not to my knowledge. It would be against the law.”

The claim was particularly problematic, given that less than two months earlier, Coats had signed a Section 702 certificate that admitted that the NSA would acquire entirely domestic communications via upstream collection.

When I asked ODNI about Coats’ comment, they responded by citing FISA.

Section 702(b)(4) plainly states we “may not intentionally acquire any communication as to which the sender and all intended recipients are known at the time of acquisition to be located in the United States.” The DNI interpreted Senator Wyden’s question to ask about this provision and answered accordingly.

On June 15, Wyden — as he had in 2013 — insisted that Coats answer the question he asked, not the one that made for easy public assurances.

That was not my question. Please provide a public response to my question, as asked at the June 7, 2017 hearing.

After Wyden asked a few more times — again, as happened in 2013 — Coats provided a classified response on July 24. On September 1, however, Coats wrote Wyden stating that,

After consulting with the relevant intelligence agencies, I concluded that releasing the information you are asking to be made public would cause serious damage to national security. To that end, I provided you a comprehensive classified response to your question on July 24.

[snip]

While I recognize your goal of an unclassified response, given the need to include classified information to fully address your question, the classified response provided on July 24 stands as our response on this matter.”

Wyden is … unsatisfied … with this response.

It is hard to view Director Coats’ behavior as anything other than an effort to keep Americans in the dark about government surveillance. I asked him a simple, yes-or-no question: Can the government use FISA Act Section 702 to collect communications it knows are entirely domestic?

What happened was almost Orwellian. I asked a question in an open hearing. No one objected to the question at the time. Director Coats answered the question. His answer was not classified. Then, after the fact, his press office told reporters, in effect, Director Coats was answering a different question.

I have asked Director Coats repeatedly to answer the question I actually asked. But now he claims answering the question would be classified, and do serious damage to national security.

The refusal of the DNI to answer this simple yes-no question should set off alarms. How can Congress reauthorize this surveillance when the administration is playing games with basic questions about this program?

This is on top of the administration’s recent refusal even to estimate how many Americans’ communications are swept up under this program.

The Trump administration appears to have calculated that hiding from Americans basic information relevant to their privacy is the easiest way to renew this expansive surveillance authority. The executive branch is rejecting a fundamental principle of oversight by refusing to answer a direct question, and saying that Americans don’t deserve to know when and how the government watches them.

Significantly, in the midst of this back-and-forth about targeting, Wyden and Coats were engaged in a parallel back-and-forth about counting how many US persons are impacted by Section 702. In a letter sent to Coats on August 3, Wyden suggested that it might be easier for NSA to count how many people located in the US are affected by Section 702.

First, whatever challenges there may be arriving at an estimate of U.S. persons whose communications have been collected under Section 702, those challenges may not apply equally to persons located in the United States. I believe that the impact of Section 702 on persons inside the United States would constitute a “relevant metric,” and that your conclusion that an estimate can and should be revisited on that basis.

So effectively, Coats is willing to say publicly that the NSA can’t knowingly target entirely domestic communications, but it does knowingly collect entirely domestic communications. But he’s unwilling to explain how or why it continues to do so in the wake of ending “about” collection.

And in the middle of Coats’ non-admission, Wyden challenged him to come up with a count of how many people in America are affected by Section 702, which would presumably include those incidentally collected because they communicated with a target, but also these entirely domestic communications that Coats admits exist but won’t explain.

I’ll try to explain in a follow-up what I think this is about.


UNITEDRAKE and Hacking under FISA Orders

As I noted yesterday, along with the encrypted files you have to pay for, on September 6, Shadow Brokers released the manual for an NSA tool called UNITEDRAKE.

As Bruce Schneier points out, the tool has shown up in released documents on multiple occasions — in the catalog of TAO tools leaked by a second source (not Snowden) and released by Jacob Appelbaum, and in three other Snowden documents (one, two, three) talking about how the US hacks other computers, all of which first appeared in Der Spiegel’s reporting (one, two, three). [Update: See ElectroSpaces comments about this Spiegel reporting and its source.]

The copy, as released, is a mess — it appears to have been altered with an open source graphics program and then re-saved as a PDF. Along with classification marks, the margins and the address for the company behind it appear to have been altered.

The NSA is surely doing a comparison with the real manual (presumably as it existed at the time it may have been stolen) in an effort to understand how and why it got manipulated.

I suspect Shadow Brokers released it as a message to those pursuing him as much as to entice more warez sales, for the reasons I lay out below.

The tool permits NSA hackers to track and control implants, doing things like prioritizing collection, controlling when an implant calls back and how much data is collected at a given time, and destroying an implant and the associated UNITEDRAKE code (PDF 47 and following includes descriptions of these functions).

It includes doing things like impersonating the user of an implanted computer.

Depending on how dated this manual is, it may demonstrate that Shadow Brokers knows what ports the NSA will generally use to hack a target, and what code might be associated with an implant.

It also makes clear, at a time when the US is targeting Russia’s use of botnets, that the NSA carries out its own sophisticated bot-facilitated collection.

Finally of particular interest to me, the manual shows that UNITEDRAKE can be used to hack targets of FISA orders.

To use it to target people under a FISA order, the NSA hacker would have to enter both the FISA order number and the date the FISA order expires. After that point, UNITEDRAKE will simply stop collecting off that implant.

Note, I believe that — at least in this deployment — these FISA orders would be strictly for use overseas. One of the previous references to UNITEDRAKE describes doing a USSID-18 check on location.

SEPI analysts validate the target’s identity and location (USSID-18 check), then provide a deployment list to Olympus operators to load a more sophisticated Trojan implant (currently OLYMPUS, future UNITEDRAKE).

That suggests this would be exclusively EO 12333 collection — or collection under FISA 704/705(b) orders.

But the way in which UNITEDRAKE is used with FISA is problematic. Note that it doesn’t include a start date. So the NSA could collect data from before the period when the court permitted the government to spy on them. If an American were targeted only under Title I (permitting collection of data in motion, therefore prospective data), they’d automatically qualify for 705(b) targeting with Attorney General approval if they traveled overseas. Using UNITEDRAKE on — say, the laptop they brought with them — would allow the NSA to exfiltrate historic data, effectively collecting on a person from a time when they weren’t targeted under FISA. I believe this kind of temporal problem explains a lot of the recent problems NSA has had complying with 704/705(b) collection.
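The temporal gap described above can be made concrete with a hypothetical sketch. This is my own illustration, not anything from the manual: it models a tasking record carrying only the two fields the manual describes (order number and expiration date), and shows that a check built on those fields alone says nothing about how old the exfiltrated data is. The order number and dates are fabricated.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FisaTasking:
    """Hypothetical model of the tasking fields the manual describes:
    an order number and an expiration date — but no start date."""
    order_number: str
    expires: date

def collection_allowed(tasking: FisaTasking, data_timestamp: date,
                       collected_on: date) -> bool:
    """Collection stops at expiration, as the manual describes.
    Tellingly, data_timestamp — when the data itself was created —
    is never checked, so historic data passes freely."""
    return collected_on <= tasking.expires

tasking = FisaTasking("hypothetical-702-order", expires=date(2017, 12, 31))

# Data created years before the order existed still clears the check,
# so long as the exfiltration happens before expiration:
historic_ok = collection_allowed(tasking, date(2014, 1, 1), date(2017, 9, 1))
# After expiration, collection stops regardless of the data's age:
expired_ok = collection_allowed(tasking, date(2014, 1, 1), date(2018, 1, 1))
```

The fix, were one wanted, would be a third field — an order start date — checked against `data_timestamp`; the manual, as released, shows no such field.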

In any case, Shadow Brokers may or may not have UNITEDRAKE among the files he is selling. But what he has done by publishing this manual is tell the world a lot of details about how NSA uses implants to collect intelligence.

And very significantly for anyone who might be targeted by NSA hacking tools under FISA (including, presumably, him), he has also made it clear that with the click of a button, the NSA can pretend to be the person operating the computer. This should create real problems for using data hacked by NSA in criminal prosecutions.

Except, of course, especially given the provenance problems with this document, no defendant will ever be able to use it to challenge such hacking.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

FBI Imagines Using Assessments to Recruit US Engineers for Insight onto Spying in Semiconductor Industry

For something else, I’m reviewing the section of the FBI Domestic Investigations and Operations Guide on assessments made available in unredacted form to the Intercept. Of particular interest are the scenarios the DIOG uses to explain whether an Agent would or could use an assessment to collect information without opening a preliminary investigation. One way the FBI uses assessments is to identify potential informants. As one of the scenarios for when it might do so, it uses the example of trying to find out about a particular country X’s targeting of engineers and high tech workers involved in the production of semiconductor chips. For an engineer who travels frequently to country X, the FBI might either target him, or try to recruit him. (see page 117)

This is important for two reasons. First, the FBI is permitted to search FBI’s own databases to conduct this assessment. That would include information collected via Section 702. So when people talk about the risks of back door searches, it could mean a completely innocent engineer getting targeted for recruitment as an informant.

The other reason this is important is because it is precisely what appears to have happened with Professor Xiaoxing Xi, who was falsely accused of sharing semiconductor technology with China. After Xi and his attorney Peter Zeidenberg explained to the FBI that they had badly misunderstood the technology they were looking at, the case against Xi was dismissed.

In fact, Xi claims in a lawsuit against the government that the emails on which the case was built were improperly searched using Section 702 or EO 12333.

On information and belief, both before and after obtaining the FISA orders, defendant Haugen and/or Doe(s) caused the interception of Professor Xi’s communications, including his emails, text messages, and/or phone calls, without obtaining a warrant from any court. In conducting this surveillance, the defendants may have relied on the purported authority of Section 702 of FISA or Executive Order 12333. Although neither Section 702 nor Executive Order 12333 permits the government to “target” Americans directly, the government nonetheless relies on these authorities to obtain without a warrant the communications of Americans who are in contact with individuals abroad, as Professor Xi was with his family and in the course of his scientific and academic work.

On information and belief, defendant Haugen and/or defendant Does searched law enforcement databases for communications of Professor Xi that the government had intercepted without a warrant, including his private communications intercepted under Section 702 of FISA and Executive Order 12333, and examined, retained, and/or used such communications.

[snip]

The actions of defendants Haugen and/or Doe(s) in searching law enforcement databases for, examining, retaining, and using Professor Xi’s communications, including his emails, text messages, and/or phone calls, that were obtained without a warrant, and without notice to Professor Xi, violated Professor Xi’s clearly established constitutional rights against unlawful search and seizure and his right to privacy under the Fourth Amendment.

Given how closely this scenario matches his own case, I’d say the chances his emails were first identified via a back door search are quite high. Note, too, that Temple University, where he works, has its email provided by Google, meaning these emails might be available via PRISM.

Of additional interest, the one category of sensitive potential Confidential Human Sources that is redacted in the officially released DIOG but revealed in the Intercept copy is academic personnel. (see page 112)

So they will recruit professors like Professor Xi as informants — they would just require special approval to do so.


The 702 Compliance Reporting

This will be a very weedy post on two quarterly reports on 702 compliance released to ACLU under FOIA: March 2014 and March 2015; the March reports both cover the December 1 through February 28 period. ACLU obtained them not by FOIAing quarterly compliance reporting directly. Rather, ACLU asked for all the documents referred to in this Summary of Notable Section 702 Requirements, which it had received earlier. But the released copies are entirely useless in elucidating the Notable Requirements. The 2015 report, for example, was provided in part to explain how NSA assesses whether a selector will provide foreign intelligence information, but the section of the report that deals with it (item 28 on page 46) has been withheld entirely (see break between PDF 8 and 9). In addition, there must be at least one more citation to it that is redacted in the Notable Requirements document. The reference(s) to the 2014 report are entirely redacted.

There are a few places such redacted references to the two reports might be: There’s a missing citation in Pre- and Post-Tasking Due Diligence (the redaction at the bottom of page 2). There may be a citation missing in the continued assessment section at the bottom of page 4. There’s definitely one missing in the Obligation to Review section (page 5). There’s likely to be one in the long redacted passage on page 6 pertaining to resolving post-tasking problems as quickly as possible. And the sole footnote (see page 11) in the Summary has a reference, likely the one on FBI techniques to analyze Section 702 information that the government identified as withheld in its entirety.

So the Compliance reports don’t help us — at all — to understand the requirements the government places on itself with respect to 702.

But they do show us, in more granular detail than shows up in the Semiannual reports (this one includes the March 2014 period and this one includes the March 2015 period), the kinds of things that show up in the compliance reviews. The compliance reporting in both is generally organized into the same sections (see page 29):

  • Tasking Issues
  • Detasking Issues
  • Notification Delays
  • Documentation Issues
  • Overcollection
  • Minimization
  • Other

And — as the Semiannual Report makes clear — we’re just seeing a fraction of the granular descriptions in the quarterly reports, because we’re not seeing the tasking, detasking, notification, or documentation issues. That means the unredacted content in the released reports represents less than 20% of the total number of compliance incidents for these two quarters.

We may, though, be able to use the reports in conjunction to identify how many selectors, on average, are tasked at any given time. If the 25 minimization issues cited in the March 2015 report are representative (meaning there’d be 50 for the entire six month period), then there’d be roughly 338 incidents across all topics for the six month period (it’s not entirely clear how they deal with overlap). Given a compliance rate of .35% of average facilities tasked, this means roughly 96,571 facilities tasked at any given time, though that may be low given the vastly different lead times on these reports (meaning in the interim year, the government might ID many more compliance issues that get reported primarily in the Semiannual report). There were 94,368 targets across the whole year in FY 2015 (which covers this entire period because the Fiscal Year begins in October). What that suggests is that for some targets, you’ll have more than one facility tasked at any given time, but unless there’s a lot of turnover in a given year (meaning that most targets are only tasked for some weeks or months), not that many.
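The back-of-envelope math above can be reproduced in a few lines. The 25-incidents-per-quarter figure, the 338-incident extrapolation, the .35% compliance rate, and the FY 2015 target count all come from the reports as described; the arithmetic is the only thing this sketch adds:

```python
# Back-of-envelope extrapolation from the quarterly compliance reports.
# All input figures come from the reports described in the post.

minimization_per_quarter = 25
minimization_per_half_year = minimization_per_quarter * 2   # 50

total_incidents_per_half_year = 338   # across all categories, per the post
compliance_rate = 0.0035              # incidents as a share of tasked facilities

# Implied number of facilities tasked at any given time
facilities_tasked = total_incidents_per_half_year / compliance_rate
print(round(facilities_tasked))       # 96571

# For comparison: 94,368 targets across all of FY 2015 -- so some targets
# must have had more than one facility tasked at once.
targets_fy2015 = 94368
print(round(facilities_tasked) > targets_fy2015)   # True
```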

Which leaves us with what the reports do show us: the other (largely dissemination) and minimization (largely overly broad queries and US person queries) compliance errors, errors which I’ve roughly tallied in this document.

Dissemination

Between the two quarterly reports, there are 13 incidents of what I’m lumping under improper dissemination (the report treats database dissemination differently from disseminating unmasked USP identities). Most of these are fairly nondescript, true errors. In three cases, analysts at other agencies alerted the NSA that they had not masked a US person identity.

The exceptions are 2015-19 and -20, which are almost entirely redacted but pretty clearly deal with NSA sharing raw data with FBI and/or CIA improperly.

I find the second one — which includes no unredacted discussion of emergency detasking or other mitigation — to be the more alarming of the two. But in general, the possibility that NSA might mistakenly send FBI (especially) the wrong data is troubling because once things get to FBI they get far less direct scrutiny (both in terms of compliance reviews and in terms of auditing) than NSA gets. Sending the collection on an entire selector over to another agency is far more intrusive than sending over one unmasked name (though it’s not clear this raw data belonged to a US person). Plus, once things get to FBI they can start having repercussions.

Overbroad Queries

The overbroad queries are interesting not so much because they affect US persons directly (though they do in perhaps two cases), but for what they say about the querying process. Here’s what the 2015 Semiannual Report says about overbroad queries, which it acknowledges is a problem even while attributing the problem to errors in constructing Boolean queries.

(U) NSA’s minimization procedures require queries of Section 702-acquired data to be designed in a manner “reasonably likely to return foreign intelligence information.” Approximately 29% of the minimization errors in this reporting period involved non-compliance with this rule regarding queries (54% in the last reporting period).56 As with prior Joint Assessments, this is the cause of most compliance incidents involving NSA’s minimization procedures. These types of errors are typically traceable to a typographical or comparable error in the construction for the query. For example, an overbroad query can be caused when an analyst mistakenly inserts an “or” instead of an “and” in constructing a Boolean query, and thereby potentially received overbroad results as a result of the query. No incidents of an analyst purposely running a query for nonforeign intelligence reasons against Section 702-acquired data were identified during the reporting period, nor did any of the overbroad queries identified involve the use of a United States person identifier as a query term.
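The report’s fat-fingered-Boolean example is easy to illustrate with toy data. The records, field names, and query terms below are invented for illustration; only the and-versus-or mechanics come from the report:

```python
# Toy illustration of the and/or query error the Semiannual Report describes.
# Records and field names are invented; only the Boolean mechanics matter.

records = [
    {"selector": "target@example.com", "location": "foreign"},
    {"selector": "target@example.com", "location": "domestic"},
    {"selector": "other@example.com",  "location": "foreign"},
    {"selector": "other@example.com",  "location": "domestic"},
]

# Intended query: selector matches AND location is foreign
intended = [r for r in records
            if r["selector"] == "target@example.com" and r["location"] == "foreign"]

# Mistaken query: "or" instead of "and" -- it sweeps in every foreign record
# regardless of selector, which is presumably how the analyst notices the error
overbroad = [r for r in records
             if r["selector"] == "target@example.com" or r["location"] == "foreign"]

print(len(intended), len(overbroad))   # 1 3
```

The tell, as the post notes, is the result count: the overbroad query returns far more records than the analyst expected.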

That generally accords with the most common description of the compliance errors: an analyst constructs a query poorly, recognizes the problem as soon as she gets the results (presumably because far more records come back than expected), someone (the reports as often as not don’t tell us who) deletes them, and it gets reported. There are a few incidents where analysts run multiple such queries before discovering the problem — that seems like more of a concern, as fat-fingering a Boolean connector shouldn’t explain it. I’m interested in the errors (2015-7, -8, and -9) where the redaction seems to suggest either some other kind of query or some embarrassment about disclosing that top secret method, Boolean search; it’s possible this pertains to XKS searches, which can also involve scripts. One of these overbroad queries was done by a linguist (which, given the Reality Winner case, is interesting). There are also discrepancies about whether the analyst herself discovered the problem or an auditor did, the latter of which happened at least five times (two incidents don’t describe who discovered them). Finally, there are interesting differences in the description of the coaching that happens after an issue. Sometimes none is described. Most often, the report describes the analyst getting a talking to. But in a number of cases, “personnel,” which might be plural, get coaching. I’m interested in when more than one person would get such coaching.

Finally, consider what it means that most of these violations seem to involve multiple authorities, including 702. That’s not at all surprising: you’d want to track a target across all the collection you had on the person. But that also includes upstream 702, which may be part of the reason upstream became such a problem.

US Person Queries

Finally, there are the queries using US person identifiers that, for some reason, were improper under the guidelines first approved in 2011. As I’ve noted, these have been a consistent problem since at least 2013. The Semiannual Report acknowledges this, or at least the problems with searching upstream 702 data, which was prohibited in the 2011 guidelines.

(U) Additionally, as noted in prior Joint assessments, the joint oversight team believes NSA should assess modifications to systems used to query raw Section 702-acquired data to require analysts to identify when they believe they are using a United States person identifier as a query term. Such an improvement, even if it cannot be adopted universally in all NSA systems, could help prevent compliance instances with respect to the use of United States person query terms.59 NSA plans to test and implement this recommendation during calendar year 2016. The new internal compliance control mechanism being developed for NSA data repositories containing unevaluated and unminimized Section 702 information will require analysts to document whether the query being executed against the database includes a known United States person identifier. Once the query is executed, the details concerning the query will be passed to NSA’s auditing system of record for post-query review and potential metrics compilation. As part of the testing, NSA will evaluate the accuracy of reporting this number in future Joint Assessments.60

As you review the violations discovered in 2014 and 2015, remember that (as noted in the 2017 702 reauthorization) these results came in a period when NSA was just discovering far more pervasive problems with US person searches. As it is, in each quarter here, there were 10 or 11 inappropriate US person searches. In 2014, a number of those (2, 5, 8, 17) were searches of 702 data using identifiers associated with US persons already targeted under Title I, 704, or 705(b). Just one (5) of the 2015 violations was approved for individual targeting, and that appears to be one of the earlier violations in the quarter (note it must have occurred in December 2014). That’s interesting, because this undated guideline on USP queries of 702 collections says any US person approved for individualized targeting or RAS (under the old phone dragnet) could be backdoor searched. It seems likely, then, they changed the policy in 2015 (which is particularly alarming, given that they did so just as NSA was moving towards discovering how bad their upstream searches were). In other words, they seem to have made legal one of the practices that was coming up as a violation.

These violation descriptions are also interesting for the (often redacted) specificity about the kind of selector used, sometimes described as email, telephony (which could include messaging), and in others as “facilities” (which might include cookies or IPs). That’s an indication of the range of identifiers under which you can search 702 data, which is in turn (because 702 searches are all supposed to derive from PRISM collection) a testament to the kinds of things that get turned over in PRISM returns.

Of the violations described, just one obviously pertains to a search on an identifier for which the authorization had expired. That’s interesting, because searches on expired warrants appeared far more frequently in past reports. Significantly, the IG Report reviewing compliance with 704/705(b), which reviewed queries for two months that overlapped with the 2015 report at issue (January and February 2015; the compliance report included December 2014 whereas the IG Report included March 2015), did find persistent problems with expired authorizations, but in EO 12333 data (suggesting FISA queries might have fixed earlier such problems). But the discussion of these problems in Rosemary Collyer’s 702 reauthorization opinion shows that for one tool, 85% of 704/705(b) queries conducted from November 2015 through April 2016 — well after the later quarter covered here — were non-compliant. “Many of these non-compliant queries involved use of the same identifiers over different date ranges.” NSA was unable to segregate and destroy the improper queries. That’s perhaps unsurprising, because as late as April 2017, the NSA was still having difficulties identifying all the queries run against 702 data.

And in spite of indications, in later 702 reporting, that some of the 704/705(b) queries of 702 did not get included in auditing systems, a good number of these violations were discovered not by analysts (as often happened with improper queries) but by auditors, suggesting the violations may have had an impact on US persons.

All that said, there’s not all that much there there, aside from the sheer number (which the Semiannual report seems to attribute to NSA’s serial refusal to fix the problem of default search settings). These two snap-shots of the 702 upstream query problem, capturing 702 collection in the period immediately before it started to blow up, are also an indication of how much ODNI/DOJ’s oversight of NSA (which is far more rigorous than the oversight the same agencies give CIA and especially FBI) was missing.

[Photo: National Security Agency via Wikimedia]

If a Tech Amicus Falls in the Woods but Rosemary Collyer Ignores It, Would It Matter?

Six senators (Ron Wyden, Pat Leahy, Al Franken, Martin Heinrich, Richard Blumenthal, and Mike Lee) have just written presiding FISA Court judge Rosemary Collyer, urging her to add a tech amicus — or even better, a full time technical staffer — to the FISA Court.

The letter makes no mention of Collyer’s recent consideration of the 702 reauthorization certificates, nor even of any specific questions the tech amicus might consider.

That’s unfortunate. In my opinion, the letter entirely dodges the real underlying issue, at least as it pertains to Collyer, which is her unwillingness to adequately challenge or review Executive branch assertions.

In her opinion reauthorizing Section 702, Collyer apparently never once considered appointing an amicus, even a legal one (who, under the USA Freedom structure, could have suggested bringing in a technical expert). She refused to do so in a reconsideration process that — because of persistent problems arising from technical issues — stretched over seven months.

I argued then that that means Collyer broke the law, violating USA Freedom Act’s requirement that the FISC at least consider appointing an amicus on matters raising novel or significant issues and, if choosing not to do so, explain that decision.

In any case, this opinion makes clear that what should have happened, years ago, is a careful discussion of how packet sniffing works, and where a packet collected by a backbone provider stops being metadata and starts being content, and all the kinds of data NSA might want to and does collect via domestic packet sniffing. (They collect far more under EO 12333.) As mentioned, some of that discussion may have taken place in advance of the 2004 and 2010 opinions approving upstream collection of Internet metadata (though, again, I’m now convinced NSA was always lying about what it would take to process that data). But there’s no evidence the discussion has ever happened when discussing the collection of upstream content. As a result, judges are still using made up terms like MCTs, rather than adopting terms that have real technical meaning.
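The metadata-versus-content question the post keeps returning to is, at bottom, a question about where in a packet one draws the line. A minimal sketch (a hand-built IPv4/TCP packet; every value in it is invented) shows that header fields, the part usually treated as metadata, are mechanically separable from the payload, the part usually treated as content:

```python
import struct

# Build a toy IPv4 + TCP packet by hand (all values invented).
# IPv4 header: version/IHL, TOS, total length, ID, flags/frag,
# TTL, protocol (6 = TCP), checksum, src IP, dst IP
ip_header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20 + 20 + 16, 1, 0, 64, 6, 0,
    bytes([10, 0, 0, 1]), bytes([192, 0, 2, 1]),
)
# TCP header: src port, dst port, seq, ack, data offset (5 words),
# flags, window, checksum, urgent pointer
tcp_header = struct.pack("!HHLLBBHHH", 51000, 80, 0, 0, 0x50, 0x18, 8192, 0, 0)
payload = b"GET / HTTP/1.1\r\n"        # the "content"

packet = ip_header + tcp_header + payload

# Parsing side: the headers ("metadata") are fixed-format and separable
ihl = (packet[0] & 0x0F) * 4            # IP header length in bytes
src_ip = packet[12:16]
dst_ip = packet[16:20]
tcp_off = (packet[ihl + 12] >> 4) * 4   # TCP header length in bytes
content = packet[ihl + tcp_off:]        # everything past the headers

print(content)                          # b'GET / HTTP/1.1\r\n'
```

The rub, of course, is that “about” collection matched selectors inside that payload, not just in the header fields, which is why the undefined boundary matters legally.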

For that reason, it’s particularly troubling Collyer didn’t use — didn’t even consider using, according to the available documentation — an amicus. As Collyer herself notes, upstream surveillance “has represented more than its share of the challenges in implementing Section 702” (and, I’d add, Internet metadata collection).

At a minimum, when NSA was pitching fixes to this, she should have stopped and said, “this sounds like a significant decision” and brought in amicus Amy Jeffress or Marc Zwillinger to help her think through whether this solution really fixes the problem. Even better, she should have brought in a technical expert who, at a minimum, could have explained to her that SCTs pose as big a problem as MCTs; Steve Bellovin — one of the authors of this paper that explores the content versus metadata issue in depth — was already cleared to serve as the Privacy and Civil Liberties Oversight Board’s technical expert, so presumably could easily have been brought in to consult here.

That didn’t happen. And while the decision whether or not to appoint an amicus is at the court’s discretion, Collyer is obligated to explain why she didn’t choose to appoint one for anything that presents a significant interpretation of the law.

A court established under subsection (a) or (b), consistent with the requirement of subsection (c) and any other statutory requirement that the court act expeditiously or within a stated time–

(A) shall appoint an individual who has been designated under paragraph (1) to serve as amicus curiae to assist such court in the consideration of any application for an order or review that, in the opinion of the court, presents a novel or significant interpretation of the law, unless the court issues a finding that such appointment is not appropriate;

For what it’s worth, my guess is that Collyer didn’t want to extend the 2015 certificates (as it was, she didn’t extend them as long as NSA had asked in January), so figured there wasn’t time. There are other aspects of this opinion that make it seem like she just gave up at the end. But that still doesn’t excuse her from explaining why she didn’t appoint one.

Instead, she wrote a shitty opinion that doesn’t appear to fully understand the issue and that defers, once again, the issue of what counts as content in a packet.

Without even considering an amicus, Collyer for the first time affirmatively approved the back door searches of content she knows will include entirely domestic communications, effectively permitting the NSA to conduct warrantless searches of entirely domestic communications and, with those searches, to use FISA for domestic surveillance. In approving those back door searches, Collyer did not conduct her own Fourth Amendment review of the practice.

Moreover, she adopted a claimed fix to a persistent problem — the collection of domestic communications via packet sniffing — without showing any inkling of testing whether the fix accomplished what it needed to. Significantly, in spite of 13 years of problems with packet sniffing collection under FISA, the court still has no public definition about where in a packet metadata ends and content begins, making her “abouts” fix — a fix that prohibits content sniffing without defining content — problematic at best.

I absolutely agree with these senators that the FISC should have its own technical experts.

But in Collyer’s case, the problem is larger than that. Collyer simply blew off USA Freedom Act’s obligation to consider an amicus entirely. Had she appointed Marc Zwillinger, I’m confident he would have raised concerns about the definition of content (as he did when he served as amicus on a PRTT application), whether or not he persuaded her to bring in a technical expert to further lay out the problems.

Collyer never availed herself of the expertise of Zwillinger or any other independent entity, though. And she did so in defiance of the intent of Congress, that she at least explain why she felt she didn’t need such outside expertise.

And she did so in an opinion that made it all too clear she really, really needed that help.

In my opinion, Collyer badly screwed up this year’s reauthorization certificates, kicking the problems created by upstream collection down the road, to remain a persistent FISA problem for years to come. But she did so by blowing off the clear requirement of law, not because she didn’t have technical expertise to rely on (though the technical expertise is probably necessary to finally resolve the issues raised by packet sniffing).

Yet no one but me — not even privacy advocates testifying before Congress — wants to call her out for that.

Congress already told the FISA court they “shall” ask for help if they need it. Collyer demonstrably needed that help but refused to consider using it. That’s the real problem here.

I agree with these senators that FISC badly needs its own technical experts. But a technical amicus will do no good if, as Collyer did, a FISC judge fails to consult her amici.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

Did NSA Start Using Section 702 to Collect from VPNs in 2014?

I’ve finally finished reading the set of 702 documents I Con the Record dumped a few weeks back. I did two posts on the dump and a related document Charlie Savage liberated. Both pertain, generally, to whether a 702 “selector” gets defined in a way that permits US person data to be sucked up as well. The first post reveals that, in 2010, the government tried to define a specific target under 702 (both AQAP and WikiLeaks might make sense given the timing) as including US persons. John Bates asked for legal justification for that, and the government withdrew its request.

The second reveals that, in 2011, as Bates was working through the mess of upstream surveillance, he asked whether the definition of “active user,” as it applies for a multiple communication transaction, referred to the individual user. The question is important because if a facility is defined to be used by a group — say, Al Qaeda or Wikileaks — it’s possible a user of that facility might be an unknown US person user, the communications of which would only be segregated under the new minimization procedures if the individual user’s communication were reviewed (not that it mattered in the end; NSA doesn’t appear to have implemented the segregation regime in meaningful fashion). Bates never got a public answer to that question, which is one of a number of reasons why Rosemary Collyer’s April 26 702 opinion may not solve the problem of upstream collection, especially not with back door searches permitted.

As it happens, some of the most important documents released in the dump may pertain to a closely related issue: whether the government can collect on selectors it knows may be used by US persons, only to weed out the US persons after the fact.

In 2014, a provider challenged orders (individual “Directives” listing account identifiers NSA wanted to collect) that it said would amount to conducting surveillance “on the servers of a U.S.-based provider” in which “the communications of U.S. persons will be collected as part of such surveillance.” The provider was prohibited from reading the opinions that set the precedent permitting this kind of collection. Unsurprisingly, the provider lost its challenge, so we should assume that some 702 collection collects US person communications, using the post-tasking process rather than pre-targeting intelligence to protect American privacy.

The documents

The documents that lay out the failed challenge are:

2014, redacted date: ACLU Document 420: The government response to the provider’s filing supporting its demand that FISC mandate compliance.

2014, redacted date: EFF Document 13: The provider(s) challenging the Directives asked for access to two opinions the government relied on in their argument. Rosemary Collyer refused to provide them, though they have since been released.

2014, redacted date: EFF Document 6 (ACLU 510): Unsurprisingly, Collyer also rejected the challenge to the individual Directives, finding that post-tasking analysis could adequately protect Americans.

The two opinions the providers requested, but were refused, are:

September 4, 2008 opinion: This opinion, by Mary McLaughlin, was the first approval of FAA certifications after passage of the law. It lays out many of the initial standards that would be used with FAA (which changed slightly from PAA). As part of that, McLaughlin adopted standards regarding what kinds of US person collection would be subject to the minimization procedures.

August 26, 2014 opinion: This opinion, by Thomas Hogan, approved the certificates under which the providers had received Directives (which means the challenge took place between August and the end of 2014). But the government also probably relied on this opinion for a change Hogan had just approved, permitting NSA to remain tasked on a selector even if US persons also used the selector.

The argument also relies on the October 3, 2011 John Bates FAA opinion and the August 22, 2008 FISCR opinion denying Yahoo’s challenge to Protect America Act. The latter was released in a second, less redacted form on September 11, 2014, which means the challenge likely post-dated that release.

The government’s response

The government’s response consists of a filing by Stuart Evans (who has become DOJ’s go-to 702 hawk) as well as a declaration submitted by someone in NSA that had already reviewed some of the taskings done under the 2014 certificates (which again suggests this challenge must date to September at the earliest). There appear to be four sections to Evans’ response. Of those sections, the only one left substantially unredacted — as well as the bulk of the SIGINT declaration — pertains to the Targeting Procedures. So while targeting isn’t the only thing the provider challenged (another appears to be certification of foreign intelligence value), it appears to be the primary thing.

Much of what is unredacted reviews the public details of NSA’s targeting procedure. Analysts have to use the totality of the circumstances to figure out whether someone is a non-US person located overseas likely to have foreign intelligence value, relying on things like other SIGINT, HUMINT, and (though the opinion redacts this) geolocation information and/or filters to weed out known US IPs. After a facility has been targeted, the analyst is required to do post-task analysis, both to make sure that the selector is the one intended, and to make sure that no new information identifies the selector as being used by a US person, as well as making sure that the target hasn’t “roamed” into the US. Post-task analysis also ensures that the selector really is providing foreign intelligence information (though in practice, per PCLOB and other sources, this is not closely reviewed).
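The targeting-then-post-tasking flow described above can be caricatured as a pair of checks. Everything below — the field names, the detask reasons — is an invented sketch of the described procedure, not NSA’s actual logic:

```python
# Invented sketch of the described targeting / post-tasking flow.
# Field names and detask reasons are illustrative only.

def initial_tasking_ok(selector):
    """Pre-tasking 'totality of the circumstances' determination."""
    return (not selector["known_us_person"]
            and selector["located_overseas"]
            and selector["foreign_intel_value"])

def post_tasking_check(selector):
    """Post-tasking due diligence: detask if new information changes the picture."""
    if selector["known_us_person"]:
        return "detask: US person identified"
    if not selector["located_overseas"]:
        return "detask: target roamed into the US"
    if not selector["foreign_intel_value"]:
        return "detask: no foreign intelligence value"
    return "remain tasked"

s = {"known_us_person": False, "located_overseas": True, "foreign_intel_value": True}
assert initial_tasking_ok(s)           # tasking proceeds
s["located_overseas"] = False          # new information: target roamed
print(post_tasking_check(s))           # detask: target roamed into the US
```

The 2014 change discussed below, in effect, relaxed the first of those post-tasking triggers for certain facilities.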

Of particular importance, Evans dismisses concerns about what happens when a selector gets incorrectly tasked as a foreigner. “That such a determination may later prove to be incorrect because of changes in circumstances or information of which the government was unaware does not render unreasonable either the initial targeting determination or the procedures used to reach it.”

Evans also dismisses the concern that minimization procedures don’t protect the providers’ customers (presumably because they provide four ways US person content may be retained with DIRNSA approval), relying on the 2008 opinion, which states in part:

The government argues that, by its terms, Section 1806(i) applies only to a communication that is “unintentionally acquired,” not to a communication that is intentionally acquired under a mistaken belief about the location or non-U.S. person status of the target or the location of the parties to the communication. See Government’s filing of August 28, 2008. The Court finds this analysis of Section 1806(i) persuasive, and on this basis concludes that Section 1806(i) does not require the destruction of the types of communications that are addressed by the special retention provisions.

Evans then quotes McLaughlin judging that minimization procedures “constitute a safeguard against improper use of information about U.S. persons that is inadvertently or incidentally acquired.” In other words, he cites an opinion that permits the government to treat stuff that is initially targeted, even if it is later discovered to be an American’s communication, differently than it does other US person information as proof the minimization procedures are adequate.

The missing 2014 opinion references

As noted above, the provider challenging these Directives asked for both the 2008 opinion (cited liberally throughout the unredacted discussion in the government’s reply) and the 2014 one, which barely appears at all beyond the initial citation. Given that Collyer reviewed substantial language from both opinions in denying the provider’s request to obtain them, the discussion must go beyond simply noting that the 2014 opinion governs the Directives in question. There must be something in the 2014 opinion, probably the targeting procedures, that gets cited in the vast swaths of redactions.

That’s especially true given that the first page of Evans’ response claims the Directives address “a critical, ongoing foreign intelligence gap.” So it makes sense that the government would get some new practice approved in that year’s certification process, then serve Directives ostensibly authorized by the new certificate, only to have a provider challenge a new type of request and/or a new kind of provider challenge their first Directives.

One thing stands out in the 2014 opinion that might indicate the closing of a foreign intelligence gap.

Prior to 2014, the NSA could say an entity — say, Al Qaeda — used a facility, meaning they’d suck up anyone who used that facility (think how useful it would be to declare a chat room a facility, for example). But (again, prior to 2014) as soon as a US person started “using” that facility — the word use here is squishy, as someone talking to the target would not count as “using” it, but as incidental collection — then NSA would have to detask.

The 2014 certifications for the first time changed that.

The first revision to the NSA Targeting Procedures concerns who will be regarded as a “target” of acquisition or a “user” of a tasked facility for purposes of those procedures. As a general rule, and without exception under the NSA targeting procedures now in effect, any user of a tasked facility is regarded as a person targeted for acquisition. This approach has sometimes resulted in NSA’s becoming obligated to detask a selector when it learns that [redacted]

The relevant revision would permit continued acquisition for such a facility.

[snip]

For purposes of electronic surveillance conducted under 50 U.S.C. §§ 1804-1805, the “target” of the surveillance “is the individual or entity … about whom or from whom information is sought.” In re Sealed Case, 310 F.3d 717, 740 (FISA Ct. Rev. 2002) (quoting H.R. Rep. 95-1283, at 73 (1978)). As the FISC has previously observed, “[t]here is no reason to think that a different meaning should apply” under Section 702. September 4, 2008 Memorandum Opinion at 18 n.16. It is evident that the Section 702 collection on a particular facility does not seek information from or about [redacted].

In other words, for the first time in 2014, the FISC bought off on letting the NSA target “facilities” that were used by a target as well as possibly innocent Americans, based on the assumption that the NSA would weed out the Americans in the post-tasking process, and anyway, Hogan figured, the NSA was unlikely to read that US person data because that’s not what they were interested in anyway.

Mind you, in his opinion approving the practice, Hogan included a bunch of mostly redacted language pretending to narrow the application of this language.

This amended provision might be read literally to apply where [redacted]

But those circumstances fall outside the accepted rationale for this amendment. The provision should be understood to apply only where [redacted]

But Hogan appears to be policing this limiting language by relying on the “rationale” of the approval, not any legal distinction.

The description of this change to tasking also appears in a 3.5 page discussion as the first item in the tasking discussion in the government’s 2014 application, which Collyer would attach to her opinion.

Collyer’s opinion

Collyer’s opinion includes more of the provider’s arguments than the Reply did. It describes the Directives as involving “surveillance conducted on the servers of a U.S.-based provider” in which “the communications of U.S. person will be collected as part of such surveillance.” (29) It says [in Collyer’s words] that the provider “believes that the government will unreasonably intrude on the privacy interests of United States persons and persons in the United States [redacted] because the government will regularly acquire, store, and use their private communications and related information without a foreign intelligence or law enforcement justification.” (32-3) It notes that the provider argued there would be “a heightened risk of error” in tasking its customers. (12) The provider argued something about the targeting and minimization procedures “render[ed] the directives invalid as applied to its service.” (16) The provider also raised concerns that because the NSA “minimization procedures [] do not require the government to immediately delete such information[, they] do not adequately protect United States person.” (26)

All of which suggests the provider believed that significant US person data would be collected off their servers without any requirement the US person data get deleted right away. And something about this provider’s customers put them at heightened risk of such collection, beyond (for example) regular upstream surveillance, which was already public by the time of this challenge.

Collyer, too, says a few interesting things about the proposed surveillance. For example, she refers to a selector as an “electronic communications account” as distinct from an email — a rare public admission from the FISC that 702 targets things beyond just emails. And she treats these Directives as an “expansion of 702 acquisitions” to some new provider or technology. Finally, Collyer explains that “the 2014 Directives are identical, except for each directive referencing the particular certification under which the directive is issued.” This means that the provider received more than one Directive, and they fall under more than one certificate, which means that the collection is being used for more than one kind of use (counterterrorism, counterproliferation, and foreign government plus cyber). So the provider is used by some combination of terrorists, proliferators, spies, or hackers.

Ultimately, though, Collyer rejected the challenge, finding the targeting and minimization procedures to be adequate protection of the US person data collected via this new approach.

Now, it is not certain that all this relied on the new targeting procedure. Little in Collyer’s language reflects even passing familiarity with that new provision. Indeed, at one point she described the risk to US persons as the possibility that “the government may mistakenly task the wrong account,” which suggests a more individualized impact.

Except that after almost five pages of entirely redacted discussion of the provider’s claim that the targeting procedures are insufficient, Collyer argues that such issues don’t arise that frequently, and that even if they do, they’d be dealt with in post-targeting analysis.

The Court is not convinced that [redacted] under any of the above-described circumstances occurs frequently, or even on a regular basis. Assuming arguendo that such scenarios will nonetheless occur with regard to selectors tasked under the 2014 Directives, the targeting procedures address each of the scenarios by requiring NSA to conduct post-targeting analysis [redacted]

Similarly, Collyer dismissed the likelihood that Americans’ data would be tasked that often.

[O]ne would not expect a large number of communications acquired under such circumstances to involve United States person [citation to a redacted footnote omitted]. Moreover, a substantial proportion of the United States person communications acquired under such circumstances are likely to be of foreign intelligence value.

As she did in her recent shitty opinion, Collyer appears to have made these determinations without requiring NSA to provide real numbers on past frequency or likely future frequency.

However often such collection had happened in the past (which she didn’t ask the NSA to explain) or would happen as this new provider started responding to Directives, this language does sound like it implicates the new case of a selector used both by legitimate foreign intelligence targets and by innocent Americans.

Does the government use 702 collection to obtain VPN traffic?

As I noted, it seems likely, though not certain, that the new collection exploited the new permission to keep tasking a selector even if US persons were using it, in addition to the actual foreigners targeted. I’m still trying to puzzle this through, but I’m wondering if the provider was a VPN provider, being asked to hand over data as it passed through the VPN server. (I think the application approved in 2014 would implicate Tor traffic as well, but I can’t see how a Tor provider would challenge the Directives, unless it was Nick Merrill again; in any case, there’d be no discussion of an “account” with Tor in the way Collyer uses it).

What does this mean for upstream surveillance?

In any case, whether or not my guesstimates about what this is are correct, the description of the 2014 change and the discussion of the challenge would seem to raise very important questions given Collyer’s recent decision to expand the searching of upstream collection. While collection from a provider’s server is not upstream collection, it would seem to raise the same problem: the collection of a great deal of associated US person data that could later be brought up in a search. There’s no hint in any of the public opinions that such problems were considered.


When NSA Talks about Unintended Consequences, You Need to Ask a Follow-Up Question

In yesterday’s hearing on Section 702 reauthorization, Dianne Feinstein asked witnesses from DOJ, FBI, and NSA whether they opposed a statutory prohibition on “about” searches.

DOJ’s Stuart Evans falsely claimed that the FISC has found “about” collection to be legal; that’s not true given the assumption — which has proven out in practice — that NSA would do back door searches on the domestic communications that result. Indeed, both judges who considered whether collecting and searching MCTs including domestic communications was constitutional, John Bates and Rosemary Collyer, called it a Fourth Amendment problem.

But I’m more interested in NSA Deputy General Counsel for Operations Paul Morris’ answer.

Morris: NSA opposes a statutory change at this point because that would box us in and possibly have unintended consequences.

Feinstein: Are you saying you would oppose this?

Morris: Oppose, right, we don’t think it would be a good idea at this time.

Feinstein: Huh. Thank you. That answers my question.

When the NSA complains preemptively about being “boxed in” to prevent a practice the FISC has found constitutionally problematic, it ought to elicit a follow-up question. Why doesn’t the NSA want to be prohibited from an activity that is constitutionally suspect?

More importantly, especially given that “abouts” collection is currently not defined in a way that has any technical meaning, Feinstein should have followed up to ask what “unintended consequences” Morris worried about. Morris’ comment leads me to believe my suspicion — that the NSA continues to do things that have the same effect as “abouts” collection, even if they don’t reach into the “content” of emails, which are only a subset of the kinds of things collected via upstream surveillance — is correct. It seems likely that Morris wants to protect collection that would violate any meaningful technical description of “abouts.”

Which suggests the heralded “end” to “abouts” collection is no such thing; it’s just the termination of one kind of collection that sniffs into the content layers of packets.


Links to all posts on yesterday’s 702 hearing:

NSA talks about unintended consequences … no one asks what they might be

NSA argues waiting 4 years before dealing with systematic violations is not a lack of candor

FBI’s can only obtain raw feeds on selectors “relevant to” a full investigation

Everyone claims an FBI violation authorized by MOU aren’t willful 

Even amicus fans neglect to mention Rosemary Collyer violated USAF in not considering one


Confirmed: The FISA Court Is Less of a Rubber Stamp than Article III Courts

Although Rosemary Collyer’s recent 702 opinion has made me rethink my position, I’ve long argued that the FISA Court gets a bad rap when it is called a rubber stamp.

But today, for the first time, we can test that claim. Today is the first time we have had US court reports covering an entire year for both the FISC and the Article III courts — as close as we can get to comparing apples to apples.

The FISC report showed that that court denied in full 8 of 1,485 individual US-based applications, a rate of .5%, along with partially denying or modifying a significant number of others.

The Article III report showed that out of 3,170 requests, state and federal courts denied just 2.

A total of 3,168 wiretaps were reported as authorized in 2016, compared with 4,148 the previous year. Of those, 1,551 were authorized by federal judges, compared with 1,403 in 2015. A total of 1,617 wiretaps were authorized by state judges, compared with 2,745 in 2015. Two wiretap applications were reported as denied in 2016.

That’s a denial rate of .06%.
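The comparison is simple arithmetic; here is a minimal sketch, using the figures as stated in this post (8 of 1,485 FISC applications denied in full, versus 2 of 3,170 Article III wiretap applications):

```python
def denial_rate(denied: int, total: int) -> float:
    """Return the denial rate as a percentage."""
    return denied / total * 100

# Figures as reported above, not independently verified
fisc_rate = denial_rate(8, 1485)         # roughly 0.54%
article_iii_rate = denial_rate(2, 3170)  # roughly 0.06%

print(f"FISC denial rate:        {fisc_rate:.2f}%")
print(f"Article III denial rate: {article_iii_rate:.2f}%")
# The FISC denies applications roughly 9 times as often
print(f"Ratio: {fisc_rate / article_iii_rate:.1f}x")
```

In other words, on these numbers the FISC denies applications at roughly nine times the rate of the Article III courts, before even counting the applications it partially denies or modifies.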

And remember, just 336 or so of the FISA orders target Americans, whereas the majority of the Article III warrants would target Americans.

None of that diminishes the potential privacy implications of either kind of warrant. Indeed, the relative ease with which Article III courts grant warrants may invite — as the differential standards for location data already have — FBI to use criminal courts when a FISC order would be too hard to obtain.

But if people are worried about rubber stamp courts, they probably need to focus more closely on the magistrate courts in their backyard.

Update: Swapped Article for Title because I was being an idiot. Thanks to JT for nagging.

Update: We get complaints from one of everyone’s favorite magistrates, Stephen Smith.

Please remind your devoted readers that federal magistrate judges do not issue wiretaps. That fun task is reserved for the federal article III judges with lifetime appointments. We do issue all the other electronic surveillance orders and warrants, but unfortunately no stats are kept by anyone on our grants/denials/modifications. DOJ does keep track of pen/traps obtained, but of course the judge’s role on those is purely clerical–we don’t review the evidence, but merely check to see that the application is signed by the AUSA and in proper form. Some of us are working on the MJ warrant reporting issue, which is a pet peeve of mine. But I do not think it fair to tar all federal magistrate judges with the rubber stamp label, especially not based on the wiretap numbers with which we have nothing to do.

Corrected accordingly, and my apologies to the magistrates I’ve maligned.
