Deep Links

EFF's Deeplinks Blog: Noteworthy news from around the internet

California: No Face Recognition on Body-Worn Cameras

Mon, 06/10/2019 - 18:43

EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.

About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.

Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.

“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”

Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.

The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.

EFF, along with a coalition of civil liberties organizations including the ACLU, Advancing Justice - Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies, has joined forces in supporting this critical legislation.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.

Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU's polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.

Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”

We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.

Categories: Privacy

Five California Cities Are Trying to Kill an Important Location Privacy Bill

Mon, 06/10/2019 - 18:36

If you rely on shared bikes or scooters, your location privacy is at risk. Cities across the United States are currently pushing companies that operate shared mobility services like Jump, Lime, and Bird to share individual trip data for any and all trips taken within their boundaries, including where and when trips start and stop and granular details about the specific routes taken. This data is extremely sensitive, as it can be used to reidentify riders—particularly for habitual trips—and to track movements and patterns over time. While it is beneficial for cities to have access to aggregate data about shared mobility devices to ensure that they are deployed safely, efficiently, and equitably, cities should not be allowed to force operators to turn over sensitive, personally identifiable information about riders.

As these programs become more common, the California Legislature is considering a bill, A.B. 1112, that would ensure that local authorities receive only aggregated or non-identifiable trip data from shared mobility providers. EFF supports A.B. 1112, authored by Assemblymember Laura Friedman, which strikes the appropriate balance between protecting individual privacy and ensuring that local authorities have enough information to regulate our public streets so that they work for all Californians. The bill makes sure that local authorities will have the ability to impose deployment requirements in low-income areas to ensure equitable access, fleet caps to decrease congestion, and limits on device speed to ensure safety. And importantly, the bill clarifies that CalEPCA—California’s landmark electronic privacy law—applies to data generated by shared mobility devices, just as it would any other electronic devices.
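To make the distinction concrete, here is a minimal Python sketch, using invented zone names, device IDs, and field names, of the difference between the individual trip records cities currently demand and the kind of aggregated data A.B. 1112 would limit them to:

```python
from collections import Counter

# Individual trip records: each row can be tied back to one rider's movements.
raw_trips = [
    {"device_id": "scooter-481", "start_zone": "Mission", "end_zone": "SoMa", "start_hour": 8},
    {"device_id": "scooter-112", "start_zone": "Mission", "end_zone": "SoMa", "start_hour": 8},
    {"device_id": "scooter-481", "start_zone": "SoMa", "end_zone": "Mission", "start_hour": 18},
]

# Aggregated view: trip counts per origin zone and hour, with no device or
# route identifiers -- enough for fleet caps, equity requirements, and
# congestion planning, but useless for tracking any individual rider.
aggregate = Counter((t["start_zone"], t["start_hour"]) for t in raw_trips)

print(aggregate[("Mission", 8)])  # trips starting in the Mission at 8 am
print(aggregate[("SoMa", 18)])    # trips starting in SoMa at 6 pm
```

The aggregated counts answer the planning questions cities raise (where and when devices are used) without retaining the per-device rows that make reidentification possible.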

Five California cities, however, are opposing this privacy-protective legislation. At least four of these cities—Los Angeles, Santa Monica, San Francisco, and Oakland—have pilot programs underway that require shared mobility companies to turn over sensitive individual trip data as a condition to receiving a permit. Currently, any company that does not comply cannot operate in the city. The cities want continued access to individual trip data and argue that removing “customer identifiers” like names from this data should be enough to protect rider privacy.

The problem? Even with names stripped out, location information is notoriously easy to reidentify, particularly for habitual trips. This is especially true when location information is aggregated over time. And the data shows that riders are, in fact, using dockless mobility vehicles for their regular commutes. For example, as documented in Lime’s Year End Report for 2018, 40 percent of Lime riders reported commuting to or from work or school during their most recent trip. And remember, in the case of dockless scooters and bikes, these devices may be parked directly outside a rider’s home or work. If a rider used the same shared scooter or bike service every day to commute between their work and home, it’s not hard to imagine how easy it might be to reidentify them—even if their name was not explicitly connected to their trip data. Time-stamped geolocation data could also reveal trips to medical specialists, specific places of worship, and particular neighborhoods or bars. Patterns in the data could reveal social relationships, and potentially even extramarital affairs, as well as personal habits, such as when people typically leave the house in the morning, go to the gym or run errands, how often they go out on evenings and weekends, and where they like to go.
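A minimal Python sketch, using entirely invented coordinates and dates, of why stripping names does so little for habitual trips: a start/end pair that recurs day after day is effectively a fingerprint for one rider's commute, and the recurring start point is likely their home.

```python
from collections import defaultdict

# Nameless trip logs over several days: (date, start point, end point).
# All locations and dates here are invented for illustration.
trips = [
    ("2019-06-03", (37.7599, -122.4148), (37.7793, -122.4193)),
    ("2019-06-04", (37.7599, -122.4148), (37.7793, -122.4193)),
    ("2019-06-05", (37.7599, -122.4148), (37.7793, -122.4193)),
    ("2019-06-04", (37.8044, -122.2712), (37.8716, -122.2727)),  # one-off trip
]

# Group trips by (start, end) pair. No names are needed: a pair that
# repeats across many days is almost certainly a single person's commute.
habitual = defaultdict(list)
for day, start, end in trips:
    habitual[(start, end)].append(day)

commutes = {pair: days for pair, days in habitual.items() if len(days) >= 3}
print(len(commutes))  # 1
```

Three repetitions are already enough to isolate one rider's likely home and workplace from the "anonymized" data, which is exactly the pattern the Lime commuting statistics suggest is common.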

The cities claim that they will institute “technical safeguards” and “business processes” to prohibit reidentification of individual consumers, but so long as the cities have the individual trip data, reidentification will be possible—by city transportation agencies, law enforcement, ICE, or any other third parties that receive data from cities.

The cities’ promises to keep the data confidential and make sure the records are exempt from disclosure under public records laws also fall flat. One big issue is that the cities have not outlined and limited the specific purposes for which they plan to use the geolocation data they are demanding. They also have not delineated how they will minimize their collection of personal information (including trip data) to data necessary to achieve those objectives. This violates both the letter and the spirit of the California Constitution’s right to privacy, which explicitly lists privacy as an inalienable right of all people and, in the words of the California Supreme Court, “prevents government and business interests from collecting and stockpiling unnecessary information about us” or “misusing information gathered for one purpose in order to serve other purposes[.]”

The biggest mistake local jurisdictions could make would be to collect data first and think about what to do with it later—after consumers’ privacy has been put at risk. That’s unfortunately what cities are doing now, and A.B. 1112 will put a stop to it.

The time is ripe for thoughtful state regulation reining in local demands for individual trip data. As we’ve told the California legislature, bike- and scooter-sharing services are proliferating in cities across the United States, and local authorities should have the right to regulate their use. But those efforts should not come at the cost of riders’ privacy.

We urge the California legislature to pass A.B. 1112 and protect the privacy of all Californians who rely on shared mobility devices for their transportation needs. And we urge cities in California and across the United States to start respecting the privacy of riders. Cities should start working with regulators and the public to strike the right balance between their need to obtain data for city planning purposes and the need to protect individual privacy—and they should stop working to undermine rider privacy.

Categories: Privacy

EFF and Open Rights Group Defend the Right to Publish Open Source Software to the UK Government

Mon, 06/10/2019 - 15:57

EFF and Open Rights Group today submitted formal comments to the British Treasury, urging restraint in applying anti-money-laundering regulations to the publication of open-source software.

The UK government sought public feedback on proposals to update its financial regulations pertaining to money laundering and terrorism in alignment with a larger European directive. The consultation asked for feedback on applying onerous customer due diligence regulations to the cryptocurrency space as well as what approach the government should take in addressing “privacy coins” like Zcash and Monero. Most worrisome, the government also asked “whether the publication of open-source software should be subject to [customer due diligence] requirements.”

We’ve seen these kinds of attacks on the publication of open source software before, in fights dating back to the 90s, when the Clinton administration attempted to require that anyone merely publishing cryptography source code obtain a government-issued license as an arms dealer. Attempting to force today’s open-source software publishers to follow financial regulations designed to go after those engaged in money laundering is equally obtuse.

In our comments, we describe the breadth of free, libre, and open source software (FLOSS) that benefits the world today across industries and government institutions. We discuss how these regulatory proposals could have large and unpredictable consequences not only for the emerging technology of the blockchain ecosystem, but also for the FLOSS software ecosystem at large. As we stated in our comments:

If the UK government was to determine that open source software publication should be regulated under money-laundering regulations, it would be unclear how this would be enforced, or how the limits of those falling under the regulation would be determined. Software that could, in theory, enable cryptocurrency transactions could be modified before release to remove these features. Software that lacked this capability could be quickly adapted to provide it. The core cryptographic algorithms that underlie various blockchain implementations, smart contract construction and execution, and secure communications are publicly known and relatively trivial to express and implement. They are published, examined and improved by academics, enthusiasts, and professionals alike…

The level of uncertainty this would provide to FLOSS use and provision within the United Kingdom would be considerable. Such regulations would burden multiple industries to attempt to guarantee that their software could not be considered part of the infrastructure of a cryptographic money-laundering scheme.

Moreover, source code is a form of written creative expression, and open source code is a form of public discourse. Regulating its publication under anti-money-laundering provisions fails to honor the free expression rights of software creators in the United Kingdom, and their collaborators and users in the rest of the world.

Source code is a form of written creative expression, and open source code is a form of public discourse.

EFF is monitoring the regulatory and legislative reactions to new blockchain technologies, and we’ve recently spoken out about misguided ideas for banning cryptocurrencies and overbroad regulatory responses to decentralized exchanges. Increasingly, the regulatory backlash against cryptocurrencies is being tied to overbroad proposals that would censor the publication of open-source software, and restrict researchers’ ability to investigate, critique and communicate about the opportunities and risks of cryptocurrency.

This issue transcends controversies surrounding blockchain tech and could have significant implications for technological innovation, academic research, and freedom of expression. We’ll continue to watch the proceedings with HM Treasury, but fear similar anti-FLOSS proposals could emerge—particularly as other member states of the European Union transpose the same Anti-Money Laundering Directive into their own laws.

Read our full comments.

Thanks to Marta Belcher, who assisted with the comments. 

Categories: Privacy

Hearing Tuesday: EFF Will Voice Support For California Bill Reining In Law Enforcement Use of Facial Recognition

Mon, 06/10/2019 - 15:20
Assembly Bill 1215 Would Bar Police From Adding Facial Scanning to Body-Worn Cameras

Sacramento, California—On Tuesday, June 11, at 8:30 am, EFF Grassroots Advocacy Organizer Nathan Sheard will testify before the California Senate Public Safety Committee in support of a measure to prohibit law enforcement from using facial recognition in body cams.

Following San Francisco’s historic ban on police use of the technology—which can invade privacy, chill free speech and disproportionately harm already marginalized communities—California lawmakers are considering AB 1215, proposed legislation that would extend the ban across the state.

Face recognition technology has been shown to have disproportionately high error rates for women, the elderly, and people of color. Making matters worse, law enforcement agencies often rely on images pulled from mugshot databases. This exacerbates historical biases born of, and contributing to, over-policing in Black and Latinx neighborhoods. The San Francisco Board of Supervisors and other Bay Area communities have decided that police should be stopped from using the technology on the public.

The use of face recognition technology in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, and having their images collected, analyzed, and stored as perpetual candidates for suspicion, Sheard will tell lawmakers.

Hearing before the California Senate Public Safety Committee on A.B. 1215

EFF Grassroots Advocacy Organizer Nathan Sheard

Tuesday, June 11, 8:30 am

California State Capitol
10th and L Streets
Room 3191
Sacramento, CA  95814

Contact: Nathan 'nash' Sheard, Grassroots Advocacy Organizer
Categories: Privacy

Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today's Monopolies

Fri, 06/07/2019 - 14:24

Today, Apple is one of the largest, most profitable companies on Earth, but in the early 2000s, the company was fighting for its life. Microsoft's Windows operating system was ascendant, and Microsoft leveraged its dominance to ensure that every Windows user relied on its Microsoft Office suite (Word, Excel, PowerPoint, etc.). Apple users—a small minority of computer users—who wanted to exchange documents with the much larger world of Windows users were dependent on Microsoft's Office for Macintosh (which worked inconsistently with Windows Office documents, with unexpected behaviors like corrupting documents so they were no longer readable, or partially/incorrectly displaying parts of exchanged documents). Alternatively, Apple users could ask Windows users to export their Office documents to an "interoperable" file format like Rich Text Format (for text), or Comma-Separated Values (for spreadsheets). These, too, were inconsistent and error-prone, interpreted in different ways by different programs on both Mac and Windows systems.

Apple could have begged Microsoft to improve its Macintosh offerings, or they could have begged the company to standardize its flagship products at a standards body like OASIS or ISO. But Microsoft had little motive to do such a thing: its Office products were a tremendous competitive advantage, and despite the fact that Apple was too small to be a real threat, Microsoft had a well-deserved reputation for going to enormous lengths to snuff out potential competitors, including both Macintosh computers and computers running the GNU/Linux operating system.

Apple did not rely on Microsoft's goodwill and generosity: instead, it relied on reverse-engineering. After its 2002 "Switch" ad campaign—which begged potential Apple customers to ignore the "myths" about how hard it was to integrate Macs into Windows workflows—it intensified work on its iWork productivity suite, which launched in 2005, incorporating a word-processor (Pages), a spreadsheet (Numbers) and a presentation program (Keynote). These were feature-rich applications in their own right, with many innovations that leapfrogged the incumbent Microsoft tools, but this superiority would still not have been sufficient to ensure the adoption of iWork, because the world's greatest spreadsheets are of no use if everyone you need to work with can't open them.

What made iWork a success—and helped re-launch Apple—was the fact that Pages could open and save most Word files; Numbers could open and save most Excel files; and Keynote could open and save most PowerPoint presentations. Apple did not attain this compatibility through Microsoft's cooperation: it attained it despite Microsoft's noncooperation. Apple didn't just make an "interoperable" product that worked with an existing product in the market: they made an adversarially interoperable product whose compatibility was wrested from the incumbent, through diligent reverse-engineering and reimplementation. What's more, Apple committed to maintaining that interoperability, even though Microsoft continued to update its products in ways that temporarily undermined the ability of Apple customers to exchange documents with Microsoft customers, paying engineers to unbreak everything that Microsoft's maneuvers broke. Apple's persistence paid off: over time, Microsoft's customers became dependent on compatibility with Apple customers, and they would complain if Microsoft changed its Office products in ways that broke their cross-platform workflow.

Since Pages' launch, document interoperability has stabilized, with multiple parties entering the market, including Google's cloud-based Docs offerings and the free/open alternatives from LibreOffice. The convergence on this standard was not undertaken with the blessing of the dominant player: rather, it came about despite Microsoft's opposition. These suites are not just interoperable, they're adversarially interoperable: each has its own file format, but each can read Microsoft's file format.

The document wars are just one of many key junctures at which adversarial interoperability made a dominant player vulnerable to new entrants.

Scratch the surface of most Big Tech giants and you'll find an adversarial interoperability story: Facebook grew by making a tool that let its users stay in touch with MySpace users; Google products from search to Docs and beyond depend on adversarial interoperability layers; Amazon's cloud is full of virtual machines pretending to be discrete CPUs, impersonating real computers so well that the programs running within them have no idea that they're trapped in the Matrix.

Adversarial interoperability converts market dominance from an unassailable asset to a liability. Once Facebook could give new users the ability to stay in touch with MySpace friends, then every message those Facebook users sent back to MySpace—with a footer advertising Facebook's superiority—became a recruiting tool for more Facebook users. MySpace served Facebook as a reservoir of conveniently organized potential users that could be easily reached with a compelling pitch about why they should switch.

Today, Facebook is posting 30-54% year-on-year revenue growth and boasts 2.3 billion users, many of whom are deeply unhappy with the service, but who are stuck within its confines because their friends are there (and vice-versa).

A company making billions and growing by double-digits with 2.3 billion unhappy customers should be every investor's white whale, but instead, Facebook and its associated businesses are known as "the kill zone" in investment circles.

Facebook's advantage is in "network effects": the idea that Facebook increases in value with every user who joins it (because more users increase the likelihood that the person you're looking for is on Facebook). But adversarial interoperability could allow new market entrants to arrogate those network effects to themselves, by allowing their users to remain in contact with Facebook friends even after they've left Facebook.

This kind of adversarial interoperability goes beyond the sort of thing envisioned by "data portability," which usually refers to tools that allow users to make a one-off export of all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you're in the process of migrating away from.

Big Tech platforms leverage both their users' behavioral data and the ability to lock their users into "walled gardens" to drive incredible growth and profits. The customers for these systems are treated as though they have entered into a negotiated contract with the companies, trading privacy for service, or vendor lock-in for some kind of subsidy or convenience. And when Big Tech lobbies against privacy regulations and anti-walled-garden measures like Right to Repair legislation, they say that their customers negotiated a deal in which they surrendered their personal information to be plundered and sold, or their freedom to buy service and parts on the open market.

But it's obvious that no such negotiation has taken place. Your browser invisibly and silently hemorrhages your personal information as you move about the web; you paid for your phone or printer and should have the right to decide whose ink or apps go into them.

Adversarial interoperability is the consumer's bargaining chip in these coercive "negotiations." More than a quarter of Internet users have installed ad-blockers, making it the biggest consumer revolt in human history. These users are making counteroffers: the platforms say, "We want all of your data in exchange for this service," and their users say, "How about none?" Now we have a negotiation!

Or think of the iPhone owners who patronize independent service centers instead of using Apple's service: Apple's opening bid is "You only ever get your stuff fixed from us, at a price we set," and the owners of Apple devices say, "Hard pass." Now it's up to Apple to make a counteroffer. We'll know it's a fair one if iPhone owners decide to patronize Apple's service centers.

This is what a competitive market looks like. In the absence of competitive offerings from rival firms, consumers make counteroffers by other means.

There is good reason to want to see a reinvigorated approach to competition in America, but it's important to remember that competition is enabled or constrained not just by mergers and acquisitions. Companies can use a whole package of laws to attain and maintain dominance, to the detriment of the public interest.

Today, consumers and toolsmiths confront a thicket of laws and rules that stand between them and technological self-determination. To change that, we need to reform the Computer Fraud and Abuse Act, Section 1201 of the Digital Millennium Copyright Act, patent law, and other rules and laws. Adversarial interoperability is in the history of every tech giant that rules today, and if it was good enough for them in the past, it's good enough for the companies that will topple them in the future.

Categories: Privacy

Same Problem, Different Day: Government Accountability Office Updates Its Review of FBI’s Use of Face Recognition—and It’s Still Terrible

Thu, 06/06/2019 - 18:33

This week the federal Government Accountability Office (GAO) issued an update to its 2016 report on the FBI’s use of face recognition. The takeaway, which GAO also shared during a House Oversight Committee hearing: the FBI now has access to 641 million photos—including driver’s license and ID photos—but it still refuses to assess the accuracy of its systems.

According to the latest GAO Report, FBI’s Facial Analysis, Comparison, and Evaluation (FACE) Services unit not only has access to FBI’s Next Generation Identification (NGI) face recognition database of nearly 30 million civil and criminal mug shot photos, it also has access to the State Department’s Visa and Passport databases, the Defense Department’s biometric database, and the driver’s license databases of at least 21 states. Totaling 641 million images—an increase of 230 million images since GAO’s 2016 report—this is an unprecedented number of photographs, most of which are of Americans and foreigners who have committed no crimes.

The FBI Still Hasn’t Properly Tested the Accuracy of Its Internal or External Searches

Although GAO criticized FBI in 2016 for failing to conduct accuracy assessments of either its internal NGI database or the searches it conducts on its state and federal partners’ databases, the FBI has done little in the last three years to make sure that its search results are accurate, according to the new report. As of 2016, the FBI had conducted only very limited testing to assess the accuracy of NGI's face recognition capabilities. These tests only assessed the ability of the system to detect a match—not whether that detection was accurate, and as GAO notes, “reporting a detection rate of 86 percent without reporting the accompanying false positive rate presents an incomplete view of the system’s accuracy.”
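GAO's point can be made concrete with some back-of-the-envelope arithmetic. In the sketch below, the detection rate and the approximate database size come from the report; the false positive rate is purely an assumed figure for illustration, since the FBI reports none:

```python
# Why a detection rate alone is an incomplete view of accuracy.
database_size = 30_000_000   # roughly the size of NGI's face recognition database
detection_rate = 0.86        # the rate the FBI's limited testing reported
false_positive_rate = 1e-6   # ASSUMED for illustration only; the FBI reports none

# For a single search where the suspect really is in the database:
expected_true_hits = detection_rate                        # chance the true match surfaces
expected_false_hits = false_positive_rate * database_size  # innocent people flagged

print(f"Expected true matches per search:  {expected_true_hits:.2f}")
print(f"Expected false matches per search: {expected_false_hits:.0f}")
```

Even under this charitable one-in-a-million assumption, each search would surface dozens of innocent people alongside (at best) one true match. Without the false positive rate, there is no way to know how long that list of wrongly flagged people actually is, which is exactly why GAO calls the 86 percent figure incomplete.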

As we know from previous research, face recognition is notoriously inaccurate across the board and may also misidentify African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. By failing to assess the accuracy of its internal systems, GAO writes—and we agree—the FBI is also failing to ensure it is “sufficiently protecting the privacy and civil liberties of U.S. citizens enrolled in the database.” This is especially concerning given that, according to the FBI, it has run 152,500 searches between fiscal year 2017 and April 2019—since the original report came out.

The FBI also has not taken any steps to determine whether the face recognition systems of its external partners—states and other federal agencies—are sufficiently accurate to prevent innocent people from being identified as criminal suspects. These databases, which are accessible to the FACE Services unit, are mostly made up of images taken for identification, certification, or other non-criminal purposes. Extending their use to FBI investigations exacerbates concerns of accuracy, not least because, as GAO notes, the “FBI’s accuracy requirements for criminal investigative purposes may be different than a state’s accuracy requirements for preventing driver’s license fraud.” The FBI claims that it has no authority to set or enforce accuracy standards outside the agency. GAO disagrees: because the FBI is using these outside databases as a component of its routine operations, it is responsible for ensuring the systems are accurate, and given the lack of testing, it is unclear “whether photos of innocent people are unnecessarily included as investigative leads.”

As the report points out, most of the 641 million face images to which the FBI has access—like driver’s license and passport and visa photos—were never collected for criminal or national security purposes. And yet, under agreements and “Memorandums of Understanding” we’ve never seen between the FBI and its state and federal partners, the FBI may search these civil photos whenever it’s trying to find a suspect in a crime. Access to 21 states’ driver’s license databases accounts for many of those images, and 10 more states are in negotiations with the FBI to provide similar access to theirs.

Images from the states’ databases aren’t only available through external searches. The states have also been very involved in the development of the FBI’s own NGI database, which includes nearly 30 million of the 641 million face images accessible to the Bureau (we’ve written extensively about NGI in the past). As of 2016, NGI included more than 20 million civil and criminal images received directly from at least six states, including California, Louisiana, Michigan, New York, Texas, and Virginia. And it’s not a one-way street: it appears that five additional states—Florida, Maryland, Maine, New Mexico, and Arkansas—could send their own search requests directly to the NGI database. As of December 2015, the FBI was working with eight more states to grant them access to NGI, and an additional 24 states were also interested.

New Report, Same Criticisms

The original GAO report heavily criticized the FBI for rolling out these massive face recognition capabilities without ever explaining the privacy implications of its actions to the public, and the current report reiterates those criticisms. Federal law and Department of Justice policies require the FBI to complete a Privacy Impact Assessment (PIA) of all programs that collect data on Americans, both at the beginning of development and any time there’s a significant change to the program. While the FBI produced a PIA in 2008, when it first started planning out the face recognition component of NGI, it didn’t update that PIA until late 2015—seven years later and well after it began making the changes. It also failed to produce a PIA for the FACE Services unit until May 2015—three years after FACE began supporting FBI with face recognition searches.

Federal law and regulations also require agencies to publish a “System of Records Notice” (SORN) in the Federal Register, which announces any new federal system designed to collect and use information on Americans. SORNs are important to inform the public of the existence of systems of records; the kinds of information maintained; the kinds of individuals on whom information is maintained; the purposes for which they are used; and how individuals can exercise their rights under the Privacy Act. Although agencies are required to do this before they start operating their systems, the FBI failed to issue one until May 2016—five years after it started collecting personal information on Americans. As GAO noted, the whole point of PIAs and SORNs is to give the public notice of the privacy implications of data collection programs and to ensure that privacy protections are built into systems from the start. The FBI failed at this.

This latest GAO report couldn’t come at a more important time. There is a growing mountain of evidence that face recognition used by law enforcement is dangerously inaccurate, from our white paper, “Face Off,” to two Georgetown studies released just last month, which show that law enforcement agencies in some cities are implementing real-time face recognition systems while others are running the systems on flawed data.

Two years ago, EFF testified before the House Oversight Committee on the subject, pointing out the FBI's efforts to build up and link together these massive face recognition databases that may be used to track innocent people as they go about their daily lives. The committee held two more hearings in the last month, which saw bipartisan agreement on the need to rein in law enforcement’s use of this technology, and during which GAO raised many of the issues covered in this report. At least one more hearing is planned. As the committee continues to assess law enforcement use of face recognition databases, and as more and more cities work to incorporate flawed and untested face recognition technology into their police and government-maintained cameras, we need all the information we can get on how law enforcement agencies like the FBI use face recognition now and how they plan to use it in the future. Armed with that knowledge, we can push cities, states, and possibly even the federal government to pass moratoria or bans on its use.

Categories: Privacy

30 Years Since Tiananmen Square: The State of Chinese Censorship and Digital Surveillance

Tue, 06/04/2019 - 18:29

Thirty years ago today, the Chinese Communist Party used military force to suppress a peaceful pro-democracy demonstration by thousands of university students. Hundreds (some estimates go as high as thousands) of innocent protesters were killed. Every year, people around the world come together to mourn and commemorate the fallen; within China, however, things are oddly silent.

The Tiananmen Square protest is one of the most tightly censored topics in China. The Chinese government’s network and social media censorship is more than just pervasive; it’s sloppy, overbroad, inaccurate, and always errs on the side of more takedowns. Every year, the Chinese government ramps up VPN shutdowns, activist arrests, digital surveillance, and social media censorship in anticipation of the anniversary of the Tiananmen Square protests. This year is no different: to mark the thirtieth anniversary, the controls are tighter than ever.

Keyword filtering on social media and messaging platforms

It’s a fact of life for many Chinese that social media and messaging platforms perform silent content takedowns via regular keyword filtering and more recently, image matching. In June 2013, Citizen Lab documented a list of words censored from social media related to the anniversary of the protests, which included words like “today” and “tomorrow.”
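As a toy illustration of why such blocklists are overbroad, naive substring matching against even a short word list sweeps in completely unrelated posts. The blocklist entries and example posts below are hypothetical, not real platform data (though “today” really did appear on the documented lists):

```python
# Toy sketch of naive keyword filtering of the kind Citizen Lab documented.
# Blocklist entries and posts are hypothetical illustrations.

BLOCKLIST = {"tiananmen", "june 4", "today"}  # note the overbroad entry

def is_censored(post: str) -> bool:
    """Return True if any blocklisted term appears anywhere in the post."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "remembering tiananmen",
    "what should we eat today?",   # unrelated chatter, still blocked
    "nice weather this morning",
]
censored = [p for p in posts if is_censored(p)]  # first two posts match
```

Because the filter errs on the side of takedowns, everyday conversation gets silenced along with the targeted topic, which is exactly the pattern the Citizen Lab research documented.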

Since then, researchers at the University of Hong Kong have developed real-time censorship monitoring and transparency projects—“WeiboScope” and “WechatScope”—to document the scope and history of censorship on Weibo and Wechat. A couple of months ago, Dr. Fu King-wa, who works on these transparency projects, released an archive of over 1200 censored Weibo image posts relating to the Tiananmen anniversary since 2012. Net Alert has released a similar archive of historically censored images.

Simultaneous service disruptions for “system maintenance” across social media platforms

This year, there has been a sweep of simultaneous social media shutdowns a week prior to the anniversary, calling back to similar “Internet maintenance” shutdowns during the twentieth anniversary of the Tiananmen Square protests. Five popular video and livestreaming platforms are suspending all comments until June 6th, citing the need for “system upgrades and maintenance.” Douban, a Chinese social networking service, is locking some of its larger news groups against any discussion until June 29th, also for “system maintenance.” And the popular messaging service WeChat recently blocked users from changing their status messages, profile pictures, and nicknames for the same reason.

Apple censors music and applications alike

Since 2017, Apple has removed VPNs from its mainland Chinese app store, and these application bans have continued and worsened over time. A censorship transparency project by GreatFire allows users to look up which applications are available in the US but not in China. Apart from VPNs, the Chinese Apple app store has also censored applications from news organizations, including the New York Times, Radio Free Asia, Tibetan News, Voice of Tibet, and other Chinese-language human rights publications. Apple has also taken down other censorship circumvention tools, like Tor and Psiphon.

Leading up to this year’s 30-year Tiananmen anniversary, Apple Music has been removing songs from its Chinese streaming service. A 1990 song by Hong Kong’s Jacky Cheung that references Tiananmen Square was removed, as were songs by pro-democracy activists from Hong Kong’s Umbrella Movement protest.

Activist accounts caught in Twitter sweep

On May 31st, a slew of China-related Twitter accounts were suspended, including those of prominent activists, human rights lawyers, journalists, and other dissidents. Activists feared this action was in preparation for further June 4th-related censorship. Since then, some of the more prominent accounts have been restored, but many remain suspended. An announcement from Twitter claimed that these accounts weren’t reported by Chinese authorities, but were just caught up in a large anti-spam sweep.

The lack of transparency, poor timing, and huge number of false positives on Twitter’s part have led to real fear and uncertainty in Chinese-language activism circles.

Beyond Tiananmen Square: Chinese Censorship and Surveillance in 2019

Xinjiang, China’s ground zero for pervasive surveillance and social control

Thanks to work by Human Rights Watch, security researchers, and many brave investigators and journalists, a lot has come to light about China’s terrifying acceleration of social and digital controls in Xinjiang over the past two years. And the chilling effect is real: as we approach the end of Ramadan, whose observance is discouraged generally and banned outright for Party members and public school students, mosques remain empty. Uighur students and other expatriates abroad fear returning home, as many of their families have already been detained without cause.

China’s extensive reliance on surveillance technology in Xinjiang is a human rights nightmare and, according to the New York Times, “the first known example of a government intentionally using artificial intelligence for racial profiling.” Researchers have noticed that more and more computer vision papers coming out of China describe systems specifically trained to perform facial recognition on Uighurs.

China has long been a master of security theater, overstating and over-performing its own surveillance capabilities in order to spread a “chilling effect” over digital and social behavior. Something similar is happening here, albeit at a far larger scale than we’ve ever seen before. Despite the government’s claims of fully automated and efficient systems, even the best facial recognition systems it uses are accurate in less than 20 percent of cases, producing mistakes that require hundreds of workers to monitor cameras and confirm the results. These smoke-and-mirrors “pseudo-AI” systems are common in the AI startup industry. For a lot of “automated” technologies, we just aren’t quite there yet.

Resource or technical limitations aren’t going to stop the Chinese government. Security spending since 2017 shows that Chinese officials are serious about building a panopticon, no matter the cost. The development of the surveillance apparatus in Xinjiang shows us just how expensive pervasive surveillance can be; local governments in Xinjiang have accrued hundreds of millions of dollars of “invisible debt” as they continue to ramp up investment in their surveillance state. A large portion of that cost is labor. “We risk understating the extent to which this high-tech police state continues to require a lot of manpower,” says Adrian Zenz in the New York Times.

Client-side blocking of labor movements on Github

996 is a recent labor movement among white-collar Chinese tech workers who demand regular 40-hour work weeks and an explicit ban on the draconian but standard “996” schedule: 9 am to 9 pm, six days a week. The movement, like other labor-organizing movements, has been subject to keyword censorship on social media platforms, but individuals have been able to continue organizing on Github.

Github itself has remained relatively immune to Chinese censorship efforts. Thanks to the widespread deployment of HTTPS, Chinese network operators must either block the entire website or nothing at all. Github was briefly blocked in 2013, but the backlash from developers was too great, and the site was unblocked shortly thereafter. China’s tech sector, like the rest of the world’s, relies on open-source projects hosted on the site. But although Github is no longer censored at the network level, Chinese-built browsers and Wechat’s web viewer have started blacklisting specific URLs, including the 996 Github repository.
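The all-or-nothing constraint follows from what an on-path observer can actually see. A rough sketch (a hypothetical model, not a packet inspector): over plain HTTP the full URL is visible, so one repository can be blocked selectively, but over HTTPS only the hostname leaks (via DNS lookups and the TLS SNI field), so a network censor must block all of github.com or none of it.

```python
# Toy model of what an on-path network censor can key a block on.
# Over HTTPS, the URL path and query are encrypted; only the hostname
# is observable. This is an illustrative sketch, not real DPI code.

from urllib.parse import urlparse

def censor_view(url: str) -> str:
    """Return the part of the URL an on-path observer can see."""
    parts = urlparse(url)
    if parts.scheme == "https":
        return parts.hostname              # path and query are encrypted
    return parts.hostname + parts.path     # plaintext HTTP exposes the path

# Over HTTPS, the 996 repository looks like any other GitHub page:
assert censor_view("https://github.com/996icu/996.ICU") == "github.com"
```

Client-side blacklisting by domestic browsers and WeChat’s web viewer sidesteps this limit entirely: software running on the user’s own device sees the full URL after decryption, so it can block a single repository without touching the rest of the site.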

Google’s sleeping Dragonfly

Late last year, we stood in solidarity with over 70 human rights groups led by Human Rights Watch and Amnesty International, calling on Google to end its secret internal project to architect a censored Chinese search engine, codenamed Dragonfly. Google employees wrote their own letter protesting the project and demanding transparency at the very least; some resigned in protest.

In March, some Google employees found that changes were still being committed to the Dragonfly codebase. Google has yet to publicly commit to ending the project, leading many to believe the project could just be on the back burner for now.

How are people fighting back?

Relatively little news gets out of Xinjiang to the rest of the world, and China wants to keep it that way: journalists are denied visas, their relatives are detained, and reporters on the ground are arrested. Any work by groups that help shed light on the situation is extremely valuable. Earlier this year, we wrote about the amazing work by Human Rights Watch, Amnesty International, other human rights groups, and independent researchers and journalists in uncovering the inner workings of China’s surveillance state.

Censorship transparency projects like WechatScope, WeiboScope, Tor’s OONI, and GreatFire's AppleCensorship, as well as ongoing censorship research by academic centers like The Citizen Lab and organizations like GreatFire continue to shed light on the methods and intentions of broader Chinese censorship efforts.

And of course, we have to look at the individuals and activists within and outside China who continue to fight to have their voices heard. Despite continued crackdowns on VPNs, VPN usage among Chinese web users continues to rise. In the first quarter of 2019, 35% of web users used VPNs, not just to access better music and TV shows, but also, commonly, to reach blocked social networks and news sites.

Human rights groups, security researchers, investigators, journalists, and activists on the ground continue to make tremendous sacrifices in fighting for a more free China.

Categories: Privacy

EFF Tells Congress: Don’t Throw Out Good Patent Laws

Tue, 06/04/2019 - 15:42

At a Senate hearing today, EFF Staff Attorney Alex Moss gave formal testimony [PDF] about how to make sure our patent laws promote innovation, not abusive litigation.

Moss described how Section 101 of the U.S. patent laws serves a crucial role in protecting the public. She urged the Senate IP Subcommittee, which is considering radical changes to Section 101, to preserve the law to protect users, developers, and small businesses.

Since the Supreme Court’s decision in Alice v. CLS Bank, courts have been empowered to quickly dismiss lawsuits based on abstract patents. That has allowed many small businesses to fight back against meritless patent demands, which are often brought by "patent assertion entities," also known as patent trolls.

At EFF, we often hear from businesses or individuals that are being harassed or threatened by ridiculous patents. Moss told the Senate IP Subcommittee the story of Ruth Taylor, who was sued for infringement over a patent that claimed the idea of holding a contest with an audience voting for the winner but simply added generic computer language. The patent owner wanted Ruth to pay $50,000. Because of today’s Section 101, EFF was able to help Ruth pro bono, and ask the court to dismiss the case under Alice. The patent owner dropped the lawsuit days before the hearing.

We hope the Senate takes our testimony to heart and reconsiders the proposal by Senators Thom Tillis and Chris Coons, which would dismantle Section 101 as we know it. That would mean a free-for-all for patent trolls, and huge costs and headaches for those who actually work in technology.

We need your help. Contact your representatives in Congress today, and tell them to reject the Tillis-Coons patent proposal.



Categories: Privacy

Hearing Today: EFF Staff Attorney Alex Moss Will Testify About Proposed Changes to Patent Law That Will Benefit Trolls, Foster Bad Patents

Tue, 06/04/2019 - 09:41
Tillis-Coons Section 101 “Framework” Will Make Patent System Worse for Small Businesses, Consumers

Washington D.C.—EFF Staff Attorney Alex Moss will tell U.S. lawmakers today that proposed changes to Section 101 of the U.S. Patent Act—the section that defines, and limits, what can get a patent—will upend years of case law ensuring that only true inventions, not basic practices or rudimentary ideas, can be patented. Moss is among a panel of patent experts testifying today before the Senate Subcommittee on Intellectual Property about the state of patent eligibility in America.

The Supreme Court ruled in Alice v. CLS Bank that an abstract idea does not become eligible for a patent simply by being implemented on a generic computer. For example, before Alice, a patent on the basic practice of letting people access content in exchange for watching an online ad was upheld in court. EFF’s “Saved by Alice” project has collected stories about small businesses that were helped, or even saved, by the Supreme Court’s Alice decision.

A proposal by Senators Thom Tillis and Chris Coons, chairman and ranking member of the subcommittee, would rewrite Section 101 of the Patent Act. The proposal is aimed squarely at killing the Alice decision. It will primarily benefit companies that aggressively license and litigate patents, as well as patent trolls—entities that produce no products, but make money by threatening developers and companies, often with vague software patents.

Section 101, as it stands, prevents monopolies on basic research tools that nobody could have invented. That protects developers, start-ups, and makers of all kinds, especially in software-based fields, Moss will tell senators.

Hearing before Senate Subcommittee on Intellectual Property: The State of Patent Eligibility in America, Part I

EFF Staff Attorney Alex Moss

Today at 2:30 pm

Dirksen Senate Office Building 226
50 Constitution Ave NE
Washington D.C. 20002

For more on Alice v. CLS Bank:

Contact: Alex Moss, Mark Cuban Chair to Eliminate Stupid Patents and Staff Attorney
Categories: Privacy

Caught in the Net: The Impact of ‘Extremist’ Speech Regulations on Human Rights Content

Mon, 06/03/2019 - 15:33
New Report from EFF, Syrian Archive, and WITNESS Examines Content Moderation and the Christchurch Call to Action

San Francisco – Social media companies have long struggled with what to do about extremist content that advocates for or celebrates terrorism and violence. But the dominant current approach, which features overbroad and vague policies and practices for removing content, is already decimating human rights content online, according to a new report from Electronic Frontier Foundation (EFF), Syrian Archive, and WITNESS. The report confirms that the reality of faulty content moderation must be addressed in ongoing efforts to address extremist content.

The pressure on platforms like Facebook, Twitter, and YouTube to moderate extremist content only increased after the mosque shootings in Christchurch, New Zealand earlier this year. In the wake of the Christchurch Call to Action Summit held last month, EFF teamed up with Syrian Archive and WITNESS to show how faulty moderation inadvertently captures and censors vital content, including activism, counter-speech, satire, and even evidence of war crimes.

“It’s hard to tell criticism of extremism from extremism itself when you are moderating thousands of pieces of content a day,” said EFF Director for International Freedom of Expression Jillian York. “Automated tools often make everything worse, since context is critical when making these decisions. Marginalized people speaking out on tricky political and human rights issues are too often the ones who are silenced.”

The examples cited in the report include a Facebook group advocating for the independence of the Chechen Republic of Iskeria that was mistakenly removed in its entirety for “terrorist activity or organized criminal activity.” Groups advocating for an independent Kurdistan are also often a target of overbroad content moderation, even though only one such group is considered a terrorist organization by governments. In another example of political content being wrongly censored, Facebook removed an image of a leader of Hezbollah with a rainbow Pride flag overlaid on it. The image was intended as satire, yet the mere fact that it included a face of a leader of Hezbollah led to its removal.

Social media is often used as a vital lifeline to publicize on-the-ground political conflict and social unrest. In Syria, human rights defenders use this tactic as many as 50 times a day, and there are now more hours of social media content about the Syrian conflict than there have been hours in the conflict itself. Yet YouTube used machine-learning-powered automated flagging to terminate thousands of Syrian YouTube channels that published videos of human rights violations, endangering the ability of those defenders to create a public record of those violations.

“In the frenzied rush to delete so-called extremist content, YouTube is erasing the history of the conflict in Syria almost as quickly as human rights defenders can hit ‘post,’” said Dia Kayyali, Program Manager for Tech and Accountability at WITNESS. “While ‘just taking it down’ might seem like a simple way to deal with extremist content online, we know current practices not only hurt freedom of expression and the right to access information, they are also harmful to real efforts to fight extremism.”

For the full report:

Contact: Jillian C. York, Director for International Freedom of Expression
Categories: Privacy

Research Shows Publishers Benefit Little From Tracking Ads

Mon, 06/03/2019 - 13:31

Advertising industry lobbyists have long argued that tracking users is necessary to power a publishing industry that makes its content available to users for “free”— despite a heavy privacy cost. Right now, a majority of publishers make money by working with advertisers that collect personal information about users as they move from site to site. Ad companies then combine that information with additional data bought from other sources, such as data brokers, to create detailed profiles they claim are necessary to tailor effective ads to an individual’s interests.

But new research, based on publisher data, has found that using this invasive tracking technique brings publishers just 4% more in revenue—or just $0.00008 more per ad—than ads based on context (for example, ads for sporting goods placed next to the sports scores).


This new report reinforces previous doubts about how much the tracking ecosystem actually benefits publishers. In 2016, Guardian staff bought ads in their own newspaper and found that as little as 30% of the amount spent reached the paper. Together, these studies show that while privacy-invasive behavioral advertising may enrich the adtech industry, it does little for publishing businesses scrambling to survive the digital transition. Publishers should rethink their involvement in practices that yield them minor gains while exposing them to compliance and reputational risks that could undermine their business.

Advertisers, Not Publishers, Make Money from Ads that Track

Researchers Alessandro Acquisti, Veronica Marotta, and Vibhanshu Abhishek got access to a dataset from a large, unidentified media company that operates multiple websites with different scales and audiences.

Most online ads today are sold in “real time bidding” (RTB) auctions that take place in the milliseconds between when you click on a link and when the content and an ad appear on your screen. RTB is currently the subject of complaints and regulatory scrutiny in Europe. Over 90% of the transactions in this new study’s dataset relate to ads sold in these types of auctions, in which ads are sold as “impressions” associated with data to inform and attract potential bidders. The dataset included the viewer’s IP address, the URL of the page on which the ad was displayed, the ad format, the price received, and whether cookie information was available. As cookies remain the most popular, but not the only, method of tracking users, this cookie data allowed the researchers to separate ads sold based on context from those that included information about user behavior.
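The separation step can be sketched with invented records: each transaction either carried a cookie (so behavioral data was available to bidders) or did not (contextual only). The field names, URLs, and prices below are hypothetical illustrations, not fields from the actual dataset:

```python
# Hypothetical sketch of separating RTB transactions by cookie availability,
# mirroring the researchers' method. Records and prices are invented.

records = [
    {"url": "news.example/sports",   "price": 0.00209, "has_cookie": True},
    {"url": "news.example/sports",   "price": 0.00200, "has_cookie": False},
    {"url": "news.example/politics", "price": 0.00207, "has_cookie": True},
]

behavioral = [r for r in records if r["has_cookie"]]       # trackable viewers
contextual = [r for r in records if not r["has_cookie"]]   # context-only ads

def avg_price(rs):
    """Average price per impression across a list of records."""
    return sum(r["price"] for r in rs) / len(rs)

# Publisher-side revenue uplift from cookie-enabled impressions:
uplift = avg_price(behavioral) / avg_price(contextual) - 1  # ~0.04, i.e. 4%
```

Comparing average prices across the two buckets, controlling for page and format, is what yields the paper’s headline 4% figure.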

Earlier experiments by the same researchers found that advertisers pay up to 500% more for targeted ads than contextual ads. This price increase is heralded by adtech as evidence of their worth. But if the benefit to publishers is just 4%, what happens to the remaining surplus? It is siphoned off by the archipelago of intermediary adtech companies that, alongside Facebook and Google, operate the ad targeting infrastructure.
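With deliberately round, hypothetical dollar amounts, the gap between the two headline figures can be made concrete: the advertiser’s extra spending dwarfs the publisher’s extra revenue, and the difference is the intermediaries’ cut.

```python
# Worked example using the two headline figures (up to 500% advertiser
# premium, 4% publisher uplift). All dollar amounts are hypothetical.

advertiser_contextual = 1.00                      # advertiser pays $1 for a contextual ad
advertiser_targeted = advertiser_contextual * 6   # up to 500% more when targeted

publisher_contextual = 0.50                       # assume the publisher kept $0.50 of that $1
publisher_targeted = publisher_contextual * 1.04  # publishers see only ~4% more

advertiser_extra = advertiser_targeted - advertiser_contextual  # $5.00 extra spent
publisher_extra = publisher_targeted - publisher_contextual     # $0.02 extra received

# The remainder is captured by the adtech intermediaries:
intermediary_share = advertiser_extra - publisher_extra         # $4.98
```

Under these assumptions, almost the entire targeting premium is absorbed before it reaches the publisher, which is the paper’s core point.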

Publishers Should Put Their Readers First

Publishers should consider whether any small benefits to them from intrusive tracking of their readers’ behavior are offset by direct costs, such as the cost of compliance with laws on data protection. Publishers also should consider indirect costs. The invasion of their readers’ privacy undermines reader trust, which should especially concern publications trying to raise voluntary contributions from their readers. Ironically, cross-site tracking can also demonetize their audience, by enabling advertisers to follow readers from a high-value website and target them on low-end sites where ads are cheap—a process sometimes referred to as "data leakage" or "audience arbitrage."

Given this new paper — and how it challenges the argument that tracking is necessary to support the publishing business — publishers need to reset their association with adtech, and put their relationship with their readers first.

Categories: Privacy

The Impact of "Extremist" Speech Regulations on Human Rights Content

Mon, 06/03/2019 - 11:59

Today, EFF is publishing a new white paper, "Caught in the Net: The Impact of 'Extremist' Speech Regulations on Human Rights Content." The paper is a joint effort of EFF, Syrian Archive, and Witness and was written in response to the Christchurch Call to Action. This paper analyzes the impact of platform rules and content moderation practices related to "extremist" speech on human rights defenders.

The key audiences for this paper are governments and companies, particularly those that endorsed the Christchurch Call. As we wrote last month, the Call contains several important ideas, but also advocates for measures that would undoubtedly have a negative impact on freedom of expression online. The paper details several concrete instances in which content moderation has resulted in the removal of content under anti-extremism provisions, including speech advocating for Chechen and Kurdish self-determination; satirical speech about a key Hezbollah figure; and documentation of the ongoing conflicts in Syria, Yemen, and Ukraine. We also hope that our allies will find these examples useful for their ongoing advocacy.

As more governments move to impose regulatory measures that would require the removal of extremist speech or privatize enforcement of existing laws, it's imperative that they consider the potential for collateral damage that such censorship imposes, and consider more holistic measures to combat extremism.

Categories: Privacy

AT&T Sues California to Prevent Oversight Over IP Based 911 Calls Using State Law AT&T Supported and Wants Renewed

Thu, 05/30/2019 - 19:08

The California legislature in 2011 passed a law to remove state and local authority over the broadband access market to “ensure a vibrant and competitive open Internet that allows California's technology businesses to continue to flourish and contribute to economic development throughout the state.” Sounds good, right?

But that never happened. Instead, the broadband access market in California is consolidating into a high-speed monopoly that, for many, is more expensive and slower than in many other markets. In fact, all the law does is protect broadband monopolies, and the major ISPs are working hard to get it renewed through Assembly Member Lorena Gonzalez’s A.B. 1366.

Take Action

Tell Your Legislators Not to Extend Broadband Monopolies

and Vote NO on A.B. 1366

Renewing the law rather than letting it expire carries significant consequences for California residents, because stripping the state of authority over broadband access affects everything that relies on broadband access. For example, even as the Assembly signed off on renewing a law that has provided no tangible benefits to residents, AT&T was actively using it to block state oversight of the broadband-based 911 calling system known as Next Generation 911.

If AT&T wins, a critical emergency service would be built by a private corporation with no government authority to regulate it.

In a lawsuit AT&T filed against the Office of Emergency Services, it claims broad immunity from state regulation by interpreting California law to mean that state and local governments have no power to oversee broadband services. In particular, the Office of Emergency Services requires all bidders with the state to build Next Generation 911 to submit to oversight by the California Public Utilities Commission (CPUC). AT&T's claim is that since the CPUC (or any state or local agency) has no regulatory authority over broadband, it can't be forced to show things like how much it intends to charge the state, or be subject to state audits ensuring it didn't overcharge taxpayers for building an emergency system. Should AT&T's argument carry the day, a critical emergency service—literally a matter of life and death—would be built by a private corporation with no government authority to regulate it.

AT&T Wants Taxpayer Money to Build a New 911 System with No State Oversight

The move to Next Generation 911 is part of a decade-long effort to transition 911 calls to a system in which all broadband-connected communication devices can make emergency calls. It will, for example, help emergency responders deal with call overload and have more accurate information about where callers are. This improves public safety by giving first responders better knowledge and information about emergencies, a goal laid out in bipartisan federal legislation introduced by Congresswoman Anna Eshoo (D-CA) and Congressman John Shimkus (R-IL) in 2008.

AT&T argues that the state law says no state agency can regulate their broadband products, and therefore the Office of Emergency Services cannot force companies seeking taxpayer money to submit themselves to state oversight. That means if a broadband-enabled 911 call doesn’t work, the state and local government effectively can’t penalize, audit, investigate, issue rules or do anything to remedy the problem, simply because it’s a broadband version of the product. That is plainly unreasonable, but also clearly the point of the law they pushed years ago.

The irony of the litigation is that AT&T is relying on a law set to expire in seven months, yet appears confident that the legislature will renew it. Given that these companies had few qualms about throttling firefighters in the middle of a state emergency—a practice the state legislature is working to ban this year with Assembly Member Levine’s A.B. 1699—we hope that California’s legislators avoid subjecting Next Generation 911 emergency calling to such risky litigation. Otherwise, companies that dislike the rules they’re asked to follow can sue to strike them down. Unlike other services, where some failure may be acceptable, emergency systems have to work 100% of the time, and they are fundamentally a government concern for public safety reasons.

It is possible that AT&T’s lawsuit will fail, and the company will not escape state regulation of broadband-based 911 services. In fact, the Attorney General of California's response is persuasive as to why AT&T's lawsuit is without merit, and the declarations filed by the state suggest that AT&T's attempt to avoid government oversight is truly suspect. But we will have to wait and see what the courts decide. The cleanest solution would be to not renew the law, which would terminate AT&T's ability to file these kinds of lawsuits and put an end to any future litigation of this nature.

Hopefully, this is a wake-up call to the legislature, illustrating how the laws it passes (or is about to renew) can be used by the industry that backs them. If companies are willing to claim immunity from oversight of 911 calls, there is nothing they wouldn’t claim it against—and EFF believes they would regularly use it to block efforts to promote competition. To stop this, Californians must contact their state Senators and ask them to vote NO on A.B. 1366.

Categories: Privacy

A Terrible Patent Bill is On the Way

Wed, 05/29/2019 - 15:45

Recently, we reported on the problems with a proposal from Senators Coons and Tillis to rewrite Section 101 of the Patent Act. Now, those senators have released a draft bill of changes they want to make. It’s not any better.

Section 101 prevents monopolies on basic research tools that nobody could have invented. That protects developers, start-ups, and makers of all kinds, especially in software-based fields. The proposal by Tillis and Coons would seriously weaken Section 101, leaving makers vulnerable to patent trolls and other abusers of the patent system.

The draft legislation does remove a few aspects of the earlier proposal, but it has the exact same effect: it will erase more than a century of Section 101 case law—including the recent decision in Alice v. CLS Bank—and take away courts’ power to restore it.

The new draft bill relabels the existing law (subsection (a) below) and tacks a new subsection (b) onto it. The new language appears in subsection (b) below:

 Section 101: (a) Whoever invents or discovers any useful process, machine, manufacture, or composition of matter, or any useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. (b) Eligibility under this section shall be determined only while considering the claimed invention as a whole, without discounting or disregarding any claim limitation.

Requiring eligibility to be determined only on “the claimed invention as a whole,” “without discounting or disregarding any claim limitation,” is another way of forbidding courts from scrutinizing the individual elements of a claim. The claim is the part of a patent that defines the “invention” that others are prevented from using, and courts already treat the “claim as a whole”—not any particular element by itself—as the invention.

But that doesn’t mean courts can’t consider the individual elements of a patent claim. In fact, it’s often critical that they do so. For example, the patent in Alice included a “data storage unit,” which the court considered “purely functional and generic.” The court rejected this element because it didn’t supply the “inventive concept” that Section 101 requires.

What Tillis and Coons are doing here is forbidding exactly that kind of element-by-element analysis. This change will abrogate Alice and make it inapplicable in any future case.

That’s no accident. Alice has been so effective that patent trolls and other companies dependent on patent-licensing, rather than products, are pushing for Congress to undo what the Supreme Court has done.

Let's not let that happen. Protect basic research and stop patent abusers from tilting the system in their favor. E-mail your representatives in Congress and tell them to oppose the Tillis-Coons patent bill.



Categories: Privacy

EFF Receives $300,000 Donation from Craig Newmark Philanthropies to Support Threat Lab

Wed, 05/29/2019 - 13:53

Great news for EFF’s Threat Lab: Craig Newmark Philanthropies has donated $300,000 to support its work to protect journalists, sources, and others against targeted malware attacks.

EFF identifies and tracks the rise of malware attacks, which primarily affect journalists and their sources globally. We have collaborated with groups like Citizen Lab and mobile security company Lookout to conduct these investigations, and the results of the research have helped the world understand this growing threat.

With the help of Craig Newmark Philanthropies, Threat Lab will continue to identify and track the complex web of groups who use malware against reporters and activists. Threat Lab will apply this information to educate the public and put pressure on the companies that build, sell, and license this spyware.

Threat Lab also creates and updates tools that EFF uses to educate and train journalists and others in digital security. Mobile devices, in particular, contain a wealth of data that could endanger reporters and their sources if certain security measures are not taken. This donation will help Threat Lab keep our Surveillance Self-Defense guide and our Security Education Companion accurate and up-to-date in the ever-changing digital landscape.

We are very grateful to Craig Newmark Philanthropies for this generous and important donation to EFF. Threat Lab’s work is key to protecting free and independent journalism, and we are proud to keep fighting for everyone’s digital security.

Categories: Privacy

Fines Aren’t Enough: Here’s How the FTC Can Make Facebook Better

Tue, 05/28/2019 - 17:53

The Federal Trade Commission is likely to announce that Facebook’s many violations of users’ privacy in recent years also violated its consent decree with the commission. In its financial filings, Facebook has indicated that it expects to be fined between $3 and $5 billion by the FTC. But punitive fines alone, no matter the size, are unlikely to change the overlapping privacy and competition harms at the center of Facebook’s business model. Whether or not it levies fines, the FTC should use its power to make Facebook better in meaningful ways. A new settlement with the company could compel it to change its behavior. We have some suggestions.

A $3 billion fine would be, by far, the largest privacy-related fine in the FTC’s history. The biggest to date was $22.5 million, levied against Google in 2012. But even after setting aside $3 billion to cover a potential fine, Facebook still managed to rake in $3.3 billion in profit during the first quarter of 2019. It’s rumored that Facebook will agree to create a “privacy committee” as part of this settlement. But the company needs to change its actions, not just its org chart. That’s why the settlement the FTC is negotiating now also needs to include limits on Facebook’s behavior.

Stop Third-Party Tracking

Facebook uses “Like” buttons, invisible Pixel conversion trackers, and ad code in mobile apps to track its users nearly any time they use the Internet—even when they’re off Facebook products. This program allows Facebook to build nauseatingly detailed profiles of users’—and non-users’—personal activity. Facebook’s unique ability to match third-party website activity to real-world identities also gives it a competitive advantage in both the social media and third-party ad markets. The FTC should order Facebook to stop linking data it collects outside of Facebook with user profiles inside the social network.
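As a deliberately simplified, hypothetical illustration of this kind of matching (all names and URLs below are invented, and this is not Facebook’s actual code), a tracker embedded across many sites needs only a stable cookie identifier and the embedding page’s URL to assemble a cross-site browsing profile:

```python
from collections import defaultdict

def build_profiles(pixel_hits):
    """Group page views by the tracker's cookie identifier.

    pixel_hits: iterable of (cookie_id, page_url) pairs. Each pair models one
    request a browser makes to a third-party "Like" button or invisible pixel,
    which automatically carries the tracker's cookie plus the page it was
    embedded on.
    """
    profiles = defaultdict(list)
    for cookie_id, page_url in pixel_hits:
        profiles[cookie_id].append(page_url)
    return dict(profiles)

# Hypothetical traffic: two browsers visiting unrelated sites that all embed
# the same tracker. If the tracker ever sees a login tied to a cookie, the
# entire history attaches to a real-world identity.
hits = [
    ("cookie-abc", "https://example-health-site.test/condition"),
    ("cookie-abc", "https://example-news-site.test/article"),
    ("cookie-xyz", "https://example-shop.test/cart"),
]
profiles = build_profiles(hits)
```

The point of the sketch is that no single site hands over a full history; the profile emerges purely from the tracker seeing the same cookie everywhere.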

Don’t Merge WhatsApp, Instagram, and Facebook Data

Facebook has announced plans to build a unified chat platform so that users can send messages between WhatsApp, Messenger, and Instagram accounts seamlessly. Letting users of different services talk to each other is reasonable, and Facebook’s commitment to end-to-end encryption for the unified service is great (if it’s for real). But in order to link the services together, Facebook will likely need to merge account data from its disparate properties. This may help Facebook enrich its user profiles for ad targeting and make it harder for users to fully extricate their data from the Facebook empire should they decide to leave. Furthermore, there’s a risk that people with one set of expectations for a service like Instagram, which allows pseudonyms and does not require a phone number, will be blindsided when Facebook links their accounts to real identities. This could expose sensitive information about vulnerable people to friends, family, ex-partners, or law enforcement. In short, there are dozens of ways the great messenger union could go wrong.

Facebook promises that messaging “interoperability” will be opt-in. But corporations are fickle, and Facebook and other tech giants have repeatedly walked back privacy commitments they’ve made in the past. The FTC should make sure Facebook stays true to its word by ordering it not to merge user data from its different properties without express opt-in consent. Furthermore, if users do decide to opt in to merging their Instagram or WhatsApp accounts with Facebook data, the FTC should make sure they retain the right to opt back out.

Stop Data Broker-Powered Ad Targeting

Last March, Facebook shut down its “Partner Categories” program, in which it purchased data from data brokers like Acxiom and Oracle in order to boost its own ad-targeting system. But over a year later, advertisers are still using data broker-provided information to target users on Facebook, and both Facebook and data brokers are still raking in profit. That’s because Facebook allows data brokers to upload “custom audience data files”—lists of contact information, drawn from the brokers’ vast tranches of personal data—where they can charge advertisers to access those lists. As a result, though the interface has changed, data broker-powered targeting on Facebook is alive and well.
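Custom audience uploads generally work by matching hashed contact information against the platform’s user records. Here is a minimal sketch of the normalize-and-hash step (illustrative only; the exact normalization rules and upload pipeline vary, and this is not Facebook’s code):

```python
import hashlib

def normalize_and_hash(email):
    """Prepare one contact record for a custom-audience-style upload.

    Contact data is typically normalized (lowercased, whitespace stripped)
    so the same address always produces the same digest, then hashed with
    SHA-256 before being sent to the platform for matching.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A broker's "custom audience data file" reduces to a list of digests.
audience_file = [normalize_and_hash(e) for e in
                 ["  Alice@Example.com ", "bob@example.com"]]
```

Hashing obscures the raw addresses in transit, but it does nothing to stop the underlying business: a broker who holds the contact list can still sell access to the matched audience.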

Data brokers are some of the shadiest actors in the digital marketplace. They make money by buying and selling detailed information about billions of people. And most of the people they profile don’t know they exist. The FTC should order Facebook to stop allowing data brokers to upload and share custom audiences with advertisers, and to explicitly disallow advertisers from using data broker-provided information on Facebook. This will make Facebook a safer, less creepy place for users, and it will put a serious dent in the dirty business of buying and selling private information.

A Good Start, But Not the End

We can’t fix all of the problems with Facebook in one fell swoop.  Facebook’s content moderation policies need serious work. The platform should be more interoperable and more open. We need to remove barriers to competition so that more privacy-respecting social networks can emerge. And users around the world deserve to have baseline privacy protections enshrined in law. But the FTC has a rare opportunity to tell one of the most powerful companies in the world how to make its business more privacy-protective and less exploitative for everyone. These changes would be a serious step in the right direction.

Categories: Privacy

EFF Asks San Bernardino Court to Review Cell-Site Simulator and Digital Search Warrants That Are Likely Improperly Sealed

Tue, 05/28/2019 - 11:26

Since the California legislature passed a 2015 law requiring cops to get a search warrant before probing our devices, rifling through our online accounts, or tracking our phones, EFF has been on a quest to examine court filings to determine whether law enforcement agencies are following the new rules. We have been especially concerned that cops and the courts have been disregarding the transparency measures baked into the California Electronic Communications Privacy Act (CalECPA).

As it turns out, our suspicions were well warranted. A lawsuit we filed last year against the San Bernardino County Sheriff’s Office has turned up evidence that potentially hundreds of digital search warrants have been improperly and indefinitely sealed, blocking the public’s right to inspect court records.

EFF, represented by the Law Office of Michael T. Risher, has filed a formal request with the Presiding Judge of the San Bernardino County Superior Court to review and unseal 22 search warrants that appear to be sealed in violation of California’s penal code. We are also asking that the court “take whatever steps are necessary to ensure that similar files—both in the past and in the future—are open to the public as required by law.”

Read EFF’s letter to San Bernardino County Superior Court Judge John P. Vander Feer.

When CalECPA was passed, it was hailed as the “Nation’s Best Digital Privacy Law” by outlets such as Wired, because it prevents the government from forcing companies to hand over electronic communications, files, or metadata without first obtaining a warrant. It similarly requires the government to obtain a warrant before searching our devices or tracking our location through our devices. This includes the use of cell-site simulators, a surveillance technology that masquerades as a fake cell phone tower to connect to a target’s phone. The law also included several accountability measures, such as requiring agencies to file public disclosures with the California Department of Justice, which EFF uses to identify search warrants across the state that deserve greater scrutiny.

Last year, EFF picked out six suspicious warrants filed by the San Bernardino Sheriff for a deeper dive, since they all referred to the use of a “cell-site stimulator” (a misspelling guaranteed to make privacy advocates snicker). Those were the only warrants to directly make reference to the technology, even though the sheriff had separately disclosed to EFF that it had used a cell-site simulator 231 times in 2017 alone. The sheriff refused to turn over these warrants, and so EFF took the agency to court. We subsequently filed requests for 18 other CalECPA warrants, including searches of devices and accounts and phone surveillance techniques known as a pen register or a trap and trace. Again, San Bernardino County officials resisted handing over the records.

In many cases, San Bernardino County claimed the records could not be released because they had been indefinitely sealed by the court. San Bernardino provided copies of only two search warrant applications, both of which include sealing requests that were rejected by a judge. Based on these documents, we advised the court that it appears “the Sheriff’s Department requests indefinite sealing orders as part of every application for a warrant or court order under these statutes.”

The problem is that this isn’t how the system is supposed to work.

In 2016, the legislature changed state law to require that when an order for a pen register or trap and trace expires, so does any sealing order. Similarly, CalECPA requires that after a search warrant has been executed and “returned” to the court, the records can only be held secret for 10 days. After that the search warrants must be open to the public.

The passage of CalECPA represented a fundamental breakthrough for civil liberties in the digital age. But a law is only as good as its enforcement. If the San Bernardino courts and sheriff keep these records secret, then not only does that violate the will of the people of California, but it blocks the ability of the public to ensure that other elements of the law are also being obeyed.

San Bernardino may just be the tip of the iceberg. We hope courts in other jurisdictions take notice and also examine their CalECPA warrants to ensure the law is being followed.

Related Cases: California's Electronic Communications Privacy Act (CalECPA) - SB 178
Categories: Privacy

If Regulators Won’t Stop The Sale of Cell Phone Users’ Location Data, Consumers Must

Tue, 05/28/2019 - 08:07

A Motherboard investigation revealed in January how any cellphone user’s real-time location could be obtained for $300. The pervasiveness of the practice, coupled with the extreme invasion of people’s privacy, is alarming.

The reporting showed there is a vibrant market for location data generated by everyone’s cell phones—information that can be incredibly detailed and provide a window into people’s most sensitive and private activities. The investigation also laid bare that cell phone carriers AT&T, Sprint, and T-Mobile, and the many third parties with access to the companies’ location data, have little interest or incentive to stop.

This market in your personal information violates federal law and Federal Communications Commission (FCC) rules that protect people’s location privacy. It also violates FCC rules governing extremely sensitive location information derived in part from GPS data, which may be disclosed only when emergency responders need to find people during an emergency.

We expected the FCC to take immediate action to shut down the unlawful location data market and to punish the bad actors.

But many months later, the FCC has not taken any public action. It’s a bad sign when minority FCC commissioners have to take to the pages of the New York Times to call for an end to the practices, or must send their own letters to carriers to get basic information about the problem. Although some members of Congress have investigated and demanded an end to the practice, no solution is in sight.

Earlier this year, the major cell phone providers promised that they had ended, or would end, these practices. Those promises ring hollow: they promised to end the sale of the same location data in 2018.

In light of this inaction, consumers must step up to make sure that their location data is no longer so easily sold and that laws are enforced to prohibit it from happening again.

Although much of the reporting has focused on bounty hunters’ ability to obtain anyone’s location information, documents created by the companies that accessed and sold the data show it was used for many other purposes. These include marketing materials pitching real-time location data to car dealerships so they can track potential buyers, and to landlords so they can locate potential tenants. Even more troubling, stalkers and bounty hunters appear to have been able to impersonate law enforcement officials and use the system to find people, including victims of domestic violence.

EFF wants to stop this illegal violation of the location privacy of millions of phone users. So please tell us your stories.

If you believe these companies unlawfully shared your cell phone location information, please let us know. In particular, it would be helpful if you could tell us:

  • Who obtained your cell phone location information?
  • How did they get it?
  • How did they use it?
  • When and where did this happen?
  • What cell phone provider were you using?
  • How do you know this?
  • Do you have any documents or other evidence that shows this?

Please write to us at

Categories: Privacy

The Government’s Indictment of Julian Assange Poses a Clear and Present Danger to Journalism, the Freedom of the Press, and Freedom of Speech

Fri, 05/24/2019 - 14:33

The century-old tradition that the Espionage Act not be used against journalistic activities has now been broken. Seventeen new charges were filed yesterday against Wikileaks founder Julian Assange. These new charges make clear that he is being prosecuted for basic journalistic tasks, including being openly available to receive leaked information, expressing interest in publishing information regarding certain otherwise secret operations of government, and then disseminating newsworthy information to the public. The government has now dropped the charade that this prosecution is only about hacking or helping in hacking. Regardless of whether Assange himself is labeled a “journalist,” the indictment targets routine journalistic practices.

But the indictment is also a challenge to fundamental principles of freedom of speech. As the Supreme Court has explained, every person has the right to disseminate truthful information pertaining to matters of public interest, even if that information was obtained by someone else illegally. The indictment purports to evade this protection by repeatedly alleging that Assange simply “encouraged” his sources to provide information to him. This places a fundamental free speech right on uncertain and ambiguous footing.

A Threat To The Free Press

Make no mistake, this is not just about Assange or Wikileaks—this is a threat to all journalism, and the public interest. The press stands in place of the public in holding the government accountable, and the Assange charges threaten that critical role. The charges threaten reporters who communicate with and knowingly obtain information of public interest from sources and whistleblowers, or publish that information, by sending a clear signal that they can be charged with spying simply for doing their jobs. And they threaten everyone seeking to educate the public about the operation of government and expose government wrongdoing, whether or not they are professional journalists.

Assistant Attorney General John Demers, head of the Department of Justice’s National Security Division, told reporters after the indictment that the department “takes seriously the role of journalists in our democracy and we thank you for it,” and that it’s not the government’s policy to target them for reporting. But it’s difficult to separate the Assange indictment from President Trump’s repeated attacks on the press, including his declarations on Twitter, at White House briefings, and in interviews that the press is “the enemy of the people,” “dishonest,” “out of control,” and “fake news.” Demers’ statement was very narrow—disavowing the “targeting” of journalists, but not the prosecution of them as part of targeting their sources. And contrary to the DOJ’s public statements, the actual text of the Assange Indictment sets a dangerous precedent; by the same reasoning it asserts here, the administration could turn its fervent anti-press sentiments into charges against any other media organization it disfavors for engaging in routine journalistic practices.

Most dangerously, the indictment contends that anyone who “counsels, commands, induces” (under 18 USC § 2, for aiding and abetting) a source to obtain or attempt to obtain classified information violates the Espionage Act, 18 USC § 793(b). Under the language of the statute, this includes literally “anything connected with the national defense,” so long as there is an “intent or reason to believe that the information is to be used to the injury of the United States, or to the advantage of any foreign nation.” The indictment relies heavily and repeatedly on allegations that Assange “encouraged” his sources to leak documents to Wikileaks, even though he knew that the documents contained national security information.

But encouraging sources and knowingly receiving documents containing classified information are standard journalistic practices, especially among national security reporters. Neither law nor custom has ever required a journalist to be a purely passive, unexpected, or unknowing recipient of a leaked document. And the U.S. government has regularly maintained, in EFF’s own cases and elsewhere, that virtually any release of classified information injures the United States and advantages foreign nations.

The DOJ indictment thus raises questions about what specific acts of “encouragement” the department believes cross the bright line between First Amendment protected newsgathering and crime. If a journalist, like then-candidate Trump, had said: "Russia, if you’re listening, I hope you’re able to find the [classified] emails that are missing. I think you will probably be rewarded mightily by our press," would that be a chargeable crime?

The DOJ Does Not Decide What Is And Isn’t Journalism

Demers said Assange was “no journalist,” perhaps to justify the DOJ’s decision to charge Assange and show that it is not targeting the press. But it is not the DOJ’s role to determine who is or is not a “journalist,” and courts have consistently found that what makes something journalism is the function of the work, not the character of the person. As the Second Circuit once wrote in a case about the reporters’ privilege, the question is whether they intended to “use material—sought, gathered, or received—to disseminate information to the public.”  No government label or approval is necessary, nor is any job title or formal affiliation. Rather than justifying the indictment, Demers’ non-sequitur appears aimed at distracting from the reality of it.

Moreover, Demers’ statement is as dangerous as it is irrelevant. None of the elements of the 18 statutory charges (Assange is also facing a charge under the Computer Fraud and Abuse Act) require a determination that Assange is not a journalist. Instead, the charges broadly describe journalism—seeking, gathering, and receiving information for dissemination to the public, and then publishing that information—as unlawful espionage when it involves classified information.

Of course news organizations routinely publish classified information. This is not considered unusual, nor (previously) illegal. When the government went to the Supreme Court to stop the publication of the classified Pentagon Papers, the Supreme Court refused (though it did not reach the question of whether the Espionage Act could constitutionally be charged against the publishers). Justice Hugo Black, concurring in the judgment, explained why:

In the First Amendment, the Founding Fathers gave the free press the protection it must have to fulfill its essential role in our democracy. The press was to serve the governed, not the governors. The Government's power to censor the press was abolished so that the press would remain forever free to censure the Government. The press was protected so that it could bare the secrets of government and inform the people. Only a free and unrestrained press can effectively expose deception in government. And paramount among the responsibilities of a free press is the duty to prevent any part of the government from deceiving the people and sending them off to distant lands to die of foreign fevers and foreign shot and shell.

Despite this precedent and American tradition, three of the DOJ charges against Assange focus solely on the purported crime of publication. These three charges are for Wikileaks’ publication of the State Department cables and the Significant Activity Reports (war logs) for Iraq and Afghanistan, documents that were also published in Der Spiegel, The Guardian, The New York Times, Al Jazeera, and Le Monde, and republished by many other news media.

For these charges, the government included allegations that Assange failed to properly redact, and thereby endangered sources. This may be another attempt to make a distinction between Wikileaks and other publishers, and perhaps to tarnish Assange along the way. Yet this is not a distinction that makes a difference, as sometimes the media may need to provide unredacted data. For example, in 2017 the New York Times published the name of a CIA official who was behind the CIA program to use drones to kill high-ranking militants, explaining “that the American public has a right to know who is making life-or-death decisions in its name.”

While one can certainly criticize the press’ publication of sensitive data, including identities of sources or covert officials, especially if that leads to harm, this does not mean the government must have the power to decide what can be published, or to criminalize publication that does not first get the approval of a government censor. The Supreme Court has justly held the government to a very high standard for abridging the ability of the press to publish, limited to exceptional circumstances like “publication of the sailing dates of transports or the number and location of troops” during wartime.

A Threat to Free Speech

In a broader context, the indictment challenges a fundamental principle of free speech: that a person has a strong First Amendment right to disseminate truthful information pertaining to matters of public interest, including in situations in which the person’s source obtained the information illegally. In Bartnicki v. Vopper, the Supreme Court affirmed this, explaining: “it would be quite remarkable to hold that speech by a law-abiding possessor of information can be suppressed in order to deter conduct by a non-law-abiding third party. ... [A] stranger's illegal conduct does not suffice to remove the First Amendment shield from speech about a matter of public concern.”

While Bartnicki involved an unknown source who anonymously left an illegal recording with Bartnicki, later courts have acknowledged that the rule applies, and perhaps even more strongly, to recipients who knowingly and willfully received material from sources, even when they know the source obtained it illegally. In one such case, the court rejected a claim that the willing acceptance of such material could sustain a charge of conspiracy between the publisher and her source.

Regardless of what one thinks of Assange’s personal behavior, the indictment itself will inevitably have a chilling effect on critical national security journalism, and the dissemination in the public interest of available information that the government would prefer to hide. There can be no doubt now that the Assange indictment is an attack on the freedoms of speech and the press, and it must not stand.

Related Cases: Bank Julius Baer & Co v. Wikileaks
Categories: Privacy

Rep. Thompson Works to Secure Our Elections

Thu, 05/23/2019 - 20:12

Foreign adversaries and domestic dirty tricksters can secretly hack our nation’s electronic voting systems. That’s why information security experts agree we must go back to basics: paper ballots. We also need “risk-limiting audits,” meaning mandatory post-election review of a sample of the paper ballots, to ensure the election-night “official” results are correct. EFF has long sought these reforms.
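To give a rough sense of the statistics behind a risk-limiting audit, here is a simplified sketch loosely modeled on the BRAVO ballot-polling method for a two-candidate contest. The function, its parameters, and the stopping rule shown are illustrative assumptions, not the specific procedure any bill mandates; real audits also handle multiple contests, invalid ballots, and escalation to a full hand count.

```python
def ballot_polling_audit(sample, reported_winner_share, risk_limit=0.05):
    """Sequentially test whether randomly sampled paper ballots support
    the reported outcome.

    sample: iterable of booleans, True if the sampled paper ballot shows a
    vote for the reported winner. reported_winner_share: the winner's
    reported fraction of the two-candidate vote (must exceed 0.5).
    Returns True if the sample confirms the outcome at the risk limit,
    False if the audit cannot stop (and should escalate).
    """
    assert reported_winner_share > 0.5
    t = 1.0                      # sequential likelihood-ratio statistic
    threshold = 1.0 / risk_limit # stop once the evidence is this strong
    for for_winner in sample:
        if for_winner:
            t *= reported_winner_share / 0.5
        else:
            t *= (1.0 - reported_winner_share) / 0.5
        if t >= threshold:
            return True   # outcome confirmed at the stated risk limit
    return False          # inconclusive: keep sampling or hand count
```

The appeal of this approach is that the closer the reported margin, the more ballots the audit demands before it will stop, so a wrong reported outcome is unlikely to survive the audit.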

A new federal bill is a step in the right direction: H.R. 2660, the Election Security Act. It was introduced on May 10 by Rep. Bennie Thompson, Chair of the House Homeland Security Committee; Rep. Zoe Lofgren, Chair of the House Administration Committee; and Rep. John Sarbanes, Chair of the Democracy Reform Task Force.

This bill would help secure our democracy from digital attack in the following ways:

  • It requires paper ballots in all federal elections. In any post-election dispute, these paper ballots are the official expression of voter intent. Each voter may choose whether to mark their paper ballot by hand, or by using a device that marks paper ballots in a manner that is easy for the voter to read.
  • It authorizes expenditure of $1 billion in the coming year to pay for the transition to paper ballot voting systems, and an additional $700 million over the next six years.
  • It authorizes expenditure of $20 million to support risk-limiting audits.
  • It authorizes expenditure of $180 million over nine years to improve the security of election infrastructure.
  • It authorizes $5 million for research to make voting systems more accessible for people with disabilities.
  • It requires the creation of cybersecurity standards for voting systems.
  • It creates a “bug bounty” program for voting systems.

This is a good start. The bill would be even stronger if it adopted key parts of another election security bill, introduced last week by Sen. Wyden with 14 Senate co-sponsors. As EFF recently wrote, that bill would not only require paper ballots and help pay for paper ballots and tabulation machines, it would also require risk-limiting audits, ban the connection of voting machines to the internet, and empower ordinary voters with a private right of action to enforce new election security rules.

Congress must act now to secure our voting systems, before the next federal elections. We thank both Rep. Thompson and Sen. Wyden for leading the way.

Categories: Privacy