The retail giant was also ordered to pay more than $30 million last year after allegedly surveilling customers with its tech products.
Months after Amazon was fined more than $30 million for allegedly spying on customers in their homes, a French data watchdog on Monday announced it had ordered the retail giant to pay another $35 million for what it called "excessive" tracking of warehouse employees' activity.
France's National Commission on Informatics and Liberty (CNIL) informed Amazon France Logistique, which runs the U.S. company's warehouses in the country, of the fine late last month after investigating scanning devices used by employees.
Several features of the tools violate the European Union's General Data Protection Regulation (GDPR), according to the group.
The technology-focused news outlet The Register reported that all employees at Amazon's French warehouses are given scanners that document their tasks, including when they pick up an item or place it in a delivery box.
CNIL found that the "inactivity indicators" on the scanners were "too precise" and could lead "to the employee potentially having to justify each break or interruption."
Another feature that measured the speed at which the scanner was used, and one that stored data history for 31 days, were also deemed "excessive" by the watchdog.
CNIL's investigation found that before April 2020, temporary employees at the warehouses weren't informed that their data would be collected by the scanning devices and that no workers were sufficiently told that the facilities were equipped with video surveillance systems.
In violation of Article 32 of the GDPR, said the watchdog, "access to the video surveillance software was not sufficiently secure, since the password was not sufficiently robust and the access account was shared between several users."
The group said it determined the amount of Amazon's penalty by taking into account "the fact that the processing of employees' data by means of scanners differed from the methods of monitoring of traditional activity because of the scale at which they were implemented, both in terms of their completeness and permanence, and led to a very tight and detailed monitoring of the work of employees."
The EUobserver, which reports on democracy within the bloc, noted that the fine was announced on the same day that Amazon refused to participate in a European Parliament hearing on working conditions in its warehouses.
The fine comes less than a year after the U.S. Federal Trade Commission (FTC) determined that an Amazon employee had used its Ring security cameras to spy on female customers for several months, prompting the company to agree to a settlement worth $5.8 million.
Amazon also agreed to a $25 million settlement after being accused of failing to delete audio recordings from Alexa speakers when parents requested they be erased.
The company said Tuesday that it "might appeal" the CNIL's decision and that the watchdog's conclusions about its surveillance practices were "factually incorrect."
In the U.S., progressive law professor Zephyr Teachout called the fine "excellent" and expressed hope that policymakers will soon pass "clear American laws that recognize just how harmful extreme monitoring is."
"Contract law is not the key," said Teachout. "Basic dignity is."
Amazon's focus on closely monitoring employees' activities has led to numerous injuries among workers, according to a survey by the University of Illinois Chicago's Center for Urban Economic Development last October. The center found that out of 1,484 employees, 70% had been forced to take unpaid time off due to sprains, strains, and other injuries sustained while rushing to keep up with Amazon's demanding quotas.
"We see clear evidence in our data," said researchers, "that work intensity and monitoring contribute to negative health outcomes."
Artificial intelligence could supercharge threats to civil liberties, civil rights, and privacy.
Your friends aren’t the only ones seeing your tweets on social media. The FBI and the Department of Homeland Security (DHS), as well as police departments around the country, are reviewing and analyzing people’s online activity. These programs are only likely to grow as generative artificial intelligence (AI) promises to remake our online world with better, faster, and more accurate analyses of data, as well as the ability to generate humanlike text, video, and audio.
While social media can help law enforcement investigate crimes, many of these monitoring efforts reach far more broadly even before bringing AI into the mix. Programs aimed at “situational awareness,” like those run by many parts of DHS or police departments preparing for public events, tend to have few safeguards. They often veer into monitoring social and political movements, particularly those involving minority communities. For instance, DHS’s National Operations Center issued multiple bulletins on the 2020 racial justice protests. The Boston Police Department tracked posts by Black Lives Matter protesters and labeled online speech related to Muslim religious and cultural practices as “extremist” without any evidence of violence or terrorism. Nor does law enforcement limit itself to scanning public posts. The Memphis police, for example, created a fake Facebook profile to befriend and gather information from Black Lives Matter activists.
Internal government assessments cast serious doubt on the usefulness of broad social media monitoring. In 2021, after extensive reports of the department’s overreach in monitoring racial justice protesters, the DHS General Counsel’s office reviewed the activities of agents collecting social media and other open-source information to try to identify emerging threats. It found that agents gathered material on “a broad range of general threats,” ultimately yielding “information of limited value.” The Biden administration ordered a review of the Trump-era policy requiring nearly all visa applicants to submit their social media handles to the State Department, affecting some 15 million people annually, to help in immigration vetting — a practice that the Brennan Center has sought to challenge. While the review’s results have not been made public, intelligence officials charged with conducting it concluded that collecting social media handles added “no value” to the screening process. This is consistent with earlier findings. According to a 2016 brief prepared by the Department of Homeland Security for the incoming administration, in similar programs to vet refugees, account information “did not yield clear, articulable links to national security concerns, even for those applicants who were found to pose a potential national security threat based on other security screening results.” The following year, the DHS Inspector General released an audit of these programs, finding that the department had not measured their effectiveness and that they were an insufficient basis for future initiatives. Despite failing to prove that monitoring programs actually bolster national security, the government continues to collect, use, and retain social media data.
The pervasiveness — and problems — of social media surveillance are almost certain to be exacerbated by new AI tools, including generative models, which agencies are racing to adopt.
Generative AI will enable law enforcement to more easily use covert accounts. In the physical world, undercover informants have long raised issues, especially when they have been used to trawl communities rather than target specific criminal activities. Online undercover accounts are far easier and cheaper to create and can be used to trick people into interacting and inadvertently sharing personal information such as the names of their friends and associations. New AI tools could generate fake accounts with a sufficient range of interests and connections to look real and autonomously interact with people online, saving officer time and effort. This will supercharge the problem of effortless surveillance, which the Supreme Court has recognized may “alter the relationship between citizen and government in a way that is inimical to democratic society.” These concerns are compounded by the fact that few police departments impose restrictions on undercover account use, with many allowing officers to monitor people online without a clear rationale, documentation, or supervision. The same is true for federal agencies such as DHS.
Currently, despite the hype generated by their purveyors, social media surveillance tools seem to operate on a relatively rudimentary basis. While the companies that sell them tend to be secretive about how they work, the Brennan Center’s research suggests serious shortcomings. Some popular tools do not use scientific methods for identifying relevant datasets, much less test them for bias. They often rely on keywords and phrases to identify potential threats, an approach that misses the context necessary to understand whether something is in fact a threat and not, for example, someone discussing a video game. It is possible that large language models, such as ChatGPT, will advance this capability — or at least be perceived and sold as doing so — and incentivize greater use of these tools.
At the same time, any such improvements may be offset by the fact that AI is widely expected to further pollute an already unreliable information environment, exacerbating problems of provenance and reliability. Social media is already suffused with inaccurate and misleading information. According to a 2018 MIT study, false political news is 70 percent more likely to be retweeted than truthful content on X (formerly Twitter). Bots and fake accounts — which can already mimic human behavior — are also a challenge; during the COVID-19 pandemic, bots were found to proliferate misinformation about the disease, and they could just as easily spread fake information generated by AI, deceiving platform users. Generative AI makes creating false news and fake identities easier, adding to an already polluted online information environment. Moreover, AI has a tendency to “hallucinate,” or make up information — a seemingly unfixable problem that is ubiquitous among generative AI systems.
Generative AI also exacerbates longstanding problems. The promise of better analysis does nothing to ease First Amendment issues raised by social media monitoring. Bias in algorithmic tools has long been a concern, ranging from predictive policing programs that treat Black people as suspect to content moderation practices disfavoring Muslim speech. For example, Instagram users recently found that the label “terrorist” was added to their English bios if their Arabic bios included the word “Palestinian,” the Palestinian flag emoji, and the common Arabic phrase “praise be to god.”
The need to address these risks is front and center in President Biden’s AI executive order and a draft memorandum from the Office of Management and Budget that sets out standards for federal agency use of AI. The OMB memo identifies social media monitoring as a use of AI that impacts individuals’ rights, and thus requires agencies using this technology to follow critical rules for transparency, testing efficacy, and mitigating bias and other risks. Unfortunately, these sensible rules do not apply to national security and intelligence uses and do not affect police departments. But they should.
"Journalists must be able to freely report on government actions without fear the government will compel them to reveal their sources," said one campaigner.
Privacy and First Amendment advocates on Wednesday urged the U.S. House to pass legislation that would protect the United States' bedrock freedoms and a core tenet of journalism: the right of reporters to guard the identities of their sources.
The House Judiciary Committee advanced the Protect Reporters from Exploitative State Spying (PRESS) Act with bipartisan support, despite claims in recent months by Republican lawmakers such as Sen. Tom Cotton (R-Ark.) that the legislation would "immunize journalists and leakers alike from scrutiny and consequences for their actions."
The bill has been recognized by press freedom advocates as the most important piece of legislation in modern times regarding journalists' rights, as it would codify state protections at the federal level.
Forty-nine states already protect reporters from being compelled to reveal their confidential sources. The PRESS Act would guard against federal abuse of subpoena power and ensure all journalists have those protections regardless of where in the country they live and work.
"Journalists must be able to freely report on government actions without fear the government will compel them to reveal their sources. We commend the House Judiciary Committee for its bipartisan support of the PRESS Act," said Daniel Schuman, policy director at Demand Progress. "The Senate must act now to advance this important legislation."
The House previously advanced the bill with a voice vote last September, garnering support from all the Republicans in the chamber. Schuman pointed out late last year, as Cotton blocked the bill's passage in the Senate, that the lower chamber had included a number of exceptions in the bill to satisfy the House GOP.
The bill includes exceptions for cases involving information necessary to identify people accused of terrorist acts, the risk of imminent bodily harm or death, crimes unrelated to journalism, and slander, libel, and defamation.
"The PRESS Act creates critical protections for the fearless journalists who act as government watchdogs and keep all of us informed," said Jenna Leventoff, senior policy counsel at the ACLU, which has long advocated for the bill. "While the majority of states already have shield laws in place that protect journalists from compelled disclosure of their sources, the PRESS Act provides uniform protections to journalists all across the country. We thank the House Judiciary Committee for protecting our constitutional right to a free press and urge the full House to swiftly pass this bipartisan legislation."
Although the U.S. Department of Justice adopted a policy in 2021 restricting subpoenas and seizures of journalists' technological devices and data, Gabriela Schneider noted at First Branch Forecast, Demand Progress Education Fund's newsletter, that the measure "could just as easily be suspended, ignored, or secretly altered."
"Importantly," Schneider wrote, "the PRESS Act would codify into law this prohibition, making it real and permanent."