The Financial Times reports that the U.K. is once again demanding that Apple create a backdoor into its encrypted backup services. The only change since the last demand is that the order is now allegedly limited to British users. That doesn’t make it any better.
The demand uses a power called a “Technical Capability Notice” (TCN) in the U.K.’s Investigatory Powers Act. At the time of its signing, we noted that this law would likely be used to demand that Apple spy on its users.
After the U.K. government first issued the TCN in January, Apple was forced to either create a backdoor or block its Advanced Data Protection feature—which turns on end-to-end encryption for iCloud—for all U.K. users. The company decided to remove the feature in the U.K. instead of creating the backdoor.
The initial order from January targeted the data of all Apple users. In August, the U.S. claimed the U.K. had withdrawn the demand, but Apple did not re-enable Advanced Data Protection. The new order provides insight into why: the U.K. was just rewriting it to apply only to British users.
This is still an unsettling overreach that makes U.K. users less safe and less free. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud. It sets a dangerous precedent to demand similar data from other companies, and provides a runway for other authoritarian governments to issue comparable orders. The news of continued server-side access to users' data comes just days after the U.K. government announced an intrusive mandatory digital ID scheme, framed as a measure against illegal migration.
A tribunal hearing was initially set to take place in January 2026, though it’s currently unclear if that will proceed or if the new order changes the legal process. Apple must continue to refuse these types of backdoors. Breaking end-to-end encryption for one country breaks it for everyone. These repeated attempts to weaken encryption violate fundamental human rights and destroy our right to private spaces.
It's spooky season—but while jump scares may get your heart racing, catching up on digital rights news shouldn't! Our EFFector newsletter has got you covered with easy, bite-sized updates to keep you up-to-date.
In this issue, we spotlight new ALPR-enhanced police drones and how local communities can push back; unpack the ongoing TikTok “ban,” which we’ve consistently said violates the First Amendment; and celebrate a privacy win—abandoning a phone doesn't mean you've also abandoned your privacy rights.
Prefer to listen in? Check out our audio companion, where we interview EFF Staff Attorney Lisa Femia who explains the findings from our investigation into abortion censorship on social media. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.13 - ❌ HOW META IS CENSORING ABORTION
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Federal employees play a key role in safeguarding the civil liberties of millions of Americans. Our rights to privacy and free expression can only survive when we stand together to push back against overreach and ensure that technology serves all people—not just the powerful.
That’s why EFF jumped to action earlier this year, when the U.S. Office of Personnel Management (OPM) handed over sensitive employee data—Social Security numbers, benefits data, work histories, and more—to Elon Musk’s Department of Government Efficiency (DOGE). This was a blatant violation of the Privacy Act of 1974, and it put federal workers directly at risk.
We didn’t let it stand. Alongside federal employee unions, EFF sued OPM and DOGE in February. In June, we secured a victory when a judge ruled we were entitled to a preliminary injunction and ordered OPM to provide an accounting of DOGE’s access to employee records. Your support makes this possible.
Now the fight continues—and your support matters more than ever. The Office of Personnel Management is planting the seeds to undermine and potentially remove the Combined Federal Campaign (CFC), the main program federal employees and retirees have long used to support charities—including EFF. For now, you can still give to EFF through the CFC this year (use our ID: 10437) and we’d appreciate your support! But with the program’s uncertain future, direct support is the best way to keep our work going strong for years to come.
SUPPORT EFF'S WORK DIRECTLY, BECOME A MEMBER!
When you donate directly, you join a movement of lawyers, activists, and technologists who defend privacy, call out censorship, and push back against abuses of power—everywhere from the courts to Congress and to the streets. As a member, you’ll also receive insider updates, invitations to exclusive events, and conversation-starting EFF gear.
Plus, you can sustain our mission long-term with a monthly or annual donation!
Stand with EFF. Protect privacy. Defend free expression. Support our work today.
Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process of protecting your privacy becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post every weekday with new tips showing various ways you can opt out of the ways tech giants surveil you.
Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.
All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool. Use the Table of Contents here to jump straight to any tip.
Table of Contents
Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.
Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.
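To make the idea concrete, here is a minimal sketch (not a recommendation of any particular tool) of how a password manager might generate strong, unique passwords, using Python’s standard secrets module; the length, word count, and tiny example wordlist are illustrative choices only.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the operating system's cryptographic RNG,
    # unlike the predictable random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

def generate_passphrase(wordlist: list[str], words: int = 6) -> str:
    """Build a diceware-style passphrase from a supplied wordlist."""
    # A real passphrase generator uses a wordlist of several thousand words.
    return " ".join(secrets.choice(wordlist) for _ in range(words))

if __name__ == "__main__":
    print(generate_password())
    print(generate_passphrase(["correct", "horse", "battery", "staple", "orbit", "lantern"]))
```

The point of the sketch is simply that each password is random and independent, so a breach at one site tells an attacker nothing about your other accounts.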
Two-factor authentication is the second lock on those same accounts. To log in to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account, because it’s far less likely they’ll have both your password and the temporary code.
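For a sense of how that always-changing code is produced, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that most authenticator apps use, written with only Python’s standard library; the secret below is made up for illustration.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()    # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Illustrative secret only; real secrets come from the QR code your
    # service shows when you enable two-factor authentication.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because your phone and the service both derive the code from the same shared secret and the current time, someone who steals only your password still can’t produce a valid code.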
This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication.
Come back tomorrow for another tip!
This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.
Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.
So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?
After reviewing nearly one hundred submissions and speaking with Meta to clarify their moderation practices, here’s what we’ve learned.
First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?
The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.
The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.
Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.
Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”
Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.
Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.
Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.
Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.
If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns of posts that fall entirely within Meta’s Community Standards, and even account suspensions. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations.
When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and explain how to appeal the decision. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.
Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.
Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.
Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.
Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.
This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
The Essequibo River is the queen of rivers all!
Buddy-ta-na-na, we are somebody, oh!
The Essequibo River is the queen of rivers all!
Buddy-ta-na-na, we are somebody, oh!
Somebody, oh, Johnny! Somebody, oh!
Buddy-ta-na-na, we are somebody, oh!
– Sea shanty, presumed Guyanese
There is a lot of bad on the internet and it seems to only be getting worse. But one of the things the internet did well, and is worth preserving, is nontraditional paths for creativity, journalism, and criticism. As governments and major corporations throw up more barriers to expression—and more and more gatekeepers try to control the internet—it’s important to learn how to crash through those gates.
In EFF's interview series, Gate Crashing, we talk to people who have used the internet to take nontraditional paths to the very traditional worlds of journalism, creativity, and criticism. We hope it's both inspiring to see these people and enlightening for anyone trying to find voices they like online.
Our mini-series will drop an episode each month, closing out 2025 in style.
Be sure to mark your calendar or check our socials on drop dates. If you have a friend or colleague who might be interested in watching our series, please forward this link: eff.org/gatecrashing
For over 35 years, EFF members have empowered attorneys, activists, and technologists to defend civil liberties and human rights online for everyone.
Tech should be a tool for the people, and we need you in this fight.
* This interview was originally published in December 2024. No changes have been made.
Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims.
WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage:
That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27.
But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus.
Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too.
Again, the link he inserted leads nowhere because Eva Galperin never wrote such a blog or white paper.
When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus.
“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level.
“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added.
It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying for journalists’ salaries, but that savings can come at the cost of the outlets’ reputations.
The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist after concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.”
And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.”
As journalist Peter Sterne wrote for Nieman Lab in 2022:
Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting.
(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.)
The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.
Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.
WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin:
An email to the outlet demanding the article’s retraction went unanswered.
In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine.
A request for comment made through WebProNews’ “Contact Us” page went unanswered, and then they did it again on September 2, this time misattributing a statement to Corynne McSherry:
No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance.
The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as being “at the forefront of AI innovation” and offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage:
Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights.
Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.
Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted.
But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us.
If you're worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy to learn how to discern the real from the fake. It’s unfortunate that it's the public’s burden to put in this much effort, but while we're adjusting to new tools and a new normal, a little effort now can go a long way.
As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends:
This is the seventh installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
For users hoping to promote or boost an abortion-related post on Meta platforms, the Community Standards are just step one. While the Community Standards apply to all posts, paid posts and advertisements must also comply with Meta's Advertising Standards. It’s easy to understand why Meta places extra requirements on paid content. In fact, their “advertising policy principles” outline several important and laudable goals, including promoting transparency and protecting users from scams, fraud, and unsafe and discriminatory practices.
But additional standards bring additional content moderation, and with that comes increased potential for user confusion and moderation errors. Meta’s ad policies, like its enforcement policies, are vague on a number of important questions. Because of this, it’s no surprise that Meta's ad policies repeatedly came up as we reviewed our Stop Censoring Abortion submissions.
There are two important things to understand about these ad policies. First, the ad policies do indeed impose stricter rules on content about abortion—and specifically medication abortion—than Meta’s Community Standards do. To help users better understand what is and isn’t allowed, we took a closer look at the policies and what Meta has said about them.
Second, despite these requirements, the ad policies do not categorically block abortion-related posts from being promoted as ads. In other words, while Meta’s ad policies introduce extra hurdles, they should not, in theory, be a complete barrier to promoting abortion-related posts as boosted content. Still, our analysis revealed that Meta is falling short in several areas.
When EFF asked Meta about potential ad policy violations, the company first pointed to its Drugs and Pharmaceuticals policy. In the abortion care context, this policy applies to paid content specifically about medication abortion and use of abortion pills. Ads promoting these and other prescription drugs are permitted, but there are additional requirements:
Understanding what counts as “promoting prescription drugs” is where things get murky. Crucially, the written policy states that advertisers do not need authorization to run ads that “educate, advocate or give public service announcements related to prescription drugs” or that “promote telehealth services generally.” This should, in theory, leave a critical opening for abortion advocates focused on education and advocacy rather than direct prescription drug sales.
But Meta told EFF that advertisers “must obtain authorization to post ads discussing medical efficacy, legality, accessibility, affordability, and scientific merits and restrict these ads to adults aged 18 or older.” Yet many of these topics—medical efficacy, legality, accessibility—are precisely what educational content and advocacy often address. Where’s the line? This vagueness makes it difficult for abortion pill advocates to understand what’s actually permitted.
Meta also told EFF that its Ads about Social Issues, Elections or Politics policy may apply to a range of abortion-related content. Under this policy, advertisers within certain countries—including the U.S.—must meet several requirements before running ads about certain “social issues.” Requirements include:
While certain news publishers are exempt from the policy, it otherwise applies to a wide range of accounts, including activists, brands, non-profit groups and political organizations.
Meta defines “social issues” as “sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation.” What falls under this definition differs by country, and Meta provides country-specific topics lists and examples. In the U.S. and several other countries, ads that include “discussion, debate, or advocacy for or against...abortion services and pro-choice/pro-life advocacy” qualify as social issues ads under the “Civil and Social Rights” category.
Confusingly, Meta differentiates this from ads that primarily sell a product or promote a service, which do not require authorization or disclaimers, even if the ad secondarily includes advocacy for an issue. For instance, according to Meta's examples, an ad that says, “How can we address systemic racism?” counts as a social issues ad and requires authorization and disclaimers. On the other hand, an ad that says, “We have over 100 newly-published books about systemic racism and Black History now on sale” primarily promotes a product, and would not require authorization and disclaimers. But even with Meta's examples, the line is still blurry. This vagueness invites confusion and content moderation errors.
Oddly, Meta never specifically identified its Health and Wellness ad policy to EFF, though the policy is directly relevant to abortion-related paid content. This policy addresses ads about reproductive health and family planning services, and requires ads regarding “abortion medical consultation and related services” to be targeted at users 18 and older. It also expressly states that for paid content involving “[r]eproductive health and wellness drugs or treatments that require prescription,” accounts must comply with both this policy and the Drugs and Pharmaceuticals policy.
This means abortion advocates must navigate the Drugs and Pharmaceuticals policy, the Social Issues policy, and the Health and Wellness policy—each with its own requirements and authorization processes. That Meta didn’t mention this highly relevant policy when asked about abortion advertising underscores how confusingly dispersed these rules are.
Like the Drugs policy, the Health and Wellness policy contains an important education exception for abortion advocates: The age-targeting requirements do not apply to “[e]ducational material or information about family planning services without any direct promotion or facilitation of the services.”
Meta's complex policies create fertile ground for automated moderation errors. Our Stop Censoring Abortion survey submissions revealed that Meta's systems repeatedly misidentified educational abortion content as Community Standards violations. The same over-moderation problems are also a risk in the advertising context.
On top of that, content moderation errors even on unpaid posts can trigger advertising restrictions and penalties. Meta's advertising restrictions policy states that Community Standards violations can result in restricted advertising features or complete advertising bans. This creates a compounding problem when educational content about abortion is wrongly flagged. Abortion advocates could face a double penalty: first their content is removed, then their ability to advertise is restricted.
This may be, in part, what happened to Red River Women's Clinic, a Minnesota abortion clinic we wrote about earlier in this series. When its account was incorrectly suspended for violating the “Community Standards on drugs,” the clinic appealed and eventually reached out to a contact at Meta. When Meta finally removed the incorrect flag and restored the account, Red River received a message informing them they were no longer out of compliance with the advertising restrictions policy.
Screenshot submitted by Red River Women's Clinic to EFF
Our review of the ad policies and survey submissions showed that there is room for improvement in how Meta handles abortion-related advertising.
First, Meta should clarify what is permitted without prior authorization under the Drugs and Pharmaceuticals policy. As noted above, the policies say advertisers do not need authorization to “educate, advocate or give public service announcements,” but Meta told EFF authorization is needed to promote posts discussing “medical efficacy, legality, accessibility, affordability, and scientific merits.” Users should be able to more easily determine what content falls on each side of that line.
Second, Meta should clarify when its Social Issues policy applies. Does discussing abortion at all trigger its application? Meta says the policy excludes posts primarily advertising a service, yet this is not what survey respondent Lynsey Bourke experienced. She runs the Instagram account Rouge Doulas, a global abortion support collective and doula training school. Rouge Doulas had a paid post removed under this very policy for advertising something that is clearly a service: its doula training program called “Rouge Abortion Doula School.” The policy’s current ambiguity makes it difficult for advocates to create compliant content with confidence.
Third, and as EFF has previously argued, Meta should ensure its automated system is not over-moderating. Meta must also provide a meaningful appeals process for when errors inevitably occur. Automated systems are blunt tools and are bound to make mistakes on complex topics like abortion. But simply using an image of a pill on an educational post shouldn’t automatically trigger takedowns. Improving automated moderation will help correct the cascading effect of incorrect Community Standards flags triggering advertising restrictions.
With clearer policies, better moderation, and a commitment to transparency, Meta can make it easier for accounts to share and boost vital reproductive health information.
This is the seventh post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.
ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes—it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.
Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting it try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law.
The case then took turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).
The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts.
EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law.
This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law.