
The War on Encryption Is Dangerous

By: Nick Heer
25 March 2025 at 23:58

Meredith Whittaker, president of Signal — which has recently been in the news — in an op-ed for the Financial Times:

The UK is part and parcel of a dangerous trend that threatens the cyber security of our global infrastructures. Legislators in Sweden recently proposed a law that would force communication providers to build back door vulnerabilities. France is poised to make the same mistake when it votes on the inclusion of “ghost participants” in secure conversations via back doors. “Chat control” legislation haunts Brussels.

There is some good news: French legislators ultimately rejected this provision.


⌥ Ambiguity and Trust in Apple Intelligence

By: Nick Heer
5 December 2024 at 04:59

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of changing preferences and of employing dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one that introduced Apple Intelligence, which creates new ambiguity between what is happening on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position necessarily must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It controls the Siri Suggestions that appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. Like any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

Cybersecurity Breach at Calgary Public Library

By: Nick Heer
12 October 2024 at 19:13

CBC News:

All Calgary Public Library locations closed early on Friday after a cybersecurity breach compromised some systems, according to a spokesperson.

All locations were shut down as of 5 p.m.

Between this and the Internet Archive, criminals are picking terrible targets this week. I am not saying any attacks are acceptable, but these ones are particularly cruel.


The Internet Archive Is Under DDoS Attack

By: Nick Heer
9 October 2024 at 23:28

Jason Scott:

Someone is DDOSing the internet archive, so we’ve been down for hours. According to their twitter, they’re doing it just to do it. Just because they can. No statement, no idea, no demands.

An X account claiming responsibility says it is a politically motivated attack. If that is true, it is an awfully stupid rationale and a poor choice of target.

Wes Davis, the Verge:

Here’s what the popup said:

“Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!”

HIBP refers to Have I Been Pwned?, a website where people can look up whether or not their information has been published in data leaked from cyber attacks. It’s unclear what is happening with the site, but attacks on services like TweetDeck have exploited XSS or cross-site scripting vulnerabilities with similar effects.

I have no idea if this group actually obtained any Internet Archive user data. The site has only a placeholder page directing visitors to its X account for status updates, but I see nothing there or on Brewster Kahle’s personal one.

Update: Three minutes after publishing this post, I received an alert from Have I Been Pwned that my Internet Archive account was one of over 31 million total which had been exposed. Troy Hunt, who runs HIBP, and Lawrence Abrams of Bleeping Computer both tried contacting the Internet Archive with no response.


WSJ: U.S. Wiretap Systems Targeted in China-Linked Hack

By: Nick Heer
9 October 2024 at 02:51

Sarah Krouse, Dustin Volz, Aruna Viswanatha, and Robert McMillan, Wall Street Journal (probably paywalled; sorry):

A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers, potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests.

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

Zack Whittaker, TechCrunch:

The 30-year-old law that set the stage for recent backdoor abuse is the Communications Assistance for Law Enforcement Act, or CALEA, which became law in 1994 at a time when cell phones were a rarity and the internet was still in its infancy.

CALEA requires that any “communications provider,” such as a phone company or internet provider, must provide the government all necessary assistance to access a customer’s information when presented with a lawful order. In other words, if there is a means to access a customer’s data, the phone companies and internet providers must provide it.

Bruce Schneier:

For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers.

Riana Pfefferkorn:

It is not the ‘90s anymore, when CALEA got enacted, the law requiring telecom wiretappability for law enforcement. China and Russia and DPRK are formidable cyber foes now. DOJ, FBI, etc. want to change CALEA so that encrypted apps like Signal or WhatsApp aren’t exempt from it anymore. But this hack shows that if anything, the law needs to change in the *other* direction. The hack needs to be a wake-up call to law enforcement that as long as they keep opposing encryption for communications, they’re enabling China to smack us in the face with our own hand while saying “stop hitting yourself!”

According to a 2016 paper from Public Safety Canada, “Australia, the U.S., the UK and many other European nations require CSPs [Communications Service Providers] to have an interception capability”; it also notes Canada does not. Such a requirement is understandable from an investigative perspective. But, as Pfefferkorn says, capabilities like these have been exploited before, and it will happen again. These are big targets and there are no safe backdoors.

That brings me — for the second time today — to the need for comprehensive privacy legislation basically everywhere but, in particular, in the United States, the hub of the world’s communications. Protecting private data would dramatically curtail this kind of access violation by removing backdoors, restrict one aspect of TikTok panic, and reduce the exploitation of our behavioural data by creepy ad tech businesses. It is not a panacea and I am sure there are worrisome side effects for law enforcement, but it would likely be more effective than tackling these problems on an individual basis.


Apple’s Permissions Features Are Out of Balance

By: Nick Heer
8 August 2024 at 04:00

Jason Snell, Six Colors:

Apple’s recent feature changes suggest a value system that’s wildly out of balance, preferring to warn (and control) users no matter how damaging it is to the overall user experience. Maybe the people in charge should be forced to sit down and watch that Apple ad that mocks Windows Vista. Vista’s security prompts existed for good reasons — but they were a user disaster. The Apple of that era knew it. I’d guess a lot of people inside today’s Apple know it, too — but they clearly are unable to win the arguments when it matters.

The first evidence of this relentless slog of permissions prompts occurred on iOS. Want to allow this app to use the camera? Tap allow. See your location? Tap allow. Access your contacts? Tap allow. Send you notifications? Tap allow. On and on it goes, sweeping up the Mac in this relentless offloading of responsibility onto users.

On some level, I get it. Our devices are all synced with one another, passing our identities and secret information between them constantly. We install new applications without thinking too much about what they could be doing in the background. We switch on automatic updates with similar indifference. (If you are somebody who does not do these things, please do not write. I know you are there; I respect you; you are one of few.)

But relentless user confirmation is not a good answer for privacy, security, or competition. It merely kicks the can down the road, and suggests users cannot be trusted, yet must bear all the responsibility for their choices.


MacOS Sequoia Raises the Gatekeeper Walls

By: Nick Heer
6 August 2024 at 23:46

Apple, in a Developer News bulletin:

In macOS Sequoia, users will no longer be able to Control-click to override Gatekeeper when opening software that isn’t signed correctly or notarized. They’ll need to visit System Settings > Privacy & Security to review security information for software before allowing it to run.

This is one of those little things which will go unnoticed by most users, but will become a thorn in the side of anyone who relies on it. Those affected are likely developers and other technologically literate people, increasingly placed in the position of fighting with the tools they use to get things done. It may be a small thing, but small things add up.

Update: The weekly permission prompts for screen and audio recording, on the other hand, might be noticed by a lot more people.


⌥ The Fight for End-to-End Encryption Is Worldwide

By: Nick Heer
22 June 2024 at 01:45

Since 2022, the European Parliament has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.
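For readers unfamiliar with the perceptual hash functions Green mentions, here is a toy sketch. This is an "average hash", a far simpler scheme than Apple's NeuralHash but in the same family: near-identical images produce identical or nearby hashes, which is what makes detecting known content possible. All values below are illustrative.

```python
def average_hash(pixels):
    """pixels: a 2-D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: is it brighter than the image's average?
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

image = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [221, 29]]  # a re-encoded or lightly altered copy
different = [[240, 20], [15, 235]]

print(hamming(average_hash(image), average_hash(slightly_edited)))  # 0: a "match"
print(hamming(average_hash(image), average_hash(different)))        # 4: no match
```

The fragility Green alludes to is also visible here: an adversary who can nudge pixel values across the mean can flip bits at will, producing false negatives, or craft unrelated images that collide.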

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world:

I selected English-speaking areas because that is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

ProPublica: Microsoft Refused to Fix Flaw Years Before SolarWinds Hack

By: Nick Heer
14 June 2024 at 05:31

Renee Dudley and Doris Burke, reporting for ProPublica which is not, contrary to the opinion of one U.S. Supreme Court jackass justice, “very well-funded by ideological groups” bent on “look[ing] for any little thing they can find, and they try[ing] to make something out of it”, but is instead a distinguished publication of investigative journalism:

Microsoft hired Andrew Harris for his extraordinary skill in keeping hackers out of the nation’s most sensitive computer networks. In 2016, Harris was hard at work on a mystifying incident in which intruders had somehow penetrated a major U.S. tech company.

[…]

Early on, he focused on a Microsoft application that ensured users had permission to log on to cloud-based programs, the cyber equivalent of an officer checking passports at a border. It was there, after months of research, that he found something seriously wrong.

This is a deep and meaningful exploration of Microsoft’s internal response to the conditions that created 2020’s catastrophic SolarWinds breach. It seems that both Microsoft and the Department of Justice knew well before anyone else — perhaps as early as 2016 in Microsoft’s case — yet neither did anything with that information. Other things were deemed more important.

Perhaps this was simply a multi-person failure in which dozens of people at Microsoft could not see why Harris’ discovery was such a big deal. Maybe they all could not foresee this actually being exploited in the wild, or there was a failure to communicate some key piece of information. I am a firm believer in Hanlon’s razor.

On the other hand, the deep integration of Microsoft’s entire product line into sensitive systems — governments, healthcare, finance — magnifies any failure. The incompetence of a handful of people at a private corporation should not result in 18,000 infected networks.

Ashley Belanger, Ars Technica:

Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

[…]

Microsoft did not dispute ProPublica’s report. Instead, the company provided a statement that almost seems to contradict Smith’s testimony to Congress today by claiming that “protecting customers is always our highest priority.”

Microsoft’s public relations staff can say anything they want. But there is plenty of evidence — contemporary and historic — showing this is untrue. Can it do better? I am sure Microsoft employs many intelligent and creative people who desperately want to change this corrupted culture. Will it? Maybe — but for how long is anybody’s guess.


Inside the Copilot Recall ‘Disaster’

By: Nick Heer
3 June 2024 at 17:53

Kevin Beaumont:

At a surface level, it [Recall] is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago.

In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people in the real world, not executive world, is basically like punching customers in the face. The echo chamber effect inside Microsoft is real here, and oh boy… just oh boy. It’s a rare misfire, I think.

Via Eric Schwarz:

This fact that this feature is basically on by default and requires numerous steps to disable is going to create a lot of problems for people, especially those who click through every privacy/permission screen and fundamentally don’t know how their computer actually operates — I’ve counted way too many instances where I’ve had to help people find something and they have no idea where anything lives in their file system (mostly work off the Desktop or Downloads folders). How are they going to even grapple with this?

The problems with Recall remind me of the minor 2017 controversy around “brassiere” search results in Apple’s Photos app. Like Recall, it is entirely an on-device process with some security and privacy protections. In practice, automatically cataloguing all your photos which show a bra is kind of creepy, even if it is being done only with your own images on your own phone.


Microsoft Recall

By: Nick Heer
23 May 2024 at 01:29

Yusuf Mehdi of Microsoft:

Now with Recall, you can access virtually what you have seen or done on your PC in a way that feels like having photographic memory. Copilot+ PCs organize information like we do – based on relationships and associations unique to each of our individual experiences. This helps you remember things you may have forgotten so you can find what you’re looking for quickly and intuitively by simply using the cues you remember.

[…]

Recall leverages your personal semantic index, built and stored entirely on your device. Your snapshots are yours; they stay locally on your PC. You can delete individual snapshots, adjust and delete ranges of time in Settings, or pause at any point right from the icon in the System Tray on your Taskbar. You can also filter apps and websites from ever being saved. You are always in control with privacy you can trust.

Recall is the kind of feature I have always wanted but I am not sure I would ever enable. Setting aside Microsoft’s recent high-profile security problems, it seems like there is a new risk in keeping track of everything you see on your computer — bank accounts, a list of passwords, messages, work documents and other things sent by a third party which they expect to be confidential, credit card information — for a rolling three month window.

Microsoft says all the right things about this database. It says it is all stored locally, never shared with Microsoft, access controlled, and user configurable. And besides, screen recorders have existed forever, and keeping local copies of sensitive information has always been a balance of risk.

But this is a feature that creates a rolling record of just about everything. It somehow feels more intrusive than a web browser’s history and riskier than a password manager. The Recall directory will be a new favourite target for malware. Oh and, in addition to Microsoft’s own security issues, we have just seen a massive breach of LastPass. Steal now, solve later.

This is a brilliant, deeply integrated service. It is the kind of thing I often need as I try to remember some article I read and cannot quite find it with a standard search engine. Yet even though I already have my credit cards and email and passwords stored on my computer, something about a screenshot timeline is a difficult mental hurdle to clear — not entirely rationally, but not irrationally either.


Defective by Design: Reflected Attacks on Email Privacy using DMARC

18 June 2020 at 16:00

Email privacy is dead, confirmed for the umpteenth time. In this post I present the SOILED-PRIVACY attacks (Systemic Online Information Leakage using Email+DMARC against Privacy). These are two reflected attacks against email infrastructure allowing an attacker to access private knowledge about a target user, transparent to mail forwarding, mailing lists, and web services.

Many email users choose to use an address under a generic domain under the control of a commercial email-service provider, their ISP, or a cooperative operator; however, there is also a large proportion of email users that choose an address under a domain they or their organisation owns. There are many advantages to using a domain under your own control, including (but by no means limited to): portability between email service providers; increased personal or corporate branding potential; and improved user choice or privacy as a result of self-hosted infrastructure.

However, a custom domain also necessarily entails some privacy downsides: because there is a trivial mapping between personality and domain name, it is possible for any person or service sending mail to an address under the domain to identify its owner. Depending on the use case of the emails being sent, this may not be a particular problem, or may even be an advantage. However, in many cases it is a serious downside: very often, users do not want to be identified personally by all the services they use or content providers they subscribe to. These downsides have led to the development of various workarounds to protect user privacy. Such workarounds include:

  • Anonymous or single-use forwarding addresses.
  • Mailing lists not publishing lists of recipient addresses.
  • Web services proxying user-user mail (e.g. with a web form or forwarding address).

Users of custom domains have come to trust that these workarounds are effective at protecting their private activities, limiting their trust boundary to the forwarding-address operator, mailing-list service, or trusted web service, rather than having to trust the whole of the internet or all of the other users of the services they use. The attacks presented here show that these workarounds are not effective: they do not protect the privacy of the recipient, and should not be relied upon unless users are aware of the attacks or the relevant mitigations have been applied.

DMARC: Defective by Design

DMARC is an internet technology that aims to combat email forgery and reduce the incidence of spam sent with forged From addresses. It achieves this by standardising a DNS TXT record that a domain’s owner can attach to it. A server that receives an email can fetch the record for the domain in the email’s From header. DMARC gives the receiving server guidance on what to do when DKIM and SPF checks fail: reject the message, quarantine it (typically by placing it in the spam folder), or do nothing. Although DMARC has been subject to valid criticisms (particularly because it breaks many existing mailing lists and forwarding systems), it is a valuable tool in the fight against forged transactional email and spam.

As well as giving guidance to receiving servers, DMARC also establishes a feedback loop: receiving servers are encouraged to send reports detailing successful and failed forgery checks back to a location nominated by the owner of the From domain. In theory, this allows email senders to track down and fix errors in the infrastructure causing unwarranted SPF or DKIM failures.
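To see what one of these reports actually hands back to the From-domain's owner, here is a minimal sketch that pulls the revealing fields out of a stripped-down aggregate-report record (the sample XML is illustrative; real reports follow the full schema in RFC 7489's appendix, but use these element names):

```python
import xml.etree.ElementTree as ET

# Illustrative, stripped-down record from a DMARC aggregate report.
# The values here are invented for the example.
SAMPLE = """<record>
  <row>
    <source_ip>198.51.100.5</source_ip>
    <count>1</count>
  </row>
  <identifiers>
    <header_from>attacker.example</header_from>
  </identifiers>
</record>"""

record = ET.fromstring(SAMPLE)
# source_ip is the server that handed the message to the reporter:
# this is the field that exposes forwarding destinations.
source_ip = record.findtext("row/source_ip")
header_from = record.findtext("identifiers/header_from")
print(source_ip, header_from)
```

The report's own metadata additionally names the reporting organisation and its domain, which is what identifies the receiving mail system to whoever published the `rua=` address.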

However, in practice this opens up significant privacy issues which have not been communicated properly to the internet at large. I quote from the DMARC specification, RFC 7489:

When message-forwarding arrangements exist, Domain Owners requesting reports will also receive information about mail forwarded to domains that were not originally part of their messages’ recipient lists. This means that destination domains previously unknown to the Domain Owner may now become visible.

The attacks presented in the next section exploit this specified behaviour, which is inherent to the DMARC aggregate reports sent by a very large proportion of email providers, ranging in size from individuals' personal mail servers to behemoths like Gmail and Yahoo. Forensic reports are even worse; again quoting the specification:

Failed-message reporting provides message-specific details pertaining to authentication failures. Individual reports can contain message content as well as trace header fields. […]

although the [format] used for failed-message reporting supports redaction, failed-message reporting is capable of exposing the entire message to the report recipient.

Luckily, very few email providers send DMARC forensic reports, for obvious reasons: not only the privacy breach inherent in sending detailed reports, but also the potential for amplified (or even self-reinforcing) backscatter DDoS attacks, and the cost of significantly increased processing on their own machines.

Attack 1: Reflected Self-Exfiltration using Aggregate DMARC

This is the simplest case: the attacker arranges for an email to be sent with their own domain in the From header, having published a DMARC record requesting aggregate reports. The attack email (which can have any content or none, and may even be rejected by the receiving server under DMARC rules) is sent to an address that does not personally identify the victim but forwards to a domain that does. This could be a mailing list, an anonymous forwarding address, or a web service.

When the mailserver for the victim's domain receives the email and dutifully returns a DMARC aggregate report, the victim's identity is revealed.

Example 1:

malicious@attacker.example -> xxx@mailinglists.example -> itsme@alice.example

The attacker sends an email to a mailing list submission address, xxx@mailinglists.example. The attacker does not know who will receive the mail, because the mailing list does not publish subscriber information. The email is forwarded to itsme@alice.example, an address that identifies the victim, Alice.

The receiving server for alice.example then sends a DMARC report to the attacker, identifying alice.example as the sender of the report. The attacker now knows that Alice subscribes to the target mailing list.

Example 2:

malicious@attacker.example -> totallynotbob@gmail.com -> hi@bobworld.example

The attacker sends an email to an unknown address that does not identify the victim, here totallynotbob@gmail.com. This mail is then forwarded to hi@bobworld.example, an address that identifies the victim, Bob.

The receiving server for bobworld.example then sends a DMARC report to the attacker, identifying bobworld.example as the sender of the report. The attacker now knows that totallynotbob@gmail.com is actually Bob.

Example 3:

malicious@attacker.example -> potato on InnocentWebCommunity.example, via web form -> contact@charliesplace.example

The attacker sends an email to potato, a user on InnocentWebCommunity, via the web form on InnocentWebCommunity.example. The service delivers it to contact@charliesplace.example with malicious@attacker.example in the From header. potato is not an identifiable username, and the attacker wants to discover the user's real identity.

The receiving server for charliesplace.example then sends a DMARC report to the attacker, identifying charliesplace.example as the sender of the report. The attacker now knows that potato is actually Charlie.

Attack 2: Forwarded Reflected Exfiltration using Aggregate DMARC

In this case, similar to Attack 1, the attacker arranges that an email is sent to the victim with the attacker’s domain as the From, and elects to receive DMARC reports for that domain.

The attack email must be sent to an address that does not personally identify the victim but that forwards to an address whose domain does. The difference from Attack 1 is that this identifiable-domain address then forwards the email on again, to another provider that sends DMARC aggregate reports.

Since the personally-identifiable domain forwards the email, the IP and domain of its mailserver will become visible in DMARC reports sent by the ultimate receiving server back to the attacker.
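A sketch of how that visibility could be exploited: the attacker already knows, from public MX and A records, which netblocks the candidate victims' mailservers use, and simply matches the `source_ip` values from incoming aggregate reports against them (the domains and netblocks below are invented for the example):

```python
import ipaddress

# Hypothetical candidates: netblocks the attacker has looked up in
# advance from each candidate domain's public MX/A records.
CANDIDATE_MAILSERVERS = {
    "alice.example": ipaddress.ip_network("198.51.100.0/28"),
    "bob.example": ipaddress.ip_network("203.0.113.64/28"),
}

def identify_forwarder(report_source_ip: str) -> list[str]:
    """Match a source_ip from an aggregate report against known netblocks."""
    ip = ipaddress.ip_address(report_source_ip)
    return [domain for domain, net in CANDIDATE_MAILSERVERS.items()
            if ip in net]
```

A report naming a source IP inside alice.example's netblock tells the attacker that Alice's server handled (and forwarded) the probe.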

This case is more complicated and applies to fewer email users than Attack 1, but it has been mitigated by very few third-party email providers, if any.

Examples

All the examples for Attack 1 apply for Attack 2, when a final forward to example@gmail.com is performed by the personally-identified domain.

Are you vulnerable?

  • If your email addresses are only under generic domains, you are not vulnerable.
  • If none of the servers processing email for you send DMARC reports or participate in feedback loops, you are not vulnerable (but you cannot assume things will stay this way).
  • If you do not forward emails from a custom domain, and the servers processing your mail anonymise DMARC reports, you are not vulnerable. (This applies to most, but not all, custom-domain users on commercial mail hosts, and makes Attack 1 less potent than it might otherwise be.)

If none of the above apply, you may be vulnerable. You should review your email pipelines and ensure that the appropriate mitigations, some of which are detailed below, have been applied.
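The checklist above can be encoded as a small decision helper. This is only a sketch; the parameter names are mine, and the verdicts are as coarse as the checklist itself:

```python
def dmarc_exposure(uses_custom_domain: bool,
                   servers_send_reports: bool,
                   forwards_from_custom_domain: bool,
                   reports_anonymised: bool) -> str:
    """Coarse verdict mirroring the vulnerability checklist above."""
    if not uses_custom_domain:
        # Generic-domain addresses only: reports reveal nothing personal.
        return "not vulnerable"
    if not servers_send_reports:
        # Safe today, but the servers' behaviour can change without notice.
        return "not vulnerable (for now)"
    if not forwards_from_custom_domain and reports_anonymised:
        return "not vulnerable"
    return "possibly vulnerable: review your email pipeline"
```

For instance, a custom-domain user on a commercial host that anonymises reports and does no onward forwarding comes out "not vulnerable", matching the third bullet.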

What can you do about this?

The only foolproof method to avoid these attacks is to use a generic domain (e.g. hotmail, aol, etc.) to receive your email, and not to forward any mail to a custom domain once it is received.

If you use your own domain, do not automatically forward mail from this domain. If you do need to forward mail in this situation, make sure you forward to a server you control and trust not to send DMARC reports.

If you control your own mail servers, disable all DMARC reporting or delivery feedback loop mechanisms.
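How exactly you disable reporting depends on your filter software. As one example, assuming an OpenDMARC deployment (option names per its configuration man page; check your own version), a sketch:

```
# /etc/opendmarc.conf (sketch)
FailureReports false    # never send forensic (failure) reports
# Aggregate reports are only sent if you run the opendmarc-reports
# tool against HistoryFile from cron -- simply don't schedule that job.
```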

If you don’t control the servers that receive your mail, switch to ones you do control or ask the server owner to disable all DMARC reporting and to not participate in delivery feedback loop mechanisms.

Commentary

The fact that this simple and obvious privacy leak has been standardised and adopted by the vast majority of email service providers (even many of those that loudly advertise 'protecting your privacy') is rather mind-boggling.

This speaks to the general state of email privacy: fully protecting yourself is practically impossible. Without significant behaviour changes and deep knowledge of every component in the email pipeline, there are so many holes that patching one makes little difference.

My opinion is that there is a fundamental problem in the ecosystem: email standards are too complicated, and have not been designed with privacy in mind. When problems with existing standards have arisen, the solution seems to have been adding additional complexity layers on top of the existing ones.

Unfortunately this is partly a problem of interoperability: email is a federated system that has developed over time, and standards developers have tried to keep backwards compatibility when writing new standards. The negative effects of some of those decisions are rather more evident in hindsight than they were at the time.

I don't have a solution to this problem, nor do I believe switching to some alternative ecosystem is a good idea: if email didn't work well, it wouldn't be as successful as it is. Nevertheless, it's sad that things are as bad as they are.
