
Public Figures Keep Leaving Their Venmo Accounts Public

By: Nick Heer
27 March 2025 at 04:00

The high-test idiocy of a senior U.S. politician inviting a journalist into an off-the-record chat planning an attack on Yemen, one which killed over thirty people and continued a decade of war, seems to have popularized a genre of journalism dedicated to the administration’s poor digital security hygiene. Some of these articles feel less substantial; others suggest greater crimes. One story feels like déjà vu.

Dhruv Mehrotra and Tim Marchman, Wired:

The Venmo account under [Mike] Waltz’s name includes a 328-person friend list. Among them are accounts sharing the names of people closely associated with Waltz, such as [Walker] Barrett, formerly Waltz’s deputy chief of staff when Waltz was a member of the House of Representatives, and Micah Thomas Ketchel, former chief of staff to Waltz and currently a senior adviser to Waltz and President Donald Trump.

[…]

One of the most notable appears to belong to [Susie] Wiles, one of Trump’s most trusted political advisers. That account’s 182-person friend list includes accounts sharing the names of influential figures like Pam Bondi, the US attorney general, and Hope Hicks, Trump’s former White House communications director.

In 2021, reporters for BuzzFeed News found Joe Biden’s Venmo account and his contacts. Last summer, the same Wired reporters, plus Andrew Couts, found J.D. Vance’s; in February, reporters for the American Prospect found Pete Hegseth’s. It remains a mystery to me why one of the most popular U.S. payment apps is this public.

⌥ Permalink

The War on Encryption Is Dangerous

By: Nick Heer
25 March 2025 at 23:58

Meredith Whittaker, president of Signal — which has recently been in the news — in an op-ed for the Financial Times:

The UK is part and parcel of a dangerous trend that threatens the cyber security of our global infrastructures. Legislators in Sweden recently proposed a law that would force communication providers to build back door vulnerabilities. France is poised to make the same mistake when it votes on the inclusion of “ghost participants” in secure conversations via back doors. “Chat control” legislation haunts Brussels.

There is some good news: French legislators ultimately rejected this provision.

⌥ Permalink

Facebook to Stop Targeting Ads at U.K. Woman After Legal Fight

By: Nick Heer
25 March 2025 at 03:05

Grace Dean, BBC News:

Ms O’Carroll’s lawsuit argued that Facebook’s targeted advertising system was covered by the UK’s definition of direct marketing, giving individuals the right to object.

Meta said that adverts on its platform could only be targeted to groups of a minimum size of 100 people, rather than individuals, so did not count as direct marketing. But the Information Commissioner’s Office (ICO) disagreed.

“Organisations must respect people’s choices about how their data is used,” a spokesperson for the ICO said. “This means giving users a clear way to opt out of their data being used in this way.”

Meta, in response, says “no business can be mandated to give away its services for free”, a completely dishonest way to interpret the ICO’s decision. There is an obvious difference between advertising and personalized advertising. To pretend otherwise is nonsense. Sure, personalized advertising makes Meta more money than non-personalized advertising, but that is an entirely different problem. Meta can figure it out. Or it can be a big soggy whiner about it.

⌥ Permalink

TikTok Loses U.S. Divest-or-Ban Appeal

By: Nick Heer
7 December 2024 at 05:13

Liv McMahon and Lily Jamali, BBC News:

TikTok’s bid to overturn a law which would see it banned or sold in the US from early 2025 has been rejected.

[…]

TikTok says it will now take its fight to the US Supreme Court, the country’s highest legal authority.

The court’s opinion (PDF) is not particularly long. As this is framed as a question of national security, the court gives substantial deference to the government’s assessment of TikTok’s threat. It also views the legislation passed earlier this year to limit data brokers as a complementary component of this TikTok divest-or-ban law.

I still do not find this argument particularly compelling. There is still too much dependence on classified information and too little public evidence. A generous interpretation of this is the court knows something I do not, and perhaps this is completely justified. But who knows? The paranoia over this app is leaking but the proof is not.

Donald Trump’s victory in the 2024 US Presidential Election may also present a lifeline for the app.

Despite unsuccessfully attempting to ban TikTok during his first term in 2020, he said in the run-up to the November elections he would not allow the ban on TikTok to take effect.

I would be shocked if the incoming administration remains committed to overturning this ban, and not just because of its historically flaky reputation. This very decision references the actions of the first Trump presidency, though it owes more to the more tailored policies of the Biden administration.

If the U.S. Supreme Court does not stay this order and TikTok’s U.S. operations are not jettisoned from its global business, the ban will go into effect the day before Trump’s inauguration.

⌥ Permalink

⌥ Ambiguity and Trust in Apple Intelligence

By: Nick Heer
5 December 2024 at 04:59

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history with changing preferences and dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one with Apple Intelligence, which creates new ambiguity between what is happening on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. In launching these features in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions are taking place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It controls Siri Suggestions, which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.
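As a rough illustration of how suggestions like these can be generated entirely on-device, here is a minimal sketch of a context-based suggester. The app names and signals are invented, and Apple’s actual App Intents machinery is both more sophisticated and not public.

```python
# Toy on-device suggestion engine: score apps by how often they were
# opened in similar contexts. All names and signals are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    hour: int        # 0-23
    charging: bool

def bucket(ctx: Context) -> tuple:
    # Coarse buckets so sparse history still produces matches.
    period = "night" if ctx.hour >= 22 or ctx.hour < 6 else "day"
    return (period, ctx.charging)

class Suggester:
    def __init__(self):
        self.history = Counter()  # (bucket, app) -> open count

    def record_open(self, ctx: Context, app: str):
        self.history[(bucket(ctx), app)] += 1

    def suggest(self, ctx: Context, n: int = 3) -> list:
        scores = Counter()
        for (b, app), count in self.history.items():
            if b == bucket(ctx):
                scores[app] += count
        return [app for app, _ in scores.most_common(n)]

s = Suggester()
for _ in range(5):
    s.record_open(Context(hour=23, charging=True), "Alarm Clock")
print(s.suggest(Context(hour=22, charging=True)))  # ['Alarm Clock']
```

Nothing in a sketch like this requires a server; the privacy question is what happens when the learned history syncs between devices, which is the part Apple’s documentation addresses below.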

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. Like any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

U.S. Regulators Propose Reining In Data Brokers

By: Nick Heer
4 December 2024 at 00:19

Out of the U.S. today comes a slew of new proposed restrictions against data brokers and their creepy practices.

The Consumer Financial Protection Bureau:

[…] The proposed rule would limit the sale of personal identifiers like Social Security Numbers and phone numbers collected by certain companies and make sure that people’s financial data such as income is only shared for legitimate purposes, like facilitating a mortgage approval, and not sold to scammers targeting those in financial distress. The proposal would make clear that when data brokers sell certain sensitive consumer information they are “consumer reporting agencies” under the Fair Credit Reporting Act (FCRA), requiring them to comply with accuracy requirements, provide consumers access to their information, and maintain safeguards against misuse.

The Federal Trade Commission:

The Federal Trade Commission will prohibit data broker Mobilewalla, Inc. from selling sensitive location data, including data that reveals the identity of an individual’s private home, to settle allegations the data broker sold such information without taking reasonable steps to verify consumers’ consent.

And also the Federal Trade Commission:

The Federal Trade Commission is taking action against Gravy Analytics Inc. and its subsidiary Venntel Inc. for unlawfully tracking and selling sensitive location data from users, including selling data about consumers’ visits to health-related locations and places of worship.

Both of the proposed FTC orders require these businesses to “maintain a sensitive location data program designed to develop a list of sensitive locations and prevent the use, sale, license, transfer, sharing, or disclosure of consumers’ visits to those locations”. These include, for example and in addition to those in the above quotes, shelters, labour union offices, correctional facilities, and military installations. This order was previewed last month in Wired.
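To make the required “sensitive location data program” concrete, here is a minimal sketch of its core mechanism: a geofence scrub pass over location pings. The site list, coordinates, and radii are invented, and a real compliance program would involve far more than a distance test.

```python
import math

# Hypothetical list of sensitive sites: (name, lat, lon, radius in metres).
SENSITIVE_SITES = [
    ("health clinic", 40.7410, -73.9897, 150.0),
    ("place of worship", 40.7484, -73.9857, 150.0),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_sensitive(lat, lon):
    return any(
        haversine_m(lat, lon, slat, slon) <= radius
        for _, slat, slon, radius in SENSITIVE_SITES
    )

def scrub(pings):
    """Drop location pings that fall inside any sensitive geofence."""
    return [p for p in pings if not is_sensitive(p["lat"], p["lon"])]

# The first ping is inside the clinic geofence and is dropped.
print(scrub([{"lat": 40.7411, "lon": -73.9896}, {"lat": 40.7000, "lon": -74.0000}]))
```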

As usual, I am conflicted about these policies. While they are yet another example of Lina Khan’s FTC and other government bureaucrats cracking down on individually threatening data brokers, it would be far better for everyone if this were not handled on a case-by-case basis. These brokers have already caused a wealth of damage around the world, and only they are being required to stop. Other players in the rest of the data broker industry will either self-govern or hope they do not fall into the FTC’s crosshairs, and if you believe the former is more likely, you have far greater faith in already-shady businesses than I do.

There is another wrench in these proposals: we are less than two months away from a second Trump presidency, and the forecast for the CFPB looks unfriendly. It was kneecapped during the first administration and it is on the chopping block for those overseeing an advisory committee masquerading as a government agency. The future of the FTC is murkier, with some indicators it will continue its current path — albeit from a Republican-skewed perspective — while others suggest a reversal.

The centring of the U.S. in the digital activity of a vast majority of us gives it unique power on privacy — power it has, so far, used in only very small doses. The future of regulatory agencies like these has relevance to all of us.

⌥ Permalink

‘Surveilled’ Documents Ronan Farrow’s Reporting on Spyware

By: Nick Heer
27 November 2024 at 05:39

Ronan Farrow, the New Yorker:

Decisions by the White House and by Republican lawmakers about spyware will have implications across a variety of policy areas that Trump and his associates are upending and that reach far beyond Washington. In recent years, an array of states, including Texas, Florida, and California have reportedly purchased spyware and other surveillance technologies; legislators and regulators will dictate whether that trend continues. Since the fall of Roe v. Wade, at least two states have already used private personal data to prosecute people for getting abortions. That practice could expand with more widespread and affordable access to this technology.

This article appears to have been timed to coincide with the release of a new documentary on HBO, showing Farrow reporting out stories on NSO Group and other commercial spyware makers. It is not the most substantive piece and I think that plus the headline — “The Technology the Trump Administration Could Use to Hack Your Phone” — is more distracting than it is illuminating. U.S. administrations have, since George W. Bush, used terrorism as a means of hand-waving away civil liberties protections, including domestic spying. Barack Obama’s administration famously killed U.S. citizens without trial, an action which remains shocking to me to this day regardless of who carried it out. In his first administration, Donald Trump compromised the legitimacy of all manner of domestic and foreign politics.

So, to the question of whether the U.S. would begin using fancy spyware on citizens’ phones under any administration, the answer seems more like a question of when and not if. It is just one more tool of a long series of violations. The next Trump administration seems unlikely to be more restrained than the first but, when this happens, I bet it becomes part of the churn-and-burn media cycle. It will barely register except to those who already find this sort of stuff disturbing.

By the way, the documentary itself is fine. It is only about an hour long and is mostly a behind-the-scenes look at the reporting. I am not sure that there is anything new-for-2024 within. Farrow’s New Yorker articles about the subject are far more illuminating.

⌥ Permalink

X Has Altered the Deal

By: Nick Heer
19 October 2024 at 03:04

X on Wednesday announced a new set of terms, something which is normally a boring and staid affair. But these are a doozy:

Here’s a high-level recap of the primary changes that go into effect on November 15, 2024. You may see an in-app notice about these updates as well.

  • Governing law and forum changes: For users residing outside of the European Union, EFTA States, and the United Kingdom, we’ve updated the governing law and forum for lawsuits to Texas as specified in our terms. […]

Specifically, X says “disputes […] will be brought exclusively in the U.S. District Court for the Northern District of Texas or state courts located in Tarrant County, Texas, United States”. X’s legal address is on a plot of land shared with SpaceX and the Boring Company near Bastrop, which is in the Western District. This particular venue is notable as the federal judge handling current X litigation in the Northern District owns Tesla stock and has not recused himself in X’s suit against Media Matters, despite stepping aside on a similar case because of a much smaller investment in Unilever. The judge, Reed O’Connor, is a real piece of work from the Federalist Society who issues reliably conservative decisions and does not want that power undermined.

An investment in Tesla does not necessarily mean a conflict of interest with X, an ostensibly unrelated company — except it kind of does, right? This is the kind of thing the European Commission is trying to figure out: are all of these different businesses actually related because they share the same uniquely outspoken and influential figurehead? Musk occupies such a particularly central role in all these businesses and it is hard to disentangle him from their place in our society. O’Connor is not the only judge in the district, but it is notable the company is directing legal action to that venue.

But X is only too happy to sue you in any court of its choosing.

Another of the X terms updates:

  • AI and machine learning clarifications: We’ve added language to our Privacy Policy to clarify how we may use the information you share to train artificial intelligence models, generative or otherwise.

This is rude. It is a “clarifi[cation]” described in vague terms, and what it means is that users will no longer be able to opt out of their data being used to train Grok or any other artificial intelligence product. This appears to also include images and video, posts in private accounts and, if I am reading this right, direct messages.

Notably, Grok is developed by xAI, which is a completely separate company from X. See above for how Musk’s companies all seem to bleed together.

  • Updates to reflect how our products and services work: We’ve incorporated updates to better reflect how our existing and upcoming products, features, and services work.

I do not know what this means. There are few product-specific changes between the old and new agreements. There are lots — lots — of new ways X wants to say it is not responsible for anything at all. There is a whole chunk which effectively replicates the protections of Section 230 of the CDA, you now need written permission from X to transfer your account to someone else, and X now spells out its estimated damages from automated traffic: $15,000 USD per million posts every 24 hours.
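Taken at face value, that damages clause scales linearly with volume and time. A back-of-envelope sketch of my reading of it, not of how X actually computes anything:

```python
RATE_USD = 15_000  # X's stated estimate per million posts per 24 hours

def estimated_damages(posts_requested: int, days: float = 1.0) -> float:
    # Scale the stated rate linearly with request volume and time.
    return RATE_USD * (posts_requested / 1_000_000) * days

print(estimated_damages(10_000_000))    # 10M posts in one day -> 150000.0
print(estimated_damages(2_500_000, 7))  # 2.5M per day for a week -> 262500.0
```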

Oh, yeah, and X is making block work worse:

If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.).

The block button is one of the most effective ways to improve one’s social media experience. From removing people you never want to hear from, even for mundane reasons, to limiting someone’s ability to stalk or harass you, its expected behaviour is vital. This sucks. I bet the main reason this change was made is because Musk is blocked by a lot of people.

All of these changes seem designed to get rid of any remaining user who is not a true believer. Which brings us to today.

Sarah Perez, TechCrunch:

Social networking startup Bluesky, which just reported a gain of half a million users over the past day, has now soared into the top five apps on the U.S. App Store and has become the No. 2 app in the Social Networking category, up from No. 181 a week ago, according to data from app intelligence firm Appfigures. The growth is entirely organic, we understand, as Appfigures confirmed the company is not running any App Store Search Ads.

As of writing, Bluesky is the fifth most popular free app in the Canadian iOS App Store, and the second most popular free app in the Social Networking category. Threads is the second most popular free app, and the most popular in the Social Networking category.

X is number 74 on the top free apps list. It remains classified as “News” in the App Store because it, like Twitter, has always compared poorly against other social media apps.

⌥ Permalink

Correcting the Record on Recording

By: Nick Heer
16 October 2024 at 22:47

Dominic Wellington responded thoughtfully to speculation, including my own that a device management key for suppressing screen recording alerts in MacOS Sequoia was added in part because of employee monitoring software:

[…] I know perfectly well that these sorts of tools exist and are deployed by companies, but I suspect they are more prevalent in the sorts of lower-paid jobs that don’t rate fancy expensive Macs. This is why I don’t think employee surveillance (or test proctoring, which is Nick Heer’s other example) can be sufficient explanation for Apple walking back the frequency of this notification. Meanwhile, Zoom et al are near-universal on corporate Macs, and are going to be correspondingly closer to top of mind for administrators of Mac fleets.

This is a fair and considered response, and I think Wellington is right. Even though screen recording capabilities are widespread in employee surveillance products, I do not know that they are very popular. I oversold the likelihood of this being a reflection of that software.

⌥ Permalink

Screen Recording Alert Changes in MacOS 15.1 Beta

By: Nick Heer
15 October 2024 at 22:51

Apple in the release notes for MacOS 15.1 beta:

Applications using our deprecated content capture technologies now have enhanced user awareness policies. Users will see fewer dialogs if they regularly use apps in which they have already acknowledged and accepted the risks.

John Gruber:

Why in the world didn’t Apple take regular use of a screen-recording app into account all along?

Benjamin Brooks:

I think this is the question you ask when you have not used a Corporate Mac in the last 4-5 years. For those who are, you know that companies install applications which take screenshots and screen recordings of certain or all activities being done on the Mac. You know, for security.

When users began noticing the screen recording permissions prompt over the summer, I remember lots of people speculating Apple added it because of possible spyware or domestic violence behaviour. That is a plausible explanation.

But Brooks’ keen observation is something I, in hindsight, should also have considered, and I am kicking myself for forgetting about the possibility. I now remember linking to things like employee surveillance software and online test proctoring — applications which monitor users’ screens effectively by force, since refusing means changing jobs or forfeiting an exam. I believe this is supported by — and casts a new light upon — a device management key available to system administrators for suppressing those permissions prompts.

⌥ Permalink

WSJ: U.S. Wiretap Systems Targeted in China-Linked Hack

By: Nick Heer
9 October 2024 at 02:51

Sarah Krouse, Dustin Volz, Aruna Viswanatha, and Robert McMillan, Wall Street Journal (probably paywalled; sorry):

A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers, potentially accessing information from systems the federal government uses for court-authorized network wiretapping requests.

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

Zack Whittaker, TechCrunch:

The 30-year-old law that set the stage for recent backdoor abuse is the Communications Assistance for Law Enforcement Act, or CALEA, which became law in 1994 at a time when cell phones were a rarity and the internet was still in its infancy.

CALEA requires that any “communications provider,” such as a phone company or internet provider, must provide the government all necessary assistance to access a customer’s information when presented with a lawful order. In other words, if there is a means to access a customer’s data, the phone companies and internet providers must provide it.

Bruce Schneier:

For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers.

Riana Pfefferkorn:

It is not the ‘90s anymore, when CALEA got enacted, the law requiring telecom wiretappability for law enforcement. China and Russia and DPRK are formidable cyber foes now. DOJ, FBI, etc. want to change CALEA so that encrypted apps like Signal or WhatsApp aren’t exempt from it anymore. But this hack shows that if anything, the law needs to change in the *other* direction. The hack needs to be a wake-up call to law enforcement that as long as they keep opposing encryption for communications, they’re enabling China to smack us in the face with our own hand while saying “stop hitting yourself!”

According to a 2016 paper from Public Safety Canada, “Australia, the U.S., the UK and many other European nations require CSPs [Communications Service Providers] to have an interception capability”; it also notes Canada does not. Such a requirement is understandable from an investigative perspective. But, as Pfefferkorn says, capabilities like these have been exploited before, and it will happen again. These are big targets and there are no safe backdoors.

That brings me — for the second time today — to the need for comprehensive privacy legislation basically everywhere but, in particular, in the United States, the hub of the world’s communications. Protecting private data would dramatically curtail this kind of access violation by removing backdoors, restrict one aspect of TikTok panic, and reduce the exploitation of our behavioural data by creepy ad tech businesses. It is not a panacea and I am sure there are worrisome side effects for law enforcement, but it would likely be more effective than tackling these problems on an individual basis.

⌥ Permalink

Apple as Godzilla

By: Nick Heer
9 October 2024 at 00:16

I have to say, it is quite an odd thing to be listening to a podcast and hear one’s own name. This recently happened to me on the latest episode of “Upgrade”. It feels like I am about to be called to the principal’s office or something. But I was not.

Myke Hurley and Jason Snell discussed an article I wrote about the more granular control available to users in iOS 18 when apps request access to their contacts. (My piece is a response to a New York Times story; Snell also linked to both with some commentary.) Their chat centres on Apple’s scale and influence. Even little decisions the company makes are capable of transforming entire industries.

Hurley raises an apt comparison to App Tracking Transparency, which is exactly what I was thinking about when I wrote my piece. I am similarly unsympathetic to corporate empires built on illicitly obtained data. If you cannot make money when users are given a choice to consent, your business model probably sucks. But I do think it is concerning how powerful both of the major players were in the aftermath of that announcement: Meta, for hoarding behavioural data on billions of people; and Apple, for its ability to give users options.

I see parallels to Google’s power over the web. The near-uniform layout and structure of webpages is thanks to Google’s specific suggestions for improving the likelihood of ranking higher. The main difference is Google’s suggestions are sort of optional; if a website’s owner does not care much about search traffic, they can do whatever they want. The prompts on iOS, on the other hand, are baked into the system.

As Snell says, these apps “have to make the case” for granting permission. I do not think that is such a bad thing, and I am amenable to their suggestion of Apple’s built-in apps being placed on a level playing field. I think a lot of this would be more predictable if privacy laws were stronger. A basic level of privacy protections should not be a competitive advantage, nor should users be required to navigate either unceasing permissions dialogs or terms of service agreements to understand the myriad ways their personal information is being exploited — because their personal information should not be exploited.

⌥ Permalink

Third-Party Cookies Have Got to Go

By: Nick Heer
30 July 2024 at 02:26

Anthony Chavez, of Google:

[…] Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We’re discussing this new path with regulators, and will engage with the industry as we roll this out.

Oh good — more choices.

Hadley Beeman, of the W3C’s Technical Architecture Group:

Third-party cookies are not good for the web. They enable tracking, which involves following your activity across multiple websites. They can be helpful for use cases like login and single sign-on, or putting shopping choices into a cart — but they can also be used to invisibly track your browsing activity across sites for surveillance or ad-targeting purposes. This hidden personal data collection hurts everyone’s privacy.

All of this data collection only makes sense to advertisers in the aggregate, but it only works because of specifics: specific users, specific webpages, and specific actions. Privacy Sandbox is imperfect but Google could have moved privacy forward by ending third-party cookies in the world’s most popular browser.
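To make the mechanism concrete: a cross-site tracker is, at its core, just a third-party endpoint that sets one cookie and reads it back from every site that embeds it. A minimal sketch of that core, using only the standard library and a placeholder domain:

```python
# Minimal cross-site tracker: embedded as an <img> or <script> on many
# unrelated pages, it assigns each browser an ID cookie, then logs which
# page (via the Referer header) that browser was seen on.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

class Tracker(BaseHTTPRequestHandler):
    def do_GET(self):
        uid = None
        for part in self.headers.get("Cookie", "").split(";"):
            name, _, value = part.strip().partition("=")
            if name == "uid":
                uid = value
        self.send_response(200)
        if uid is None:
            uid = uuid.uuid4().hex
            # SameSite=None; Secure is what lets a cookie ride along in
            # third-party contexts in modern browsers.
            self.send_header("Set-Cookie", f"uid={uid}; SameSite=None; Secure")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        # The same uid appearing with many different Referers is a
        # browsing history, assembled without the user doing anything.
        print(f"saw {uid} on {self.headers.get('Referer', 'unknown page')}")

HTTPServer(("", 8000), Tracker).serve_forever()
```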

⌥ Permalink

⌥ Engineering Consent

By: Nick Heer
30 July 2024 at 02:02

Anthony Ha, of TechCrunch, interviewed Jean-Paul Schmetz, CEO of Ghostery, and I will draw your attention to this exchange:

AH I want to talk about both of those categories, Big Tech and regulation. You mentioned that with GDPR, there was a fork where there’s a little bit of a decrease in tracking, and then it went up again. Is that because companies realized they can just make people say yes and consent to tracking?

J-PS What happened is that in the U.S., it continued to grow, and in Europe, it went down massively. But then the companies started to get these consent layers done. And as they figured it out, the tracking went back up. Is there more tracking in the U.S. than there is in Europe? For sure.

AH So it had an impact, but it didn’t necessarily change the trajectory?

J-PS It had an impact, but it’s not sufficient. Because these consent layers are basically meant to trick you into saying yes. And then once you say yes, they never ask again, whereas if you say no, they keep asking. But luckily, if you say yes, and you have Ghostery installed, well, it doesn’t matter, because we block it anyway. And then Big Tech has a huge advantage because they always get consent, right? If you cannot search for something in Google unless you click on the blue button, you’re going to give them access to all of your data, and you will need to rely on people like us to be able to clean that up.

The TechCrunch headline summarizes this by saying “regulation won’t save us from ad trackers”, but I do not think that is a fair representation of this argument. What it sounds like, to me, is that regulations should be designed more effectively.

The E.U.’s ePrivacy Directive and GDPR have produced some results: tracking is somewhat less pervasive, people have a right to data access and portability, and businesses must give users a choice. That last thing is, as Schmetz points out, also its flaw, and one it shares with something like App Tracking Transparency on iOS. Apps affected by the latter are not permitted to keep asking if tracking is denied, but they do similarly rely on the assumption a user can meaningfully consent to a cascading system of trackers.

In fact, the similarities and differences between cookie banner laws and App Tracking Transparency are considerable. Both require some form of consent mechanism immediately upon accessing a website or an app, assuming a user can provide that choice. Neither can promise tracking will not occur should a user deny the request. Both are interruptive.

But cookie consent laws typically offer users more information; many European websites, for example, enumerate all their third-party trackers, while App Tracking Transparency gives users no visibility into which trackers will be allowed. An App Tracking Transparency choice is remembered forever unless a user removes and reinstalls the app, while websites can ask for cookie consent on each visit, though that repetition may sometimes be a consequence of using Safari; it is hard to know.

App Tracking Transparency also has a system-wide switch to opt out of all third-party tracking. There used to be something similar in web browsers, but compliance was entirely optional. Its successor effort, Global Privacy Control, is sadly not as widely supported as it ought to be, but it appears to have legal teeth.
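At the protocol level, Global Privacy Control is refreshingly small: a participating browser sends a Sec-GPC: 1 request header (and exposes navigator.globalPrivacyControl to scripts), and a site honouring it treats that as an opt-out of sale or sharing. A minimal server-side sketch, with a placeholder tracker URL:

```python
# Honouring Global Privacy Control server-side: the whole signal is one
# request header, so respecting it is a one-line check.
def gpc_opt_out(headers: dict) -> bool:
    """True if the browser sent the GPC opt-out signal."""
    return headers.get("Sec-GPC") == "1"

def render_page(include_trackers: bool) -> str:
    tag = '<script src="https://tracker.example/t.js"></script>' if include_trackers else ""
    return f"<html><body>content{tag}</body></html>"

def handle_request(headers: dict) -> str:
    # Treat the signal as a do-not-sell/do-not-share request.
    return render_page(include_trackers=not gpc_opt_out(headers))

print(handle_request({"Sec-GPC": "1"}))   # page without the tracker
print(handle_request({}))                 # page with the tracker
```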

Both of these systems have another important thing in common: neither are sufficiently protective of users’ privacy because they burden individuals with the responsibility of assessing something they cannot reasonably comprehend. It is patently ridiculous to put the responsibility on individuals to mitigate a systemic problem like invasive tracking schemes.

There should be a next step to regulations like these because user tracking is not limited to browsers where Ghostery can help — if you know about it. A technological response is frustrating, and it is unclear to me how effective it is on its own. Regulation alone clearly cannot solve this problem, but neither can browser extensions. We need both.

⌥ The Fight for End-to-End Encryption Is Worldwide

By: Nick Heer
22 June 2024 at 01:45

Since 2022, the European Union has been trying to pass legislation requiring digital service providers to scan for and report CSAM as it passes through their services.

Giacomo Zandonini, Apostolis Fotiadis, and Luděk Stavinoha, Balkan Insight, with a good summary in September:

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

[…]

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

This is going to require a little back-and-forth, and I will pick up the story with quotations from Matthew Green’s introductory remarks to a panel before the European Internet Services Providers Association in March 2023:

The only serious proposal that has attempted to address this technical challenge was devised — and then subsequently abandoned — by Apple in 2021. That proposal aimed only at detecting known content using a perceptual hash function. The company proposed to use advanced cryptography to “split” the evaluation of hash comparisons between the user’s device and Apple’s servers: this ensured that the device never received a readable copy of the hash database.

[…]

The Commission’s Impact Assessment deems the Apple approach to be a success, and does not grapple with this failure. I assure you that this is not how it is viewed within the technical community, and likely not within Apple itself. One of the most capable technology firms in the world threw all their knowledge against this problem, and were embarrassed by a group of hackers: essentially before the ink was dry on their proposal.
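For context on what “detecting known content using a perceptual hash function” means mechanically: unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or re-encoded, so matching is done by distance rather than equality. Here is a toy average-hash sketch (nothing like the NeuralHash Apple proposed, and omitting the cryptographic split between device and server that Green describes):

```python
# Toy perceptual hash: an 8x8 average hash. Near-duplicate images produce
# nearby hashes, so "known content" is matched by Hamming distance.
# Real systems (e.g. Apple's proposed NeuralHash) are far more robust.
from PIL import Image  # pip install Pillow

def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > avg)  # 1 bit per pixel vs. the mean
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

KNOWN_HASHES = {0x8F3C0000FFFF1234}  # hypothetical database of known images

def matches_known(path: str, threshold: int = 5) -> bool:
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

Any function this lossy admits collisions by design, which is how researchers were able to produce NeuralHash collisions almost as soon as the proposal was public, the embarrassment Green refers to.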

Daniel Boffey, the Guardian, in May 2023:

Now leaked internal EU legal advice, which was presented to diplomats from the bloc’s member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The European Parliament in a November 2023 press release:

In the adopted text, MEPs excluded end-to-end encryption from the scope of the detection orders to guarantee that all users’ communications are secure and confidential. Providers would be able to choose which technologies to use as long as they comply with the strong safeguards foreseen in the law, and subject to an independent, public audit of these technologies.

Joseph Menn, Washington Post, in March, reporting on the results of a European court ruling:

While some American officials continue to attack strong encryption as an enabler of child abuse and other crimes, a key European court has upheld it as fundamental to the basic right to privacy.

[…]

The court praised end-to-end encryption generally, noting that it “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.”

This is not directly about the proposed CSAM measures, but it is precedent for European regulators to follow.

Natasha Lomas, TechCrunch, this week:

The most recent Council proposal, which was put forward in May under the Belgian presidency, includes a requirement that “providers of interpersonal communications services” (aka messaging apps) install and operate what the draft text describes as “technologies for upload moderation”, per a text published by Netzpolitik.

Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

Meredith Whittaker, CEO of Signal, issued a PDF statement criticizing the proposal:

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Patrick Breyer, of Germany’s Pirate Party:

Only Germany, Luxembourg, the Netherlands, Austria and Poland are relatively clear that they will not support the proposal, but this is not sufficient for a “blocking minority”.

Ella Jakubowska on X:

The exact quote from [Věra Jourová] the Commissioner for Values & Transparency: “the Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children”

Věra Jourová on X, some time later:

Let me clarify one thing about our draft law to detect online child sexual abuse #CSAM.

Our proposal is not breaking encryption. Our proposal preserves privacy and any measures taken need to be in line with EU privacy laws.

Matthew Green on X:

Coming back to the initial question: does installing surveillance software on every phone “break encryption”? The scientist in me squirms at the question. But if we rephrase as “does this proposal undermine and break the *protections offered by encryption*”: absolutely yes.

Maïthé Chini, the Brussels Times:

It was known that the qualified majority required to approve the proposal would be very small, particularly following the harsh criticism of privacy experts on Wednesday and Thursday.

[…]

“[On Thursday morning], it soon became clear that the required qualified majority would just not be met. The Presidency therefore decided to withdraw the item from today’s agenda, and to continue the consultations in a serene atmosphere,” a Belgian EU Presidency source told The Brussels Times.

That is a truncated history of this piece of legislation: regulators want platform operators to detect and report CSAM; platforms and experts say that will conflict with security and privacy promises, even if media is scanned prior to encryption. This proposal may be specific to the E.U., but you can find similar plans to curtail or invalidate end-to-end encryption around the world.

I selected English-speaking areas because English is the language I can read, but I am sure there are more regions facing threats of their own.

We are not served by pretending this threat is limited to any specific geography. The benefits of end-to-end encryption are being threatened globally. The E.U.’s attempt may have been pushed aside for now, but another will rise somewhere else, and then another. It is up to civil rights organizations everywhere to continue arguing for the necessary privacy and security protections offered by end-to-end encryption.

Inside the Copilot Recall ‘Disaster’

By: Nick Heer
3 June 2024 at 17:53

Kevin Beaumont:

At a surface level, it [Recall] is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago.

In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people in the real world, not executive world, is basically like punching customers in the face. The echo chamber effect inside Microsoft is real here, and oh boy… just oh boy. It’s a rare misfire, I think.

Via Eric Schwarz:

This fact that this feature is basically on by default and requires numerous steps to disable is going to create a lot of problems for people, especially those who click through every privacy/permission screen and fundamentally don’t know how their computer actually operates — I’ve counted way too many instances where I’ve had to help people find something and they have no idea where anything lives in their file system (mostly work off the Desktop or Downloads folders). How are they going to even grapple with this?

The problems with Recall remind me of the minor 2017 controversy around “brassiere” search results in Apple’s Photos app. Like Recall, it is entirely an on-device process with some security and privacy protections. In practice, automatically cataloguing all your photos which show a bra is kind of creepy, even if it is being done only with your own images on your own phone.

⌥ Permalink

⌥ Killing Time for TikTok

By: Nick Heer
23 May 2024 at 16:24

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It is an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is a very popular position in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.
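Mechanically, my reading of the ownership test is a simple aggregate threshold. Here is a sketch of that reading (not legal analysis; the statutory definition of a “foreign adversary controlled application” has more prongs than this):

```python
# Sketch of the divest-or-ban ownership test as I read it: illegal to
# distribute unless the owner sits outside the covered nations and
# covered-nation entities hold at most a twenty percent combined stake.
COVERED_NATIONS = {"China", "Iran", "North Korea", "Russia"}

def adversary_controlled(stakes: dict, hq_country: str) -> bool:
    """stakes maps country -> fractional ownership (0.0 to 1.0)."""
    covered_share = sum(v for c, v in stakes.items() if c in COVERED_NATIONS)
    return hq_country in COVERED_NATIONS or covered_share > 0.20

print(adversary_controlled({"China": 0.15, "Iran": 0.10}, "France"))  # True
print(adversary_controlled({"China": 0.20}, "United States"))         # False
```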

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China. ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. So it may look funny for a Canadian to explain their views of what the U.S. ought to do in a case of foreign political interference. This is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to shaking the U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.
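The Journal’s finding implies a feedback loop simple enough to sketch: treat watch time as the reward signal and sample more from whatever holds attention. A deliberately crude illustration, with invented categories, and certainly not TikTok’s actual system:

```python
import random
from collections import defaultdict

# Watch-time-driven recommender sketch: the only signal is how much of
# each video a user watched; categories that hold attention get shown more.
class ForYou:
    def __init__(self, categories):
        self.scores = defaultdict(lambda: 1.0)  # optimistic prior per category
        self.categories = categories

    def next_category(self) -> str:
        weights = [self.scores[c] for c in self.categories]
        return random.choices(self.categories, weights=weights)[0]

    def observe(self, category: str, watched_fraction: float):
        # Exponential moving average of (scaled) completion rate.
        self.scores[category] = 0.8 * self.scores[category] + 0.2 * watched_fraction * 5

feed = ForYou(["dogs", "cooking", "politics"])
for _ in range(100):
    c = feed.next_category()
    # Simulated user: watches dog videos fully, skips everything else.
    feed.observe(c, 1.0 if c == "dogs" else 0.1)
print(max(feed.scores, key=feed.scores.get))  # almost certainly 'dogs'
```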

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if they had found TikTok was boosting topics and videos on behalf of the Chinese government, they would have said so — so they did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite of how the story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like a company that capitulates. Yes, it is worth being skeptical of this reporting. But if TikTok had acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding that searches for hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results at all. That is not unique to TikTok: platforms regularly use crude search term filtering to restrict discovery for lots of reasons, as the sketch below illustrates. I would not be surprised if bias or self-censorship were to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear to anyone why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.
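As an aside on how blunt that kind of filtering tends to be, here is a hedged sketch (my own construction, not any platform’s actual code) of a simple blocklist applied at search time. Note how posts carrying a blocked tag vanish even from unrelated queries:

```python
# Hypothetical blocklist; real platforms' rules are more elaborate,
# but the failure modes are similar.
BLOCKED_TERMS = {"freexinjiang"}

def search(posts, query):
    q = query.lower().lstrip("#")
    if q in BLOCKED_TERMS:
        return []  # the entire query returns nothing, whatever the posts say
    # Posts tagged with any blocked term are dropped from benign queries too.
    return [p for p in posts if not (p["tags"] & BLOCKED_TERMS)]

posts = [
    {"id": 1, "tags": {"travel", "xinjiang"}},
    {"id": 2, "tags": {"news", "freexinjiang"}},
]
print(search(posts, "#FreeXinjiang"))  # -> []
print(search(posts, "#travel"))       # -> only post 1; post 2 disappears
```

A filter like this is cheap to deploy and indiscriminate by design, which is partly why “no results” is such weak evidence of deliberate, targeted censorship.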

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags show ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of roughly 142:1. If #ThrowbackThursday were part of this study, the disparity between the two platforms would rank similarly to that of #Tiananmen, among the greatest in the Institute’s report.
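The base-rate problem is easy to check with quick arithmetic. The figures below are the ones cited above; the comparison logic is mine:

```python
# Lifetime #ThrowbackThursday totals cited above, as of writing.
instagram_posts = 55_300_000
tiktok_posts = 390_000

ratio = instagram_posts / tiktok_posts
print(f"{ratio:.0f}:1")  # -> 142:1

# The NCRI report expected Instagram to have only 1.5-2.0x as many posts
# as TikTok based on user base size. Against that yardstick, even a benign
# meme hashtag looks "suppressed" on TikTok by two orders of magnitude,
# which says more about platform age and hashtag culture than censorship.
print(ratio / 2.0)  # -> ~71x the expected upper-bound disparity
```

Any hashtag that predates TikTok or simply never caught on there will show the same lopsided ratio, no censor required.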

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users, but did not define exactly what qualifies. It said it limits posts on social issues because of user demand, though these types of posts have proven difficult to moderate. A difference in which posts are surfaced on each platform for specific search terms is not necessarily a reflection of government pressure, deliberate or otherwise. Besides, it is not as though there is no evidence of straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though that is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post typically gets only a few hundred views. Some have been viewed thousands of times, others as few as thirteen times as of writing — on a platform known for exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful for understanding those adversaries better. Merely making propaganda available — even promoting it — is both a risk of generous speech permissions and one of their benefits.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have emerged, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one, since polls so often find that people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new; it is true of basically all positions hostile to democracy, from domestic nationalist groups in Canada and the U.S. to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and in its broader consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets far more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring that social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? Such a regime either fundamentally upsets the promise of the internet, or it amounts to mandating the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent, which openly censor the web, have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about how their policies are received by adversarial states, but the portrayal matters: it is a real-life “we are not so different, you and I” setup which, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where the Chinese government boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”, but their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, and they can only guess at its purpose.

The problem is that addressing this requires better media literacy and better recognition of propaganda, which is a notoriously daunting task. We are exposed to an ever more destabilizing cocktail of facts and fiction, while trust in the experts and institutions that might help us sort it out keeps declining. Treating TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and by American Edge beginning in 2020 about how regulations would cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously benefit Meta’s business.

If you wanted to boost the industry — and I am not saying Malik is doing so — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals in this supposed race. It is not the framing one would use to dissuade people from the notion that this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, and it trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After the sale, with Grindr now safe in U.S. hands, a priest was outed thanks to data the company had been selling since before it was acquired by the Chinese firm, and it is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), recreating the very risk that triggered the divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence, for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence; what we actually have is a fear that something bad could happen and that there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including the ability to opt out entirely. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well-scrutinized app, and reporters have found conduct that understandably raises suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could acquire — and probably are acquiring — without relying on TikTok. None of them has shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin alleging TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet it waits until the third-to-last paragraph to acknowledge that “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate in the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concern is a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile — and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of it which cross into moral panic territory — a new Red Scare. I have tried to frame this as a more reasonable discussion, one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., which was enacted without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems and it is inexcusable to think they would get lost in the distraction of whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

Microsoft Recall

By: Nick Heer
23 May 2024 at 01:29

Yusuf Mehdi of Microsoft:

Now with Recall, you can access virtually what you have seen or done on your PC in a way that feels like having photographic memory. Copilot+ PCs organize information like we do – based on relationships and associations unique to each of our individual experiences. This helps you remember things you may have forgotten so you can find what you’re looking for quickly and intuitively by simply using the cues you remember.

[…]

Recall leverages your personal semantic index, built and stored entirely on your device. Your snapshots are yours; they stay locally on your PC. You can delete individual snapshots, adjust and delete ranges of time in Settings, or pause at any point right from the icon in the System Tray on your Taskbar. You can also filter apps and websites from ever being saved. You are always in control with privacy you can trust.

Recall is the kind of feature I have always wanted but am not sure I would ever enable. Setting aside Microsoft’s recent high-profile security problems, there is a new kind of risk in keeping track of everything you see on your computer — bank accounts, lists of passwords, messages, work documents and other things sent by third parties who expect them to remain confidential, credit card information — for a rolling three-month window.
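To make the shape of that risk concrete, here is a hedged sketch (entirely my own construction, not Microsoft’s implementation) of a Recall-style capture loop: periodic snapshots, text extraction, and a local searchable store. Every name, path, and interval here is a hypothetical stand-in.

```python
import sqlite3
import time

RETENTION_SECONDS = 90 * 24 * 3600  # a rolling ~three-month window

db = sqlite3.connect("snapshots.db")  # hypothetical local store
db.execute("CREATE TABLE IF NOT EXISTS snaps (ts REAL, text TEXT)")

def capture_screen_text():
    # Stand-in for a real screenshot-plus-OCR step.
    return "bank balance page; password reset email; private messages"

def snapshot_once():
    db.execute("INSERT INTO snaps VALUES (?, ?)",
               (time.time(), capture_screen_text()))
    # Expire anything older than the retention window.
    db.execute("DELETE FROM snaps WHERE ts < ?",
               (time.time() - RETENTION_SECONDS,))
    db.commit()

for _ in range(3):
    snapshot_once()

# Any process running as this user can run the same query malware would:
for ts, text in db.execute("SELECT * FROM snaps WHERE text LIKE '%password%'"):
    print(ts, text)
```

However carefully such a store is access-controlled, it concentrates months of everything on screen into one predictable, user-readable target.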

Microsoft says all the right things about this database. It says it is all stored locally, never shared with Microsoft, access controlled, and user configurable. And besides, screen recorders have existed forever, and keeping local copies of sensitive information has always been a balance of risk.

But this is a feature that creates a rolling record of just about everything. It somehow feels more intrusive than a web browser’s history and riskier than a password manager. The Recall directory will be a new favourite target for malware. Oh and, in addition to Microsoft’s own security issues, we have already seen a massive breach of LastPass. Steal now, solve later.

This is a brilliant, deeply integrated service. It is the kind of thing I often want when I am trying to remember some article I read but cannot quite find with a standard search engine. Yet even though I already have my credit cards and email and passwords stored on my computer, something about a screenshot timeline is a difficult mental hurdle to clear — not entirely rationally, but not irrationally either.

⌥ Permalink
