Securing the indie web


#1

Without sound security practices, a company's privacy-preserving policies become irrelevant when faced with a motivated threat actor. This is why GDPR holds companies accountable for their data breaches.

I expect many big companies to rethink their security practices in light of GDPR, which will be one of the major privacy benefits for citizens under the new regulation. But Google, who are considered among the worst offenders against privacy, already achieve digital security far surpassing industry best practices, while employing many of the most capable and renowned security professionals.

Maciej Cegłowski and Tech Solidarity argue that most citizens, journalists, and even activists are best protected by certain Google products like Gmail and ChromeOS when paired with good personal security practices like U2F security tokens. The core assumption here seems to be that your threat model should not include personalised attacks by the NSA if accounting for those would mean weakening your security posture against ordinary hackers.

Having tried out Google Advanced Protection and having made an effort to tighten my other web accounts' authentication, I can say that Google certainly offer the best account security I've seen. For example, no other email provider seems to offer U2F or a comparably phishing-resistant authentication method. Even ProtonMail, who position themselves around privacy and security, only offer regular 2FA. Cegłowski also specifically distrusts their security.

While I continue to support ProtonMail, the problem is much worse with smaller indie players. The friendly no-nonsense domain registrar I used to use, iwantmyname.com, only allows Authy-based 2FA, which is still susceptible to attacks on SMS infrastructure. Micro.blog only has email-based login without any additional account security. And Google Drive and Dropbox are the only cloud storage providers that support U2F.
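To make that distinction concrete: the "regular 2FA" I keep referring to means time-based one-time codes (TOTP), which is what an app like Authy generates. Here is a minimal Python sketch using the pyotp library, chosen purely for illustration (none of the providers above necessarily use it), that also shows why such codes can be phished in a way U2F cannot:

```python
# Minimal sketch of "regular 2FA" (TOTP, RFC 6238) using the pyotp library.
# Illustrative only; not how any particular provider implements it.
import pyotp

# Enrolment: the server generates a shared secret once and shows it to the
# user, usually as a QR code scanned into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="Example Service"))

# The app derives a six-digit code from the secret and the current time.
code = totp.now()

# Login: the server checks the submitted code against the same secret.
# The weakness: the code is just a number, so a phishing page can ask for it
# and relay it to the real site within its ~30-second validity window.
# U2F avoids this because the token's signature is bound to the site's origin.
print("Code accepted:", totp.verify(code))
```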

The point about ChromeOS is also hard to refute. It is the only desktop OS that, like iOS, verifies its boot chain with a hardware-rooted chain of trust, keeps the system partition read-only, prevents rollback to old and insecure OS versions, and blocks execution of untrusted code. By design, it is even more secure than macOS. A similar point seems to hold true for the Chrome browser, which still has better sandboxing than Safari; Firefox and Tor Browser are even worse.

So, what's the way forward? Google products track you by default, but every alternative has worse account security. I often try out beautiful indie web projects, but abandon them because they value freedom and decentralisation over security engineering. Many of them couldn't live up to my security expectations if they tried, because securing some platforms requires resources or knowledge that only companies like Google have. There are privacy products that succeed without these problems, such as DuckDuckGo and Signal, and EFF projects like Privacy Badger and HTTPS Everywhere work inside the Chrome browser to improve privacy. But I fear that in other areas competition is all but impossible, leaving us only with the option of regulating companies like Google.


#2

Security definitely depends on your threat model. I reckon far more people are at risk of having their privacy invaded by Google than are likely to be threatened by hackers or governments if they use smaller indie services.

(Edit: I know this is a rather brief response to your long considered post, but I can’t help but question Google being chosen as the best possible option for any threat model.)


#3

My main reason for posting was that I’ve been so impressed by Google’s security team, but I share your sentiment:

I would love to see more advocates and researchers in security and privacy come together, because I only ever see approaches that are holistic about one of the two, never both.

Obviously, Aral and you are making a positive impact no matter what, but I would value your comments if you have the time. When I get to it, I’ll talk to some security people as well and see if it gets me anywhere.

I should probably source this, but my assumption was that ransomware and phishing are a common enough threat to have a significant monetary impact on large swaths of people. In cases where crypto-locker malware has hit hospitals and infrastructure, I expect it has even cost lives. In terms of web accounts, the Ashley Madison leak drove some victims to suicide. I would expect the same risk for a subset of users of any email provider, social network, or cloud storage service. Not only do they have “something to hide”, they have something suitable for extortion, which adds a small risk of hacking to their threat model.

A company I work with has spent hundreds of hours of our time and thousands of dollars cleaning up after their website's CMS was hacked and filled with malware. U2F alone would have saved them, as would the automatic intrusion detection offered by big-name web hosts. Lesser-known software and services can be secured, but doing so requires a level of security training and awareness from employees that in many cases cannot be expected.

This is why I see it as misguided when technology rights advocates suggest that a non-geek use Tails, Firefox, Tutanota, or Mastodon, or set up their own server.

I would be interested to hear whether you would accept a small risk of horrible security incidents, as mentioned above, in order to support freedoms and privacy and to avoid social cooling. I have found no moral framework for weighing these against each other.


#4

I know it's a bit of a late response, but FastMail has already supported U2F for about a year and a half. FastMail, along with DuckDuckGo and Telegram, has allowed me to avoid Google for most of my activities for some years now.


#5

Just put an underscore (_) at the start of all your private code, and a double underscore if you're Pythonic and worried about conflicts.
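In case the underscore conventions are unfamiliar: a single leading underscore is only a "please treat this as private" signal, while a double leading underscore triggers Python's name mangling, which is what actually avoids accidental conflicts in subclasses. A tiny sketch with made-up names:

```python
class Blog:
    def __init__(self):
        self._draft = "single underscore: private by convention only"
        self.__token = "double underscore: stored as _Blog__token"

    def _render(self):
        # Nothing enforces privacy here; the underscore is just a convention.
        return self._draft


class IndieBlog(Blog):
    def __init__(self):
        super().__init__()
        # Name mangling keeps this separate from Blog's __token:
        # it is stored as _IndieBlog__token, so there is no conflict.
        self.__token = "subclass value"


b = IndieBlog()
print(b._draft)              # accessible, but the underscore says "hands off"
print(b._Blog__token)        # the base class's mangled attribute
print(b._IndieBlog__token)   # the subclass's own mangled attribute
# print(b.__token)           # AttributeError: no plain __token attribute exists
```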


#7

good work on avoiding a brand :wink:


#8

What do you mean?

Another sentence, because my post needs to be at least 20 characters.


#9

I was just being flippant because of the indieness of the project, kinda. Don’t take me too seriously anyway :slight_smile: