> this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or a rack at their workplace and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
EDIT: F-Droid received a $400,000 grant from a single source this year ( https://f-droid.org/2025/02/05/f-droid-awarded-otf-grant.htm... ) so now I’m even more confused about how they decided to hand this server to a single team member to host under undisclosed conditions instead of paying basic colocation expenses.
>We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services.
Not clear if "contributor" is a person or an entity. The "hosting services" part makes it sound more like a company than a natural person.
Yup. But the same can happen in shared hosting/colo/aws just as easily if only one person controls the keys to the kingdom. I know of at least a handful of open source projects that had to essentially start over because the leader went AWOL or a big fight happened.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
> if only one person controls the keys to the kingdom
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
Colo is when you want to bring your own hardware, not when you want physical access to your devices. Many (most?) colo datacenters are still secure sites that you can't visit.
Every colo I've visited has a system for allowing physical access to our equipment, generally during specific operating hours with a secure access card.
I've only ever seen that at data centers that offer colo as more of a side service or cater to little guys who are coloing by the rack unit. All of the serious colocation services I've used or quoted from offer 24/7 site access.
Basically anywhere with cage or cabinet colocation is going to have site access, because those delineations only make sense to restrict on-site human access.
To be quite honest I've never seen a colo that didn't offer access at all. The cheapest locations may require a prearranged escort because they don't have any way to restrict access on the floors, but by the time you get to 1/4 rack scale you should expect 24/7 access as standard.
> 400K would go -fast- if they stuck to a traditional colo setup.
I don’t know where you’re pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I’ve used.
Colocation is not that expensive. I’m not understanding how you think $400K would disappear “fast” unless you think it’s thousands of dollars per month?
For reference, in the US at least, there was/is a company called Joes Data Center in KC who would colo a 1U for $30 or $40 a month. I'd used them for years before not needing it anymore, so not some fly-by-night company (despite the name).
At that rate, that would buy you nearly 1000 years of hosting.
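As a rough back-of-the-envelope sketch of that math (the $35/month rate is the figure above; the 4% yield is purely an assumption for illustration):

```python
# Illustrative only: $35/month comes from the comment above; the 4% annual
# yield is an assumed figure, not anything F-Droid has stated.
grant = 400_000            # USD
colo_per_month = 35        # USD for a cheap 1U colo
annual_yield = 0.04        # assumed conservative return on the grant

years_of_colo = grant / (colo_per_month * 12)          # ~952 years
interest_per_month = grant * annual_yield / 12         # ~$1,333/month

print(f"{years_of_colo:.0f} years of colo if the principal is spent directly")
print(f"${interest_per_month:.0f}/month in interest vs ${colo_per_month}/month colo")
```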
I was trying to avoid naming exact prices because it becomes argument fodder, but locally I can get good quality colo for $50/month and excellent quality colocation with high bandwidth and good interconnects for under $100/month for 1U.
I really don’t know where the commenter above was getting the idea that $400K wouldn’t last very long
Ugh. This 100% shows how janky and unmaintained their setup is.
All the hand waving and excuses around global supply chains, quotes, etc... it took them quite a while to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, and the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
As someone that has run many volunteer open source communities and projects for more than 2 decades, I totally get how big "small" wins like this are.
The internet is run on binaries compiled on servers in random basements, and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
"I understand this is a volunteer effort, but it's not a good look."
I would agree that it is not a good look for this society to lament so much about the big evil corporations while investing so little in the free alternatives.
I don't have a problem with an open source project I use (and I do use F-Droid) hosting a server in a basement. I do have a problem with having the entire project hosted on one server in a basement, because it means that the entire project goes down if that basement gets flooded or the house burns down or the power goes out for an extended period of time, etc.
Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today and it doesn't cost that much money to get a substantial amount of redundancy, without being dependent on any single big company.
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
If I were running a volunteer project, I would be dumping thousands a month into top-tier hosting across multiple datacenters around the world with global failover.
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, or it could be in a professional server room at the company run by the mentioned contributor.
100%. Just as an example I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th amendment protections so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC.
> Also 4th amendment protections so no one gets access without me knowing about it.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC.
> Also 4th amendment protections so no one gets access without me knowing about it
A home setup might be able to rival or beat an “edge” enterprise network closet.
It’s not going to even remotely rival a tier 3/4 data center in any way.
The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental redundancy in their homelab. And they certainly aren't building attack-resistant perimeter fences and gates around their homes, unless they're home-labbing on a compound in a war-torn country.
I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much!
The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private, in conditions they won’t name, under the control of a single member. Typically colo providers are used for this.
Eh. It's just a different set of trade-offs unless you start doing things super-seriously like Let's Encrypt.
With F-Droid, their main strength has always been reproducible builds. We ideally just need to start hosting a second F-Droid server somewhere else and then compare the results.
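A minimal sketch of what that cross-check could look like, assuming two independently operated servers publish the same artifact (the URLs and package name below are hypothetical placeholders, not F-Droid's actual repo layout):

```python
import hashlib
import urllib.request

# Hypothetical locations of the same APK as published by two independently
# operated F-Droid-style servers; real repo URLs and file names will differ.
SOURCES = [
    "https://repo-a.example.org/repo/org.example.app_100.apk",
    "https://repo-b.example.org/repo/org.example.app_100.apk",
]

def sha256_of(url: str) -> str:
    """Download the artifact and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

hashes = {url: sha256_of(url) for url in SOURCES}
if len(set(hashes.values())) == 1:
    print("OK: both servers published bit-identical artifacts")
else:
    for url, h in hashes.items():
        print(f"MISMATCH: {url} -> {h}")
```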
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Yikes. They don't need a "special arrangement" for those requirements. This is the bare minimum at many professionally run colocation data centers. There is not a security requirement that can't be met by a data center -- being secure to customer requirements is a critical part of their business.
Maybe the person who wrote that is only familiar with web hosting services or colo-by-the-rack-unit type services where remote-hands services are more commonly relied on. But they don't need to use these services. They can easily get a locked cabinet (or even just a 1/4 cabinet) only they could access.
A super duper secure locked cabinet accessible only to them or anyone with a bolt cutter.
You want to host servers on your own hardware? Uh, yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
Colocation is when you use your own hardware. That's what the word means.
And you're not going to even get close to the cabinet in a data center with a set of bolt cutters. But even if you did, you brought the wrong tool, because they're not padlocked.
Bolt cutters will probably cut through the cabinet door or side if you can find a spot to get them started and you have a lot of time.
Otoh, maybe you've got a cabinet in a DC with very secure locks from Europe.... But all are keyed alike. Whoops.
A drill would be easier to bring in (especially if it just looks like a power screwdriver) and probably get in faster though. Drill around the locks/hinges until the door wiggles off.
I'd go with a drill -- but I'm not sure what realistic threat actor would even be able to get to the cabinet in any decent data center.
Because it's a secret, we don't know if it's mom's basement where the door doesn't really lock anyways, just pull it real hard, or if it's at Uncle Joey's with the compound and the man trap and laser sensors he bought at government auction through a buddy who really actually works at the CIA.
"F-Droid is not hosted in a data centre with proper procedures, access controls, and people whose jobs are on the line. Instead it's in some guy's bedroom."
It could just be a colo; there are still plenty of data centres around the globe that will sell you space in a shared rack with a certain power density per U of space. The list of people who can access that shared locked rack is likely a known quantity with most such organisations, and I know in the past we had some details of the people who were responsible for it.
In some respects, having your entire reputation on the line matters just as much. And sure, someone might have a server cage in their residence, or maybe they run their own small business and it's there. But the vagueness is troubling, I agree.
A picture of the "living conditions" for the server would go a long way.
> State actor? Gets into data centre, or has to break into a privately owned apartment.
I don’t think a state actor would actually break in to either in this case, but if they did then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have a power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
> Data centers are built with redundant network connectivity, backup power, and fire suppression. [...] The question is their relative frequency, which is where the data center is far superior.
Well, I remember one incident where a 'professional' data center burned down, including the backups:

https://en.wikipedia.org/wiki/OVHcloud#Incidents
I know of no such incident for basement hosting.
Doesn't mean much. I'm just a bit surprised so many people are worried because of the server location and no one had mentioned yet the quite outstanding OVH incident.
>Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet
Or just a warrant and a phone call to set up remote access? In the UK under RIPA you might not even need a warrant. In USA you can probably bribe someone to get a National Security Letter issued.
Depending on the sympathies of the hosting company's management you might be able to get access with promises.
I dare say F-Droid trust their friends/colleagues more than they trust randos at a hosting company.
As an F-Droid user, I think I might too? It's a tough call.
The set of people who can maliciously modify it is the people who run f-droid, instead of the cloud provider and the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You actually do not have to trust the people who run f-droid for those apps whose maintainers enroll in reproducible builds and multi-party signing, which only F-Droid supports; none of the alternatives do.
That looks cool, which might just be the point of your comment, but I don't think it actually changes the argument here.
You still have to trust the app store to some extent. On first use, you're trusting f-droid to give you the copy of the app with appropriate signatures. Running in someone else's data-center still means you need to trust that data-center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
F-Droid makes the most sense when shipped as the system app store, along with pinned CA keychains, as CalyxOS did. Ideally F-Droid is compiled from source and validated by the ROM devs.
The F-droid app itself can then verify signatures from both third party developers and first party builds on an f-droid machine.
For all its faults (of which there are many), it is still a leaps-and-bounds better trust story than, say, Google Play. Developers can only publish code, and optionally signatures, but not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app and you end up not having to trust anything but the f-droid app itself on your device.
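As an illustration of that idea, here's a minimal sketch of an N-of-M agreement check over rebuild attestations. The attestation format is hypothetical and signature verification is omitted; it is not F-Droid's actual metadata or client logic:

```python
import hashlib

def verify(apk_bytes: bytes, attested_hashes: dict[str, str], threshold: int) -> bool:
    """Accept the APK only if at least `threshold` independent rebuilders
    attest to the same SHA-256 digest we compute locally.

    `attested_hashes` maps a builder identity to the hex digest it signed;
    checking the signatures on those attestations is omitted for brevity.
    """
    digest = hashlib.sha256(apk_bytes).hexdigest()
    agreeing = sum(1 for h in attested_hashes.values() if h == digest)
    return agreeing >= threshold

# Usage sketch: require 2-of-3 independent builders to agree before installing.
# verify(apk, {"builder-a": h1, "builder-b": h2, "builder-c": h3}, threshold=2)
```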
None of this mitigates the fact that a priori you don't know if you're being served the same package manifest/packages as everyone else - and as such you don't know how many signatures any given package you are installing should have.
Yes, theoretically you can personally rebuild every package and check hashes or whatever, but those are preventative steps that no reasonable threat model assumes you are taking.
Why have we normalized "app stores" that build software whose authors most likely already provide packages themselves?
I've been using Obtainium more recently, and the idea is simple: a friendly UI that pulls packages directly from the original source. If I already trust the authors with the source code, then I'm inclined to trust them to provide safe binaries for me to use. Involving a middleman is just asking for trouble.
App stores should only be distributors of binaries uploaded and signed by the original authors. When they're also maintainers, it not only significantly increases their operational burden, but requires an additional layer of trust from users.
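For what it's worth, the "pull straight from the author" flow is simple to sketch against GitHub's public releases API (the repo name below is just a placeholder; projects hosted elsewhere would need a different source):

```python
import json
import urllib.request

# Example only: any repo that publishes .apk files as release assets would work.
REPO = "owner/app"  # hypothetical placeholder, not a real project

def latest_apk_urls(repo: str) -> list[str]:
    """Return download URLs of APK assets on the latest GitHub release."""
    api = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(api) as resp:
        release = json.load(resp)
    return [
        asset["browser_download_url"]
        for asset in release.get("assets", [])
        if asset["name"].endswith(".apk")
    ]

print(latest_apk_urls(REPO))
```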
I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
A "single server" covers a pretty large range of scale, its more about how F-droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either; I assumed it meant someone's home, AKA residential power and internet.
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about the backup setup they have.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution and much of the software that powers the internet is maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
Modern machines reach really mental levels of performance when you think about it, and for a lot of small-scale things like F-Droid I doubt it takes much hardware to actually host it. A lot of it is going to be static files, so a basic web server could push through hundreds of thousands of requests and, even on a modest machine, saturate 10 Gbps, which I suspect is enough for what they do.
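For rough scale (the ~25 MB average download size is my own assumption, not a number from the post):

```python
# Illustrative throughput math only; the average download size is a guess.
link_gbps = 10
avg_download_mb = 25          # assumed average APK size

bytes_per_second = link_gbps * 1e9 / 8
downloads_per_second = bytes_per_second / (avg_download_mb * 1e6)
downloads_per_day = downloads_per_second * 86_400

print(f"~{downloads_per_second:.0f} downloads/s, ~{downloads_per_day/1e6:.1f}M/day at line rate")
```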
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone’s basement, but who really knows; they aren't exactly handing out details.
Which is itself kind of suspicious - why can't they say "yeah we pay for Colo in such-and-such region" if that is what they are doing? Why should that be a secret?
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than an RPi 5.
> 12-year old hardware is only marginally faster than a RPi 5
My 14yo laptop-used-as-server disagrees. Also keep in mind that CPU speeds barely improved between about 2012 and 2017, and 2025 is again a lull https://www.cpubenchmark.net/year-on-year.html
I'm also factoring in the ability to use battery bypass in the phones I buy now: because they are so powerful, I might want to use them as free, noiseless servers in the future. You can do a heck of a lot on phone hardware nowadays, paying next to nothing for power and nothing extra on your existing internet connection. An RPi 5 is in that same ballpark.
Plus the fact that it's been running for 5 years. Does that mean they bought 7 year old hardware back then? Or is that just when it was last restarted?
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 CPU, $150 RAM, which leaves $100 for storage; you'd still likely need a power supply and case.)

https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...

https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...

For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding-edge hardware.
> not hosted in just any data center [...] a long time contributor with a proven track record of securely hosting services
This is ambiguous, it could mean either a contributor's rack in a colocation centre or their home lab in their basement. I'd like to think they meant the former, but I can't deny I understood the latter in my first read.
I think all the criticism of what F-Droid is doing here (or perceived as doing) reflects more on the ones criticising than the ones being criticised.
How many things have gone upside down even though all the "right" things were done (corporate governance, cloud-native deployment, automation, etc.)? The truth is that none of these processes are actually going to make things more secure, and many projects went belly up despite following these kinds of recommendations.
That being said, I am grateful to F-Droid fighting the good fight. They are providing an invaluable service and I, for one, am even more grateful that they are doing it as uncompromisingly as possible (well into discomfort) according to their principles.
I wonder if anyone knows about Droid-ify. Is it a safe option, or is it better to stay away from it?
It showed up one day while I was searching for why F-Droid was always so extremely slow to update and download... then, trying Droid-ify, that was never a problem any more; it clearly had much better connectivity (or simply fewer users?).
Good. But I wish PostmarketOS supported more devices. On battery, tons of kernel patches could be applied per device, plus a config package, in order to achieve the best settings. On software and security... you will find more malware in the Play Store than in the repos from PmOS/Alpine.
I know it's not a 100% libre (FSF) system, but that's a much greater step towards freedom than Android, where you don't even own your device.
The issue with Linux-based phones is and remains apps. Waydroid works pretty well, but since you need to rely on it so much, you are better off using Graphene or Lineage in the first place.
It's frankly embarrassing how many of the comments on this thread are some version of looking at the XKCD "dependency" meme and deciding the best course of action is to throw spitballs at the maintainers of the critical project holding everything else up.
F-Droid is nowhere near being a critical project holding Android up. The Play Store and the Play Services themselves are much more critical. Being open source doesn't make you immune from criticism for not following industry standards, or from being called out for poor security.
> The Play Store, and the Play Services themselves are much more critical.
Critical for serving malware and spyware to the masses, yes. GrapheneOS is based on Android and is far better than a Googled Android variant precisely because it is free of Google junk and OEM crapware.
If you have nothing to install on your device, what's the point of being able to? For me, f-droid is a cornerstone in the android ecosystem. I could source apks elsewhere but it would be much more of a hassle and not necessarily have automatic updates. iOS would become a lot more attractive to me if Android didn't have the ecosystem that's centered around the open apps that you can find on f-droid
At the very least, it's reasonable to expect the maintainers of such a project to be open about their situation when it's that precarious. Why wouldn't you take every opportunity to let your users and downstream projects know that the dependency you're providing is operating with no redundancy and barely enough resources to carry on when things aren't breaking? Why wouldn't they want to share with a highly technical audience any details about how their infrastructure operates?
They're building all the software on a single server, and at best their fallback is a 12 year old server they might be able to put back in production. I'm not making any unreasonable assumptions, and they're not being forthcoming with any reassuring details.
I think both of those POVs are wrong. The whole thing about F-Droid is that they have worked hard on not being a central point of trust and failure. The apps in their store are all in a repo (https://gitlab.com/fdroid/fdroiddata) and they are reproducibly built from source. You could replicate it with not too much effort, and clients just need to add the new repository.
> Another important part of this story is where the server lives and how it is managed. F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol. if they're gonna use gitlab just use a proper setup - bigco is already in the critical path...
It has hosted quite a few famous services.
I doubt OSU is going to host F-Droid. It doesn't even sound like F-Droid would want them to host it.
It's a critical load-bearing component of FOSS on Android.
IDK if they could bag this kind of grant every year, but isn't this the scale where cloud hosting starts to make sense?
Or does it also serve the APKs?
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.
I don't know what kind of rates are available to non-profits, but with 400k in hand you can find nicer rates than 3.3% (as of today, at least).
that covers quite a few colo possibilities.
The jury's still out on whether or not this is a good thing.
Of course you have to buy the switches and servers…
They can then probably whip up a new hosted server to take over within a few days, at most. Big deal.
They are not hosting a critical service, and running on donations. They are doing everything right.
Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem.
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, it could be in a professional server room of the company ran by the mentioned contributor.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
Also, even 12-year-old hardware is wicked fast.
Not reassuring.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
as a year long f-droid user I can't complain
I agree that "behind someone's TV" would be a terrible idea.
> The previous server was 12 year old hardware
A Dell R620 is over 12 years old and WAY faster than a RPi 5 though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than a RPi.
Saying this on HN, of course.
Also, no details on the hardware?
Not to mention it also simplifies the security of controlling signing keys significantly.
Brought to you by the helpful folks who managed to bully WinAmp into retreating from open source. Very productive.