Is your time worth more than what the fully managed services on AWS cost? And I mean that quite literally in the sense of your billable hours.
If you like spending time manually configuring Linux servers, and don't have anything else that would generate better value for you to do, by all means go for it.
For small pet projects with no customers, a cheap Hetzner box is probably fine. For serious projects with customers that expect you to be able to get back on your feet quickly when shit hits the fan, or teams of developers that lose hours of productivity when a sandbox environment goes down, maybe not so much.
Is AWS more expensive than the salary of the infra guy you would need to hire to do all this stuff by hand? Probably not.
why? why be so obnoxious to other people who you claim are being obnoxious to you. no need to read your blog post now.
He could have summed up with "AWS is expensive, host your own server instead".
vCPU, IOPS, transfer fees, storage -- they are all resources going into a pool.
If Hetzner is giving you 10TB for $100, then host your static files/images there and save $800.
Apps are very modular. You have services, asyncs, LBs, static files. Just put the compute where it is most cost effective.
You don't have to close your AWS account to stick it to the man. Like any utility, just move your resources to where they are most affordable.
I built my own DIY cloud, a minimal self-hosted setup that lets me deploy apps, databases, and backups on cheap VPS or bare-metal servers. At first I just wanted to save money, but I realized that managing it can easily become a full-time SRE job.
Still, for small, experimental, or hobby projects, this setup works perfectly. It keeps costs predictable and gives full control without needing large-scale infrastructure.
https://diy-cloud.remikeat.com
Would love to hear from others who tried leaving managed clouds behind.
Both of these trucks can technically be used to pick up groceries and commute. But, uh, if you bought the semi-truck to get groceries and commute? Nobody scammed you; you bought the wrong truck. You don't have to buy the biggest, most expensive truck to do small jobs. But also, just because there's a cheaper truck available, doesn't mean the semi-truck is overpriced or a scam. The semi is more expensive for a reason.
I wonder about people who write articles like these. I imagine at one point he believed he had to use the cloud, so he started using it without understanding what he was doing. As a result, he was charged a ton of money. He then found out there were cheaper, simpler ways to do what he wanted. And so, feeling hurt, embarrassed, and deceived, he writes this long screed, accusing people of scamming him - I mean scamming you - because you (not him!) could not possibly need to use the cloud, even though you (not him!) assumed you had to use the cloud.
Yes, dude. The cloud is expensive. Sorry you found out the hard way. And for what it's worth, you don't need a datacenter either; stick a 1U in your closet and call it a day.
I don't think AWS is particularly expensive at all. In fact, it's rather cheap at quite a few things if you know what you're doing and use the right services. I've got all my server backups going to S3, for which I pay a whopping penny a month. I also host a handful of Docker images on ECR for 7 cents a month - way cheaper than any other private hosting service I know of.
Now RDS, and the equivalent at every other hosting provider I know of, gets quite pricey indeed. IMO, you shouldn't bother unless you have pretty serious needs for reliability and speed. Apt install your DB server of choice on bare metal goes quite far. Or possibly pull a container of it into whatever you're using to manage containers.
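For what it's worth, the DIY path really is about one command either way; a rough sketch, with the package and image names as examples rather than recommendations:

sudo apt install postgresql        # bare metal (Debian/Ubuntu): installs and starts a local PostgreSQL server
# or, if you already manage containers:
docker run -d --name pg -p 5432:5432 -e POSTGRES_PASSWORD=changeme -v pgdata:/var/lib/postgresql/data postgres:16

Backups, upgrades and tuning are still on you, of course.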
Strawman arguments, ad hominem attacks and Spongebob mocking memes, and the casual venturing into conspiracy theories and malicious intentions...
> Why do all these people care if I save more money or not? ... If they’re wrong, and if I and more people like me manage to convince enough people that they’re wrong, they may be out of a job soon.
I have a feeling AWS is doing fine without him. Cloud is one of the fastest growing areas in tech because their product solves a need for certain people. There is no larger conspiracy to keep cloud in business by silencing dissent on Twitter.
> You will hear a bunch of crap from people that have literally never tried the alternative. People with no real hands-on experience managing servers for their own projects for any sustained period of time.
This is more of a rant than a thoughtful technical article. I don't know what I was expecting, because I clicked on the title knowing it was clickbait, so shame on me, I guess...
Is this what I'm missing by not having Twitter?
Source: I worked at such a company.
That being said, the cloud does have a lot of advantages:
- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.
- You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware) and everything is available right now [0]
- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM
- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers
Also, engineering time and time in general can be expensive. If you are a solo entrepreneur or a slow growth company, you have a lot of engineering time for basically free. But in a quick growth or prototyping phase, not to speak of venture funding, things can be quite different. Buying engineering time for >150€/hour can quickly offset a lot of saving [1].
Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
[1] Just to be fair, debugging cloud errors can be time consuming, too, and experienced AWS engineers will not be cheaper. But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.
This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner, and developers must stay disciplined to keep components portable from day one, but you can get a lot of mileage out of free credits without burning dollars on any infrastructure. The biggest challenge becomes finding the time to perform these migrations among other competing priorities, such as new feature development, especially if you're growing fast.
Our startup is mostly built on Google Cloud, but I don't think our sales rep is very happy with how little we spend or that we're unwilling to "commit" to spending. The ability to move off of the cloud, or even just to another cloud, provides a lot of leverage in the negotiating seat.
Cloud vendors can also lead to an easier risk/SLA conversation for downstream customers. Depending on your business, enterprise users like to see SLAs and data privacy laws respected around the globe, and cloud providers make it easy to say "not my problem" if things are structured correctly.
My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time consuming, estimating what the right hardware configuration would be for your business was tricky, and scaling different services independently was impossible. Also, multi-regional redundancy was rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy?[1]).
AWS fixed much of that. But maybe things have changed in ways that meaningfully change the calculus?
[1] https://www.squarespace.com/press-coverage/2012-11-1-after-s...
At the time I did this no one had good gaming CPUs in the cloud, they are still a bit rare really, especially in VPS offerings, and I was hosting a gaming community and server. So I built a 1U chassis with two machines in it and ran a public and a private server, each with RAID 1 drives and redundant power. Ran that for a gaming server for many years until it was obsolete. It wasn't difficult and I think the machine was about £1200 in all, which for 2 computers running game servers wasn't too terrible.
I didn't do this because it was necessarily cheaper, I did it because I couldn't find a cloud server to rent with a high clockspeed CPU in it. I tested numerous cloud providers, sent emails asking for specs and after months of chasing it down I didn't feel like I had much choice. Turned out to be quite easy and over the years it saved a fortune.
The biggest threat to cloud vendors is that everyone wakes up tomorrow and cost optimizes the crap out of their infrastructure. I don’t think it’s hyperbolic to say that global cloud spending could drop by 50% in 3 months if everyone just did a good audit and cleaned up their deployments.
> The whole debate of “is this still the cloud or not” is nonsense to me. You’re just getting lost in naming conventions. VPS, bare metal, on-prem, colo, who cares what you call it. You need to put your servers somewhere. Sure, have a computer running in your mom’s basement if that makes you feel like you’re exiting the cloud more, I’ll have mine in a datacenter and both will be happy.
However, one situation where I think the cloud might be useful is for archive storage. I did a comparison between AWS Glacier Deep Archive and local many-hard-drive boxes, for storing PB-scale backups, and AWS just squeaked in as slightly cheaper, but only because you only pay for the amount you use, whereas if you buy a box then you have to pay for the unused space. And it's off-site, which is a resilience advantage. And the defrosting/downloading charge was acceptable at effectively 2.5 months' worth of storage. However, at smaller scales you would probably win with a small NAS, and at larger scales you'd be able to set up a tape library and fairly comprehensively beat AWS for price.
I once got into an argument with a lead architect about it, and it's really easy to twist the conversation into "don't you think we'll reach that scale?" to justify complexity.
The bottom line is for better or worse, the cloud and micro services are keeping a lot of jobs relevant and there's no benefit in convincing people otherwise
On AWS, an Aurora RDS instance is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. Not even talking about the level of compliance I get from having every layer encrypted when my hosted box is just a screwdriver away from data getting out the old school way.
When I'm small enough or big enough, self-managed makes sense and probably is cheaper. But when getting the right people with enough redundancy and knowledge becomes the expensive part...
But actually - I've never seen this in any of these arguments so far. Probably because the actual time required to manage a db server is really unpredictable.
* off-site db backups
* a guaranteed db restore process
* auditable access to servers
* log persistence and integrity
* timely security patching
* intrusion detection
so that my employer can save money.
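For the first two items on that list, a minimal sketch of what the work can look like (tool choice and bucket names here are just examples, not what any particular employer uses):

pg_dump -Fc mydb > /backups/mydb-$(date +%F).dump                 # nightly logical dump
restic -r s3:s3.example.com/db-backups backup /backups            # encrypted, off-site copy (restic needs RESTIC_PASSWORD and storage credentials in the environment)
createdb mydb_restore_test && pg_restore -d mydb_restore_test /backups/mydb-$(date +%F).dump   # a restore process is only "guaranteed" if you actually rehearse it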
However, there are cases where being able to spin down the server, and not pay for downtime is useful - like 36-core Yocto build machines.
Agreed. These sort of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.
I think it is a lot safer for backups to be with an entirely different provider. It protects you in case of account compromise, account closure, disputes.
If using cloud and you want to be safe, you should be multi-cloud. People have been saved from disaster by multi-cloud setups.
> You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware)
Not true for VPSes or rented dedicated servers either.
> Peak-heavy loads can be a lot cheaper.
they have to be very spiky indeed though. LLMs might fit but a lot of compute heavy spiky loads do not. I saved a client money on video transcoding that only happened once per upload, and only over a month or two a year, by renting a dedi all year round rather than using the AWS transcoding service.
> Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
You have to do work to ensure things run across multiple availability zones (and preferably regions) anyway.
> But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.
You have more forced upgrades.
An unmanaged database will only need a lot of work if operating at large scale. If you are, then it's probably well worth employing a DBA anyway, as an AWS or similar managed DB is not going to do all the optimising and tuning a DBA will do.
The "is this cloud or not" debate in the piece makes perfect sense. Who cares whether Hetzner is defined as "the cloud" or not? The point is, he left AWS without going to Azure or some other obvious cloud vendor. He took a step towards more hands on management. And he saved a ton of money.
If you can't drive to the location where your stuff is running, and then enter the building blindfolded, yet put your hands on the correct machine, then it's cloud.
Besides, you can just put it behind cloudflare for free.
The entire site is cached + Cloudflare sits on top of everything. I just ran a couple of performance tests under the current HN traffic (~120 concurrent visitors) and everything looks good, all loads under 1 second. The server is quite happy at an average load of 0.06 right now, not even close to breaking a sweat.
Turns out you can get off the cloud and hit the frontpage of HN and your site will be alright.
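A quick way to sanity-check a setup like that from the outside (hostname is a placeholder):

curl -sI https://example.com/ | grep -i cf-cache-status    # HIT means Cloudflare served it from cache, never touching the origin
uptime                                                      # load average on the origin box itself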
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience in hosting their own servers (or at least a homelab person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who have hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
AWS + Azure have both gone down with major outages individually more over the last 10 years than any of the servers in companies I worked with in the 10 years before that.
And in comparable periods, not a single server failed or disk failed or whatever.
So I get that SOME companies need hot standby servers, but almost no company, no SaaS, no startup, actually does.
Because if it were that mission critical, they would have already had to move off the cloud due to how frequently AWS/Azure/etc. have gone down over the last 10 years, often for half a day or so.
For personal projects, honestly, the built in roles AWS provides are okay enough for some semblance of least privilege x functionality IMO.
Plus, most of AWS's documentation tells you the specific policy JSON to use if you need to do XYZ thing, just fill in the blanks.
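As a rough sketch of what that fill-in-the-blanks workflow looks like (the role, policy and bucket names are made up):

aws iam put-role-policy \
  --role-name my-app-role \
  --policy-name s3-read-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
    }]
  }'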
https://github.com/ipfs/kubo/issues/10327
https://discuss.ipfs.tech/t/moved-ipfs-node-result-netscan-d...
>This happens with Hetzner all the time because they have no VLANs and all customers are on a single LAN and IPFS tries to discover other nodes in the same LAN by default.
When did Linode and DO get dropped from being part of the cloud?
What used to separate VPS from cloud was resources with per-second billing, which DO and Linode, along with a lot of 2nd tier hosting, also offer. They are part of the cloud.
Scaling used to be an issue, because buying and installing your hardware, or sending it to a DC to be installed and ready, took too much time. Dedicated server solutions weren't big enough at the time. And the highest core count at the time was an 8-core Xeon CPU in 2010. Today we have EPYC Zen 6c at 256 cores and likely double the IPC. Scaling issues that used to require a rack of servers can now be handled by a single server, with everything fitting inside RAM.
Managed database? PlanetScale or Neon.
A lot of issues for medium to large size project that "Cloud" managed to solve are no longer an issue in 2025. Unless you are top 5-10% of project that requires these sort of flexibilities.
You'd be amazed by how far you can get with a home linux box and cloudflare tunnels.
1. For small stuff, AWS et al aren't that much more expensive than Hetzner, mostly in the same ballpark, maybe 2x in my experience.
2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.
I absolutely prefer self hosting on root servers, it has always been my go to approach for my own companies, big and small stuff. But for people that can't or don't want to mess with their infrastructure themselves, I do recommend the cloud route even with all the current anti hype.
It's really shitty that we all need to pay this tax, but I've been just asked about whether our company has armed guards and redundant HVAC systems in our DC, and I wouldn't know how to do that apart from saying that 'our cloud provider has all of those'.
When does the cloud start making sense?
I had someone on this site arguing that Cloudflare isn't a cloud provider...
2x is the same ballpark???
At first I remember thinking they were behind the times, but now I'm certain they've avoided wasting a fortune on the cloud.
BTW in case you've missed it [1], Anthropic is offering $250 free credits till Nov 18 for their new Claude Code on the Web [2] for Claude Pro/Max users.
Another advantage is that if you aim to provide a global service consumed throughout the world then cloud providers allow you to deploy your services in a multitude of locations in separate continents. This alone greatly improves performance. And you can do that with a couple of clicks.
What is saving? _Spending less_, that's all. Saving generates no income, it makes you go broke slower.
Independent of the price or the product, you can never save more than factor 1.0 (or 100%).
Wasn't there a guy on TV who wanted to make prices go down 1500%? Same BS, different flavor.
Right. But none of the cloud providers encourage that mode of thinking, since they all have completely different frontends, APIs, different versions of the same services (load balancers, storage) etc. Even if you standardize on k8s, the implementation can be chalk and cheese between two cloud providers. The lock-in is way worse with cloud providers.
Renting dedicated servers was really expensive. To the extent that it was cheaper for us to buy a 1U server and host it in a local datacenter. Maintaining that was a real pain. Getting the train to London to replace a hard drive was so much fun. CDNs were "call for pricing". EC2 was a revelation when it launched. It let us expand as needed without buying servers or paying for rack space, and try experiments without shoving everything onto one server and fiddling with Apache configs in production. Lambda made things even easier (at the expense of needing new architectures).
The thing that has changed is that renting bare metal is orders of magnitude cheaper, and comparable in price to shared hosting in the bad old days.
I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall with my own experiences, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS.
Also AWS used to give out free credits like free candy. I bet most of this is vendor lock in and a lot of institutional brain drain.
Everything else was just reference material for how to sell it to your management.
https://learn.microsoft.com/en-us/azure/cloud-adoption-frame...
TL;DR version - its about money and business balance sheets, not about technology.
For businesses past a certain size, going to cloud is a decision ALWAYS made by business, not by technology.
From a business perspective, having a "relatively fixed" ongoing cost (which is an operational expense, i.e. OpEx), even if it is significantly higher than what it would cost to do things with an internal buy and build-out (which is a capital expense, i.e. CapEx), makes financial planning, taxes and managing EBITDA much easier.
Note that no one on the business side really cares what the tech implications are as long as "tech still sorta runs mostly OK".
It also, via financial wizardry, makes tech cost "much less" on a quarter over quarter and year over year basis.
Because of proximity it was easy to run over and service the systems physically if needed, and we also used modem based KVM systems if we really needed to reboot a locked up system quickly (not sure that ever actually happened!).
I'm sure customer-owned hardware placed in a datacenter rack is still a major business.
Then the article should be titled as
"Send this article to your friend who still thinks that AWS is a good idea"
or
"Save costs by taking a step towards more hands on management"
or
"How I saved money moving from AWS to Hetzner"
I'm perfectly happy to use cloud stuff for work. I will never ever give any cloud platform my personal credit card. I don't want to be the position of a misconfiguration leaving my finances at the mercy of the AWS forgiveness gods.
Disclosure: I work for Cloudflare, but 25 years ago served my own site from an iBook under the stairs.
Learn 2 load balance
And I've had enough cases where the company relied on just that one guy who knew how things worked - and when they retired or left, you had big work ahead understanding the systems that guy maintained and never let anyone else touch. Yes, this might also be a leadership issue - but it's also an issue if you have no one else with that specific knowledge. So I prefer standardized, prepackaged, off the shelf solutions that I can hire replaceable people for.
I had two projects reach the front page of HN last year, everything worked like a charm.
It's unlikely I'll ever go back to professional hosting, "cloud" or not.
Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4 - 5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I can further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency is to just run the binary in a VPS close to my users. However, the current setup works so well that I just see no point in changing what little "infrastructure" I've built. The other cool thing is the fact that the backend + litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare.
It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub actions, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base model M4 is really fast at compiling code compared to just about every single cloud computer I've ever used at previous jobs.
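For anyone curious, a rough sketch of the SQLite + litestream setup described above (paths, sizes and the bucket are placeholders; the pragmas are per-connection, so the app sets them when it opens the database):

# pragmas the app issues on each connection:
#   PRAGMA mmap_size = 17179869184;   -- mmap up to 16 GB of the database file
#   PRAGMA temp_store = MEMORY;       -- keep temp tables in RAM
litestream replicate /srv/app/app.db s3://my-backups/app.db    # continuously stream WAL changes off-site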
If you're at an early/smaller stage you're not doing anything too fancy either way. Even self hosted, it will probably be easy enough to understand that you're just deploying a rails instance for example.
It only becomes trickier if you're handling a ton of traffic or apply a ton of optimizations and end up already in a state where a team of sysadmin should be needed while you're doing it alone and ad-hoc. IMHO the important part would be to properly realize when things will get complicated and move on to a proper org or stack before you're stuck.
There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)
I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.
Quant happy, boss happy, all good. Then the boss goes for lunch with someone and comes back slightly disturbed. We were not buzzword compliant. Apparently the other guy made him feel that he was using outdated tech by not being on AWS, using auto-scaling, etc.
Here I am, from a background where my first language was 8086 assembly, and compactness was important to me. I remember thinking, "This whole thing could run on a powerful calculator, except for the RAM requirement".
It was a good lesson for me. Most CTOs know this bias and have unnecessarily huge and wasteful budgets but make sure they keep the business heads happy in the comfort that the firm is buzzword compliant. Efficiency and compactness are a marketing liability for IT heads!
The issues are mostly in the SME segment and it really depends on what your business is. Do you need a completely separate system for each customer? In that case, AWS is going to be easier and probably cheaper. Are you running a constant load 24/7? Then you should consider buying your own servers.
It's really hard to apply a blanket conclusion to all industries, in regards to cloud cost and whether or not it's worth it. My criticism in regards to opting for cloud is that people want all the benefits of the cloud, but not use any of the features, because that would lock them into e.g. AWS. If you're using AWS as a virtual machine platform only, there's always going to be cheaper (and maybe better) options.
For this guy who spent thousands per month on AWS, using AWS makes no sense.
For a guy like me whose AWS bill was 38 cents last month, AWS makes a lot of sense. Currently I have a small KaiOS app with about 1000 unique visitors per month hosted on AWS. I used to self-host on an old laptop, but it costs less money to host on AWS (1 kWh costs me 19.89 cents and my laptop uses about 4.5 kWh per month). Not to mention the convenience of not having to restart my server for software updates, or update my own certs every 3 months, or fuddling with nginx configurations.
And if my AWS bill ever starts climbing, I will just self-host once again.
Clarity of expression is a superpower
I don’t feel it’s pedantic at all.
Cost of item = 10
First discounted cost of item = 9
=> First saving = 1
Second discounted cost of item = 6
=> Second saving = 4
Second saving is 4x first saving.
(Edit: formatting)
Edit: Thinking about this some more: You could say you are saving 9x [of the new cost], and it would be a correct statement. I believe the error is assuming the reference frame is the previous cost vs the new cost, but since it is not specified, it could be either.
Bang for the buck is unmatched, and none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be. I find it a blissful way to work.
Second, egress data being very expensive with ingress being free has contributed to making them sticky gravity holes.
Edit: although actually many people on here are American so I guess for you AWS is legally a person...
Well there's no danger of that. Even with AWS telling you exactly how to save money (they have a thousand different dashboards showing you where you can save, and even support will tell you), it'll still take you months to work through all the cost optimization changes. Since it's annoying and complicated to do, most people won't do it.
Their billing really is ridiculous. We have a TAM and use a reseller, and it's virtually impossible for us to see what we actually spend each month, what with the reseller discounts, enterprise PPA, savings plans, RIs, and multiple accounts. Their support reps even built us some kinda custom BI tool just to look at costs, and it's still not right.
We have to use cloud because we're at the low end of 10^5 servers. Once you hit the high end of 10^3 this is really where you need to be.
Everything we're doing is highly inefficient because of decades of fast and loose development by immature software engineers...and having components in the stack that did the same.
If I had 5 years to rewrite and redesign the product to reflect today's computing reality, I could eliminate 90%+ of the cost. But I'll never get that kind of time. Not with 1000 more engineers and 1000 more years and the most willing customers.
You might get lucky enough that you and a bunch of your customers are so fed up with your company that you get to create the competition.
One thing to remember is that you do need to treat your servers as "cloud servers" in the sense that you should be able to re-generate your entire setup from your configuration at any time, given a bunch of IPs with freshly-installed base OSs. That means ansible or something similar.
If you insist on using cloud (virtual) servers, do yourself a favor and use DigitalOcean, it is simple and direct and will let you keep your sanity. I use DO as a third-tier disaster recovery scenario, with terraform for bringing up the cluster and the same ansible setup for setting everything up.
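A sketch of what that "regenerate from scratch" flow tends to look like in practice (file and inventory names are hypothetical):

terraform apply                                  # bring up fresh droplets and record their IPs
terraform output -json > outputs.json            # feed the new IPs into your inventory however you prefer
ansible-playbook -i inventory/do.ini site.yml    # converge freshly installed base OSs into the full setup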
I am amused by the section about not making friends saying this :-) — most developer communities tend to develop a herd mentality, where something is either all the rage, or is "dead", and people are afraid to use their brains to experiment and make rational decisions.
Me, I'm rather happy that my competitors fight with AWS/Azure access rights management systems, pay a lot of money for hosting and network bandwidth, and then waste time on Kubernetes because it's all the rage. I'll stick to my boring JVM-hosted Clojure monolith, deployed via ansible to a cluster of physical servers, and live well off the revenue from my business.
I recently read a fantastic comment or article on how developers, and IT workers in general, will be the first to be automated away, and groups like lawyers and doctors the last, entirely unrelated to how good LLMs and other AI models are at the specific tasks of each. Reason being that developers fall over themselves to prove how productive they are, happy to use all these tools. Anywhere else is concerned with job protection first and foremost. Metrics like SLOC having come from devs themselves in the chase of a raise.
This paragraph reminds me of that so much. Calling out those who push the cloud for job security. In basically every other sector - where this kind of behavior is 10x more prevalent - no one calls this out, there's a sense of solidarity and mutual understanding that this kind of thing is necessary.
This isn't judging one or the other to be good or bad. But it's going to cost devs very dearly.
I don't remember where I read it, they put it much better than I can - if anyone here recognizes it please drop a link.
I'm quite sympathetic to on-prem but there's a possible argument against.
Anybody have any guides or guidance on making the migration? I feel comfortable managing the VPS, but not sure what else I'd need to take on by moving to a dedicated server. I imagine hardware monitoring would be one, but what else?
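On the hardware monitoring point, the baseline on a dedicated box is usually just SMART health checks (a sketch; device names will differ):

sudo apt install smartmontools
sudo smartctl -H /dev/sda         # overall health verdict for a SATA drive
sudo smartctl -a /dev/nvme0n1     # full attribute dump for an NVMe drive

smartd, from the same package, can run these checks on a schedule and email you.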
The main reason why I keep coming back to cloud providers is databases. I don't feel comfortable setting up a high-availability db setup, and I don't want the responsibility of managing backups. But if you go to, say, Hetzner, you won't be able to use a cloud database in the same network.
It may seem strange to you, but PHP servers work very well.
I think it's worth a try.
But as soon as you start dealing with data sovereignty for your SaaS app, and need co-lo's in multiple geographies, each with their own contract/billing cycles, HA and backup, the global reach and consistency of the cloud providers starts seeming quite compelling again.
The vast majority of us that are actually technically capable are better served self hosting.
Especially with tools like cloudflare tunnels and Tailscale.
Saved 10x would imply there was an amount being saved that they multipled.
Being pedantic about words means you think effective communication is somehow wrong. Be precise, don’t be pedantic.
Yes you can, because speed has units of inverse time and latency has units of time. So it could be correct to say that cutting latency to 1/10 of its original value is equivalent to making it 10x the original speed - that's how inverses work.
Savings are not, to my knowledge, measured in units of inverse dollars.
Reading author's article:
> For me, that meant:
> RDS for the PostgreSQL database (my biggest monthly cost, in fact)
> EC2 for the web server (my 2nd biggest monthly cost)
> Elasticache for Redis
But why small businesses like them, I can't understand. I host all my stuff on Hetzner. It's easy, it's affordable and I "move fast". I don't need certificates to be able to manage the servers, and there is a ton of information available online. But maybe it's because I'm old, and I used to SSH into a Linux machine, git pull the latest version, and call it a "deploy". But modern developers seem to be afraid of raw machines.
The only thing I miss is a good transactional email provider with PAYG pricing, and EU data residency.
As far as I can tell, you'd almost have to do something with a colo if you didn't want to pay 10x or more for the storage. Are there other options?
However, in most (every?) large organisations, buying a server and putting it in a DC, and looking after it, is hugely time-consuming, lengthy and expensive.
You need to get quotes, approvals, purchase, rack, commission, maintain etc etc. It is usually close to 1 year to get a server.
In some companies, even getting a virtual server takes almost as long!
With AWS, once an account and service is approved, boom, you are done.
How could I not use the cloud?
Backups that are stored with the same provider are good, providing the provider is reliable as a whole.
(Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.")
apt install automysqlbackup autopostgresqlbackup
Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots.
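A minimal sketch of the snapshot route, assuming the whole database lives on a single ZFS dataset (names are examples):

zfs snapshot tank/pgdata@nightly-$(date +%F)     # atomic, point-in-time; crash-consistent as long as all DB files are on this dataset
zfs send tank/pgdata@nightly-$(date +%F) | ssh backup-host zfs receive backup/pgdata    # ship the first full copy off the box; later sends can be incremental (-i)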
> How could I not use the cloud?
Funnily enough, one of my side projects has its (processed) primary source of truth at that exact size. Updates itself automatically every night adding a further ~18-25 million rows. Big but not _big_ data, right?
Anyway, that's sitting running happily with instant access times (yay solid DB background) on a dedicated OVH server that's somewhere around £600/mo (+VAT) and shared with a few other projects. OVH's virtual rack tech is pretty amazing too, replicating that kind of size on the internal network is trivial too.
It’s a paperwork thing, not a financial decision.
On the other hand, if I request a virtual server, it takes less than a week, and I can work with it much more freely.
Sure you can do it cheaper, but money isn’t really the problem. Most cloud spend is less than the cost of a handful of senior devs a month.
Did you try crunching some of the numbers with him? I would hope a quant could also understand following the common wisdom can sometimes cost you more.
If running on your own data center, or renting physical/virtual machines from ie Hetzner, you will pay for that capability overhead for 30.5 days per month, when in reality you only need it for 2-3 days.
With the cloud you can simply scale dynamically, and while you end up paying more for the capacity, you only pay when you use it, meaning you save money for most of the month.
But also, that’s extremely easily handled with physical servers - there are NVMe drives that are 10x as large.
Your use case is the _worst_ use case for the cloud.
I think this is an important point. It's quick.
When cloud got popular, doing what you did could take upwards of 3 months in an organisation, with some being closer to 8 months. The organisational bureaucracy meant that any asset purchase was a long procedure.
So, yeah, the choices were:
1. Wait 6 months to spend out of capex budget
Or
2. Use the opex budget and get something in 10 minutes.
We are no longer in that phase, so cloud services make very little sense now, because you can still use the opex budget to get a VPS and have it going in minutes with automation.
Back when AWS was starting, this would have taken 1-3 days.
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
In 2006 when the first EC2 instances showed up they were on par with an ok laptop and would take 24 months to pay enough in rent to cover the cost of hardware.
Today the smallest instance is a joke and the medium instances are the size of a 5 year old phone. It takes between 3 to 6 months to pay enough in rent to cover the cost of the hardware.
What was a great deal in 2006 is a terrible one today.
Et al. = et alii, "and others".
Etc. = et cetera, "and so on".
Either may or may not apply to people depending on context.
Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all.
Disclosure: I work for Cloudflare.
Learn 2 HA
Learn 2 MFA
Learn 2 backup
Learn 2 recover within RTO
Learn 2 ETL
Learn 2 queue
Learn 2 scale horizontally
Learn 2 audit log
Learn 2 SIEM
Learn 2 continuously gather SOC evidence
...
But a burning data centre is just one problem. Another is e.g. the USE1 issue AWS had. AZs didn't help there. Having compute in another region and rerouting might. But maybe we ain't talking startup concerns here. (If you have a simple enough "app", just deploy again in another region and update DNS.)
This is the general direction in which society is going. There is no dialogue. You are either with us or against us. Sad.
Yes, but not with
> TypeScript and CDK
Unless your business includes managing infrastructure with your product, for whatever reason (like you provision EC2 instances for your customers and that's all you do), there is no reason to shoot yourself in the foot with a fully fledged programming language for something that needs to be as stable as infrastructure. The saying is Infrastructure as Code, not with code. Even assuming you need to learn Terraform from scratch but already know TypeScript, it would still save you time compared to learning CDK, figuring out what is possible with it, and debugging issues down the line.
Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law.
Many small boxes
Pros:
- Cost. Small boxes cost less. Spend less or spend more as needed. If one dies, cheaper to replace.
- Efficiency. Scaling *down* saves money when load is low. Can schedule specific loads to specific boxes.
- Redundancy. Multiple VMs, OSes, network paths, etc. No single point of failure.
- Zero-downtime. Rolling deployments, upgrades means changes with no user impact.
- System bandwidth. More network links, cpus, kernels, disks, etc = more bandwidth, capacity.
- Performance resilience. A heavily loaded app on one server doesn't affect others.
- Immutability. "Pets" rather than "cattle" uses automation to reduce maintenance/instability.
- Scalability. When you run out of resources, adding more is easy, zero impact.
Cons:
- Does not work with applications that require large memory/cpu.
- Inefficient for apps that require shared filesystem access (as opposed to database).
- Requires smarter architecture to reduce long tail of cross-host calls.
- More transient network path failures, troubleshooting issues.
One big box
Pros:
- Allows applications which require large memory/cpu.
- More efficient for apps that share a filesystem.
- Simpler architecture.
- Fewer network path failures.
Cons:
- Large cost that you can't easily reduce as needed.
- Waste (in unused resources) unless load is constant.
- Single point of failure, for reliability and security.
- Upgrades require reboots. App goes down; possibility the server might not boot up properly.
- Single network, cpu, kernel, disks(s), etc become bottlenecks.
- A single heavily-loaded process, excess interrupts, etc can bring down entire system performance.
- Often treated as "pet" rather than "cattle"; creates more maintenance, instability.
- Not scalable.
I guess this is one of those use cases that justify the cloud. It's hard to host that reliably at home.
For some reason people more easily understand the limits of CPU and memory, but overlook disk constantly.
However, I think there's an implicit point in TFA; namely, that your personal and side projects are not scaling to a 12 TB database.
With that said, I do manage approximately 14 TB of storage in a RAIDZ2 at my home, for "Linux ISOs". The I/O performance is "good enough" for streaming video and BitTorrent seeding.
However, I am not sure what your latency requirements and access patterns are. If you are mostly reading from the 12 TB database and don't have specific latency requirements on writes, then I don't see why the cloud is a hard requirement? To the contrary, most cloud providers provide remarkably low IOPS in their block storage offerings. Here is an example of Oracle Cloud's block storage for 12 TB:
Max Throughput: 480 MB/s
Max IOPS: 25,000
https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/bl...
Those are the kind of numbers I would expect of a budget SATA SSD, not "NVMe-based storage infrastructure". Additionally, the cost for 12 TB in this storage class is ~$500/mo. That's roughly the cost of two 14 TB hard drives in a mirror vdev on ZFS (not that this is a good idea btw).
This leads me to guess most people will prefer a managed database offering rather than deploying their own database on top of a cloud provider's block storage. But 12 TB of data in the gp3 storage class of RDS costs about $1,400/mo. That is already triple the cost of the NAS in my bedroom.
Lastly, backing up 12 TB to Backblaze B2 is about $180/mo. Given that this database is for your dev environment, I am assuming that backup requirements are simple (i.e. 1 off-site backup).
The key point, however, is that most people's side projects are unlikely to scale to a 12 TB dev environment database.
Once you're at that scale, sure, consider the cloud. But even at the largest company I worked at, a 14 TB hard drive was enough storage (and IOPS) for on-prem installs of the product. The product was an NLP-based application that automated due diligence for M&As. The storage costs were mostly full-text search indices on collections of tens of thousands of legal documents, each document could span hundreds to thousands of pages. The backups were as simple as having a second 14 TB hard drive around and periodically checking the data isn't corrupt.
(LOL 'customer'. But the point is, when the day comes, I'll be happy to give them money.)
Are ulimits set correctly?
Shall I turn on syn cookies or turn them off because of performance?
What are the things I should know but don't, and that ChatGPT has not told me, because this is more than some intro tutorial on how to run a VPS on DO, so it was never indexed by ChatGPT and the like?
Is all of my software on the server up to date? Is any library I use being exploited? Zero-day attacks are on me too, blocking bots, etc. What if I do some update and it turns out that my Postgres version is not working correctly anymore? This is all my problem.
What if I need to send emails? These days doing this ourselves is a dark art by itself (IP/domain warm-up, checking that my domain hasn't ended up on some spam list, etc.).
What if I need to follow some regulations, like European Union GDPR compliance? Have I done everything needed to store personal data as GDPR requires? Is my DB password stored in a compliant way, or will I face a fine of up to 10% of my income?
This is not a black/white situation as the author tries to present it, and those who use cloud services are not dummies buying some IT version of snake oil.
Your example then was a weasel-worded advert that uses meaningless terminology to make something sound big (savings in this case).
People don't use that expression.
The author expressed the thought in a valid way because we understood their words, but they did not communicate it well because we are confused about their understanding of those ideas.
You were commonly given a network uplink and a list of public IP addresses you were to set up on your box or boxes. IPMI/BMC were not a given on a server so if you broke it, you needed to have remote hands and probably brains too.
Virtualisation was in the early days and most of the services were co-hosted on the server.
Software defined networks and Open vSwitch were also not a thing back then. There were switches with support for VLANs and you might have had a private network to link together frontend and backend boxes.
Servers today can be configured remotely. They have their own management interfaces so you can access the console and install OS remotely. The network switches can be reconfigured on the fly, making the network topology reconfigurable online. Even storage can be mapped via SAN. The only hands on issue is hardware malfunction.
If I were to compare with today, it was like having a wardrobe of Raspberry Pis on a dumb switch, plugging in cables when changes were needed.
Looks about right if you ask me
And learning something arguably better, like Cloudformation / Terraform / SST, is still a hurdle.
That's not a startup if you can't go straight to the founder and get a definite yes/no answer in a few minutes.
How many pets do you want to be tending to? I have 10^5 servers I'm responsible for...
The quantity and methods the cloud affords me allow me to operate the same infrastructure with 1/10th as much labor.
At the extreme ends of scale this isn't a benefit, but for large companies in the middle this is the only move that makes any sense.
99% of posts I read talking about how easy and cheap it is to be in the datacenter all have a single digit number of racks worth of stuff. Often far less.
We operate physical datacenters as well. We spend multiple millions in the cloud per month. We just moved another full datacenter into the cloud and the difference in cost between the two is less than $50k/year. Running in physical DCs is really inefficient for us for a lot of annoying and insurmountable reasons. And we no longer have to deal with procurement and vendor management. My engineers can focus their energy on more valuable things.
Well, 2 commands...

aws rds start-export-task \
  --export-task-identifier <ExportId> \
  --source-arn <SnapshotArn> \
  --s3-bucket-name <Bucket> \
  --iam-role-arn <RoleArn> \
  --kms-key-id <KmsKeyId>

Then copy it down:

aws s3 cp <S3Location> <LocalDir> --recursive

The biggest effort would then be running the Apache Parquet to CSV tool on it.

I agree that it's a suboptimal and click-baity way to phrase it though...
> x literally means multiply
And some use the dot operator or even 2(3) or (2)(3). When programming, we tend to use *.
I keep seeing this take on here and it just shows most people don't actually know what you can do off the cloud. Hetzner allows you to rent servers by the hour, so you can just do that and only pay for the 2-3 days you need them.
Then why does anybody use cloudflare?
Tricky networking though.
Tbf it just sounds...so American, so I assumed, my bad. But East India Company was involved...whew I guess that does make sense, oof.
But if this happens at work and we're using Google Cloud, I say "sorry boss, Google messed up".
If we're running our own servers, who do I blame?
It looks like all the disk-optimized examples on that site (still much more expensive than paying for raw disk, barely 5x cheaper than S3, when a disk-optimized colo solution only has ~3% overhead over the disks themselves) are through some no-name provider "HostKey". I suppose beggars can't be choosers, but in the context of storage (where systemic failures should be accounted for in the model), are you aware of more than one provider with reasonably priced storage?
Example: He was saving $10 every month before the change. Then he switched and now he is saving 10 x $10, ie $100 every month.
But that's not the case here, right? And that's why the parent post is correct.
I/O is hard to benchmark so it's often ignored since you can just scale up your disks. It's a common gotcha in the cloud. It's not a show stopper, but it blows up the savings you might be expecting.
AFAIK, everyone sending automated emails just uses one of the paid services, like sendmail.
>What if I need to follow some regulations, like European Union GDPR compliance? Have I done everything what is needed to store personal data as GDPR requires? Is my DB password stored in a compliant way or I will face a fine up to 10% of my incomes.
What does this have to do with cloud vs non-cloud? You'll need to manage your data correctly either way.
Cloud infra is touted as obviating the need to hire system administrators, but that's a marketing fabrication. Trying to manage infrastructure without the necessary in-house skills is a recipe for disaster, whether it's in the cloud or on-prem.
Multiple millions in the cloud per month?
You could build a room full of giant servers and pay multiple people for a year just on your monthly server bill.
There are also turnkey solutions that allow one to spin up a DB and set up replication and backups inside or outside of the big cloud vendors. That is the point of DB Kubernetes operators, for instance.
Not wanting to deal with backups or HA are decent reasons to put a database in the cloud (as long as you are aware of how much you are overpaying). Not having a good place to put the server is not a good reason.
Also, a SAN is often faster than local disk if you have a local SAN.
This is a hidden cost of self-hosting for many in b2b. It's not just convincing management, it's convincing your clients.
This list looks like FUD, to be honest, trying to scare people. Yes, you should be scared of these things, but none of them are magically solved by hosting your stuff in AWS/Azure/Google or any other cloud provider du jour.
Only if your database files are split across multiple file systems, which is atypical.
And again I'll emphasize proper snapshot, cutting off writes at an exact point in time. A normal file copy cannot safely back up an active database.
The ruling said that since a person has first amendment rights, those same rights extend to a group of people—any group—whether it’s a non profit organization, a corporation, or something else.
Though both of those are probably less than you'd need if you needed a full rack of space, which I assume is part of the reason that pricing is almost always "contact us". I did not bother getting a quote just for the purpose of this comment. But another thing that people need to be less afraid of, when they're looking to actually spend a few digits of money and not just comment about it, is asking for quotes.
But creating an S3 bucket, an IAM role and attaching policies isn't 30 commands.
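For reference, it's roughly three commands (names are placeholders, and the trust policy lives in a local file):

aws s3 mb s3://my-example-bucket
aws iam create-role --role-name my-example-role --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name my-example-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess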
More likely load will spread over time for most scenarios and the server will be ready to handle that with lower hardware specs.
Colo will be cheaper, I'm sure, but it's fundamentally a different comparison: you have to pay for drive failures, networking, bandwidth, remote hands, network switches and so on and so forth.
> The free plan is always free, with hard monthly limits that cannot be exceeded or incur any costs.
The blog entry was wordy and repetitive for what it expressed (AI?), but the cloud argument should boil down to a few simple questions:
- Does it get you regulatory certifications you need?
- Do you need to rapidly scale-up / scale-down?
- Can you afford to hire the (minimal) necessary skills to self-administer servers?
- Can you stay on top of security updates?
Add all that together and you get a few customers who should be using the cloud:
- Regulated companies who want to punt on certs/attestations
- Small/medium growth-oriented startups (unknown needs, low headcount, focus on building product)
- Companies with hardware demand volatility that exceeds their ability to provision it
That's not "all companies" or "no companies" either way, but it is a very large number of companies who are paying cloud premiums without actually needing or benefiting from the cloud value add...

Why would you need to disclose your hosting provider? Is that really a concern for hosted services (and if it is, why isn't the customer hosting it in their cloud?)
Where did you read that? The pricing page says 10 credits per GB, and extra credits can be purchased at $10 per 1,500 credits. So it's more like $0.067/GB.
But if that is your bottleneck you should be upgrading your DB system regardless of whether you’re on cloud or bare metal.
People receive paychecks, pay bills, buy stuff, with holidays (christmas, x-mas, etc) being even busier.
Load does not even out, and when you have 3 million customers or more, the load is not really insignificant. Nor can you just delay it, or rely on eventual consistency.
And, yes it's a different comparison, but that's why I was asking; I was curious if it was even viable (and initial searches had indicated it wasn't). 4-5x cheaper than S3 as a substrate is potentially workable.
Yes, you can solve the problem with sharding and other tricks, but for many banks, the mainframe is still their main data storage, and it has 60+ years of legacy code on it that is not easily or quickly migrated to modern architecture.
Finance is a heavily regulated industry, so there’s a LOT of compliance that needs to happen, like segregation of duty, traceability, accountability, and other ilities.
Yes, it would probably cost less to run on Hetzner (provided their ISO audits are approved by financial authorities), but dynamically spinning up and down servers would cost more.
You also need fallback plans (regulated industry, critical infrastructure, etc).
It has literally taken years to get AWS and Azure approved in EU.
Wouldn’t you need to do the work to shard regardless of where you’re running?
The major difference lies in infrastructure, particularly networking infrastructure. With cloud providers like Azure, AWS, etc, you can provision your vnet layout, and scale "indefinitely" on the same infrastructure. You don't need to provision new hosts, setup new secrets, or anything like that.
If a data center goes down, you can relatively easily switch to another one, though most financial institutions I know of use hot/cold setups, as hot/hot is essentially twice the money, and they rarely go down for long.
Of course it's all just regular servers underneath, so anything possible with AWS and Azure is also possible with other cloud providers, but the tooling simply isn't there (yet?).
Another issue is ISO auditor compliance. Being a regulated industry, finance (in EU at least) needs certain compliance to be fulfilled, not only regarding the services you consume, but also stuff like the physical locations, or being able to physically inspect the data center if auditors require it.
Microsoft and Amazon have this nailed. I've yet to see an EU data center not run by FAANG meet the requirements, not that they can't. My best hope so far is probably "Lidl cloud" (forgot the name).
The compliance issues are another big one though, for organizations that are still scaling up and don’t have that know how using cloud is a huge advantage as well.