It's European rather than from the USA, so it's less dependent on that orange guy in that white/golden house
Their log delivery API is delayed by over 3 days, despite them promising only "up to 5 minutes delay" in their docs: https://docs.bunny.net/cdn/logging
Why isn't it on the status page, you might ask? Oh, that's because a delay is not "critical". But I fear I am losing log lines now; their retention is 3 days.
It's an interesting strategy for them, because it doesn't inspire confidence in me about their other offerings. When they can't reliably operate a log delivery API or be transparent about issues, it's hard to trust them with something as critical as a database.
Any Linux distro can have MySQL or Postgres installed in less than five minutes, and it works out of the box
Even a single core VPS can handle lots of queries per second (assuming the tables are indexed properly and the queries aren't trash)
There are mature open source backup solutions which don't require DB downtime (also available in most package managers)
It's trivial to tune a DB using .conf files (there are even scripts that autotune for you!!!)
Your VPS provider will allow you to configure encryption at rest, firewall rules, and whole disk snapshots as well
And neither MySQL nor Postgres ever seems to go down; they're super reliable and stable
Plus you have very stable costs each month
While in public preview, Bunny Database is free.
When idle, Bunny Database only incurs storage costs. One primary region is charged continuously, while read replicas only add storage costs when serving traffic (metered by the hour).
Reads - $0.30 per billion rows
Writes - $0.30 per million rows
Storage - $0.10 per GB per active region (monthly)

They don't elaborate, but apparently libSQL has an HTTP API called "Hrana": https://github.com/tursodatabase/libsql/blob/main/docs/HRANA... - if that's what they're exposing, wouldn't it make more sense to call it libSQL-compatible or something?
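For a sense of scale, here's a quick back-of-the-envelope calculator for the rates quoted above (my own sketch, not an official estimator; it ignores any free allowances):

```javascript
// Rough monthly cost estimate using the rates quoted in this thread:
// $0.30 per billion rows read, $0.30 per million rows written,
// $0.10 per GB stored per active region per month.
function estimateMonthlyCost({ rowsRead, rowsWritten, storageGB, regions }) {
  const readCost = (rowsRead / 1e9) * 0.30;
  const writeCost = (rowsWritten / 1e6) * 0.30;
  const storageCost = storageGB * 0.10 * regions;
  return readCost + writeCost + storageCost;
}

// Example: 2B reads, 10M writes, 5 GB replicated across 3 regions.
// ≈ $0.60 reads + $3.00 writes + $1.50 storage ≈ $5.10/month.
console.log(estimateMonthlyCost({
  rowsRead: 2e9,
  rowsWritten: 10e6,
  storageGB: 5,
  regions: 3,
}));
```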
> We can’t wait to have this available as a preview later in Q2 and truly make global storage a breeze, so keep an eye out!
then apologised for missing that in September 2023 [2]
> We initially announced that we were working on S3 support for Bunny Storage all the way back in 2022. Today, as 2023 is slowly coming to an end, many of our customers continue to follow our blog, hoping for good news about the release.
changing the roadmap to early 2024 [2]
> But we are working aggressively toward shipping S3 compatibility in early 2024.
That same post also has the beautiful "At bunny.net, we value transparency." quote. It's early 2026, and they're literally ignoring my support requests asking what the roadmap looks like for this now.
So, do not trust their product or leadership at all.
[1] https://bunny.net/blog/introducing-edge-storage-sftp-support... [2] https://bunny.net/blog/whats-happening-with-s3-compatibility...
If not, it seems like it would be quite a bit of work to implement the synchronization... and I don't understand why one would use it otherwise.
Isn't SQLite's lower operational burden the main selling point over Postgres (not one I subscribe to, but that's neither here nor there)? If it's managed, why do I care whether it's SQLite or Postgres? If anything, I would expect Postgres to be the friendlier option, since you won't have to worry about eventually discovering that you actually need some feature it lacks, even if you don't need it at the start of your project. Maybe there are projects that implement SQLite on top of Postgres, so you can gradually migrate away from SQLite if you eventually need Postgres features?
It looks like there might be issues in Italy too.
In addition to the other points brought up, it looks like pricing strongly favors Bunny once you're outside of Cloudflare's free tier.
Per billion rows read: Bunny $0.30, Cloudflare $1.00 (first 25B/month free)
Per million rows written: Bunny $0.30, Cloudflare $1.00 (first 50M/month free)
Per GB stored: Bunny $0.10/region, Cloudflare $0.75 (5GB free)
Bunny also has much better region selection: 41 available vs. Cloudflare's 6 (see https://developers.cloudflare.com/d1/configuration/data-loca...). Even though Bunny charges storage per region used where Cloudflare doesn't, Bunny still comes out cheaper with 7 regions selected. Bunny lets you choose how many and which regions to replicate across; Cloudflare's region replication is an on/off toggle that is in beta and requires you to use "the new Sessions API" (I don't know what this entails).
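To make the storage comparison concrete (rates as quoted in this thread, free-tier allowances ignored):

```javascript
// Bunny charges $0.10/GB per active region; Cloudflare a flat $0.75/GB.
const bunnyStoragePerGB = (regions) => 0.10 * regions;
const cloudflareStoragePerGB = 0.75;

// With 7 regions, Bunny is $0.70/GB, still under Cloudflare's flat rate.
console.log(bunnyStoragePerGB(7) < cloudflareStoragePerGB); // true
// Only from the 8th region onward does Bunny's storage cost pull ahead.
console.log(bunnyStoragePerGB(8) > cloudflareStoragePerGB); // true
```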
The main reason I haven't tried out D1 is that it locks you into using Workers to access the database. Bunny says they have an HTTP API.
I plan to stick with VPSes for compute and storage, but I do like seeing someone (other than Amazon) challenge Cloudflare on their huge array of fun toys for devs to play with.
It depends:
- do you want multi region presence
- do you want snapshot backups
- do you want automated replication
- do you want transparent failover
- do you want load balancing of queries
- do you want online schema migrations with millisecond lock time
- do you want easy reverts in time
- do you want minor versions automatically managed
- do you want the auth integrated with a different existing system
- do you want...
There's a lot that hosted services with extra features can give you. You can do everything on the list yourself of course, but it will take time and unless you already have experience, every point can introduce some failure you're not aware of.
I set up a cron job to store my backups in object storage, but everything felt very fragile, because if any detail in the chain was misconfigured I'd basically have a production database with no working backups. I'd have to watch the database constantly or set up alerts and notifications.
If there were a ready-to-go OSS Postgres with backups configured that I could deploy, I'd happily pay for that.
I tried to test it out as a CDN replacement for Cloudflare, but the workflow was a lot different. Instead of just using DNS to put it in front of another website and proxy the requests (the "orange cloud" button), I had to upload all the assets to Bunny and then rewrite the URLs in my app. It was kind of a pain.
I would have concerns around backups (ensuring that your backups are actually working, secure, and reliable seems like potentially time intensive ongoing work).
I also don't think I fully understand what is required in terms of security. Do I now have to keep track of CVEs, and work out what actions I need to take in response to each one? You talk about firewall rules; I don't know what is required here either.
I'm sure it's not too hard to hire someone who does know how to do these things, but probably not for anything close to the $50/month or whatever it costs to run a hosted database.
Sure, any regular SME can just install Postgres or MySQL without much setup beyond running `mysql_secure_installation` and creating a user with a password and an 'app' database. But you may end up with 10-20 database installs you need to back up, patch, and so on every once in a while. And companies value that.
What is the upgrade path?
How often do they release?
Do I have to worry about CVEs?
Who is doing network security?
Who is testing that security?
Where are my credentials stored?
Do I have a dashboard that tracks the hundreds of resources I'm responsible for including this new one?
> Plus you have very stable costs each month
I'm sick and tired of managing Linux boxes. It simply doesn't scale in any reasonable way.
Just compare the most recent commits from LibSQL: https://github.com/tursodatabase/libsql/commits/main/
To those of SQLite: https://sqlite.org/src/timeline
One of these looks like a healthy and actively maintained project. The other isn't quite dead, but it's limping along.
And Cloudflare is an american company.
(Context: <https://xkcd.com/1871/>.)
It is not. You can provision a free Postgres instance with a single click: https://neon.new/
And tell me how easily you can achieve this "out of the box"
If you don't care about business continuity or high availability, then everything gets easier
> And neither MySQL nor Postgres ever seems to go down; they're super reliable and stable
The box they're on goes down
Don’t want to babysit your app database on a VM but not willing to pay the DBaaS tax either? We're building a third way.
Today, we’re launching Bunny Database as a public preview: a SQLite-compatible managed service that spins down when idle, keeps latency low wherever your users are, and doesn’t cost a fortune.
It’s become clear by now that the DBaaS platforms that garnered the love of so many devs are all going upmarket. Removing or dumbing down free tiers, charging for unused capacity, charging extra for small features, or bundling them in higher tiers — you already know the drill.
Hard to blame anyone for growing their business, but it doesn’t feel right when these services stop making sense for the very people who helped popularize them in the first place.
So where does that leave you?
Not every project needs Postgres, and that’s okay. Sometimes you just want a simple, reliable database that you can spin up quickly and build on, without worrying it’ll hit your wallet like an EC2.
That’s what we built Bunny Database for.
What you get:
Get the full tour including how to connect Bunny Database to your app in this quick demo from our DX Engineer, Jamie Barton:
You probably optimize the heck out of your frontend, APIs, and caching layers, all for the sake of delivering an experience that feels instant to your users. But when your database is far away from them, round-trip time starts to add noticeable latency.
The usual fix is to introduce more caching layers, denormalized reads, or other workarounds. That’s obviously no fun.
And when you think about it, devs end up doing this because the popular DBaaS platforms are usually either limited, complex, or too costly when it comes to multi-region deployments. So what looks like a caching problem is actually a data locality issue.
OK, but how bad can it really be?
To find out, we ran a read latency benchmark and measured p95 latency in Bunny Database.
We picked a number of regions across the world and compared round-trip time for client locations ever farther away from the database in:
Turns out serving reads close to clients reduced latency by up to 99%.
Check out the full write-up on the benchmark setup and results here.
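For reference, a p95 figure like the one above is typically the 95th percentile of the raw round-trip samples. Here's a minimal nearest-rank sketch (assumed methodology; the linked write-up describes the actual benchmark setup):

```javascript
// Nearest-rank p95: the value below which 95% of samples fall.
function p95(samplesMs) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.95) - 1];
}

// 100 round-trip samples of 1..100 ms: the p95 is 95 ms.
const samples = Array.from({ length: 100 }, (_, i) => i + 1);
console.log(p95(samples)); // 95
```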

While this definitely matters most to apps with global users, data locality does apply to everyone. With Bunny Database, you don’t have to stick to major data center locations and compensate with caching workarounds any more. Instead, you get a lot of flexibility to set up regions in an intuitive interface and it’s easy to switch things up as your requirements change.
Choose between 3 deployment types when creating a database:

All of this lets you start wherever you’d like and add regions as needed, without re-architecting your app.
In the database world, capacity-based pricing gives you some predictability. But no one likes to pay for unused capacity, right?
Serverless, on the other hand, is supposed to be cost-efficient, yet can rack up bills quickly, especially when the DBaaS charges significant markups on top of already pricey compute.
We don’t do hyperscalers, though, so we can charge a fair price for Bunny Database in a usage-based model.
During the public preview phase, Bunny Database is free.
Bunny Database wouldn’t be possible without libSQL, the open-source, open-contribution fork of SQLite created by Turso.
We run Bunny Database on our own fork of libSQL, which gives us the freedom to integrate it tightly with the bunny.net platform and handle the infrastructure and orchestration needed to run it as a managed, multi-region service.
What does this mean for Bunny Database’s upstream feature parity with libSQL and SQLite, respectively?
The short answer is that we don’t currently promise automatic or complete feature parity with either upstream libSQL or the latest SQLite releases.
While libSQL aims to stay compatible with SQLite’s API and file format, it doesn’t move in lockstep with upstream SQLite. We wouldn’t expect otherwise, especially as Turso has shifted focus from libSQL toward a long-term rewrite of SQLite in Rust.
For Bunny Database, this means that compatibility today is defined by the libSQL version we’re built on, rather than by chasing every upstream SQLite or libSQL change as it lands. We haven’t pulled in any upstream changes yet, and we don’t currently treat upstream parity as an automatic goal.
That’s intentional. Our focus so far has been on making Bunny Database reliable and easy to operate as a service. We think bringing in upstream changes only makes sense when they clearly improve real-world use cases, not just to tick a parity checkbox.
If there are specific libSQL features you’d like to see exposed in Bunny Database, or recent SQLite features you’d want us to pull in, we’d love to hear about it. Join our Discord to discuss your use cases and help shape the roadmap!
Speaking of the roadmap, we don’t stop cooking. Here’s what’s coming up next:
There’s even more to come, but it’s too soon to spill the beans yet, especially while we’re in public preview. We’d love to hear your feedback, so we can shape what ships next together.
Bunny Database works standalone and fits right into your stack via the SDKs (or you can hook up anything using the HTTP API). But it also plays nicely with Bunny Edge Scripting and Bunny Magic Containers.
To connect your database to an Edge Script or a Magic Containers app, simply go to the Access tab of the chosen database and click Generate Tokens to create new access credentials for it.
Once they’re generated, you’ll get two paths to choose from:
After you complete the setup, the database URL and access token will be available as environment variables in your script or app. Use them to connect to your database:
import { createClient } from "@libsql/client/web";

const client = createClient({
  url: process.env.DB_URL,
  authToken: process.env.DB_TOKEN,
});

const result = await client.execute("SELECT * FROM users");
You can find more detailed, step-by-step integration instructions in the docs:
We can’t wait to see what you’ll build with Bunny Database and what you think of it. During the public preview phase, you get 50 databases per user account, each capped at 1 GB, which we hope is more than enough for lots of fun projects.
Just sign in to the bunny.net dashboard to get started. Happy building!
It's a similar process to Cloudflare. Point the NS to them and enable the proxy for a domain or subdomain.
They even support websockets.
What they can't do is the Tunnel stuff, or at least fake it. I have IPv6 servers, and I can't have the IPv4 Bunny traffic go to the IPv6-only origins.
Asking because I was looking at both Cloudflare and Bunny literally this week...and I feel like I don't know anything about it. Googling for it, with "hackernews" as keyword to avoid all the blogspam, didn't bring up all that much.
(I ended up with Cloudflare and am sure that for my purposes it doesn't matter at all which I choose.)
> When S3 compatibility is enabled (currently in beta), the number of available replication points is reduced
I assume it's a private beta.
https://docs.bunny.net/storage/storage-tiers#s3-compatibilit...
Cloudflare, Fly.io litestream offerings and Turso are pretty reasonably priced, given the global coverage.
AWS with Aurora is more expensive for sure and isn’t edge-located if I recall correctly, so you don’t get near-instant propagation of changes on the edge
The bigger thing for me is how much control you have. So far with these edge database providers you don’t have a ton of say in how things are structured. To use them optimally, I have found it works best if you are doing database-per-tenant (or customer) scenarios or using it as a read / write cache that gets exfiltrated asynchronously.
And that flexibility, I believe, is where the real cost factors come into play
I've been had :(
Even as a managed service, Postgres DBaaS still tends to push users into capacity planning, instance tiers, and paying for idle headroom. Using a SQLite-compatible engine lets us offer a truly usage-based model with affordable read replication and minimal idle costs.
Hopefully it will be fixed soon.
Also, not sure about now, but historically Turso didn't have the best uptime.
I was testing IPv6 origin support (they don’t support it), and they billed me $2 for a couple of test requests. I was testing at the end of the month.
With other providers, this would have cost only a few cents.
So? Not everyone needs 99.999999% availability.
Has this situation changed?
(don't use CNAME flattening with DNS-routed CDNs like Bunny, though; if you must use an apex domain, then use the CDN's integrated nameservers)
- The free CDN is basically unusable with my ISP Telekom Germany due to a long-running and well documented peering dispute. This is not necessarily an issue with Cloudflare itself, but means that I have to pay for the Pro plan for every domain if I want to have a functioning site in my home country. The $25 per domain / project add up.
- Cloudflare recently had repeated, long outages that took down my projects for hours at a time.
- Their database offering (D1) had some unpredictable latency spikes that I never managed to fully track down.
- As a European, I'm trying to minimize the money I spend on US cloud services and am actively looking for European alternatives.
> This feature is currently in the closed beta stage. It is not available for use currently, but it's expected to be in the near future. We appreciate your interest in it and will mark your ticket so we can notify you when it's available.
I've been trying out Bunny recently and it looks like a very viable replacement for most things I currently do with Cloudflare. This new database fills one of the major gaps.
Their edge scripting is based on Deno, and I think is pretty comparable to e.g. Vercel. They also have "magic containers", comparable to AWS ECS but (I think) much more convenient. It sounds from the docs like they run containers close to the edge, but I don't know if it's comparable to e.g. Lambda@Edge.
What is the problem with doing that?
Bunny has a similar concept: https://bunny.net/edge-scripting/
Serverless, managed databases and even multicloud won't save you. You'll still have to be on call.
Don't want to be on call? Design your stuff so it works local first.