Are the other providers (GitLab, CircleCI, Harness) offering much better uptime? Saying this as someone who's been GH-exclusive since 2010.
I am able to access api.github.com at 20.205.243.168 no problem
No problem with githubusercontent.com either
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
Today, when I was trying to see the contribution timeline of one project, it didn't render.
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
It has been a pretty smooth process, although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions running experience much more closely in line with GitHub's environment (VM, rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
[0]: https://lithus.eu, adam@
[1]: https://codeberg.org/forgejo/discussions/issues/440
PS. We are also looking at offering this as a managed service to our clients.
Github is down so often now, especially Actions, that I am not sure how so many companies are still relying on them.
(Actually there are 3 I'm currently working on, but 2 are patched already; still closing the feedback loop though.)
I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.
I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.
If they don't get their ops house in order, this will go down as an all-time own goal in our industry.
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up and what kinds of actions they are planning to take to fix this issue that's been going on for years, but has definitely accelerated in the past year or so.
Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.
I am _really_ enjoying Worktree so far.
"A better way is to self host". [0]
Simple: the US stopped caring about antitrust decades ago.
Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC
But I saw it appear just a few minutes ago, it wasn't there at 16:10 UTC.
Github isn't the only source control software in the market. Unless they're doing something obvious and nefarious, it's doubtful the justice department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, Gitlab, SVN, CVS, Fossil, DARCS, or Bazaar.
There's just too much competition in the market right now for the govt to do anything.
It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m
I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.
I don't think there is any extra value in paying a git hosting SaaS more for a single user than I pay for a DO droplet that served (at peak) 20 users.
----------------------
[1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.
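For the curious, that loop can be as small as something like the following; this is only a sketch, and the repo path, branch name, and `make test` target are my assumptions, not the commenter's:

```sh
#!/bin/sh
# Poor man's CI: poll the remote, rebuild when main moves, sleep 60 between runs.
# Assumed layout: a clone at ~/ci/myrepo tracking origin/main, Makefile with a "test" target.
cd ~/ci/myrepo || exit 1
while true; do
  git fetch origin
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
    git merge --ff-only origin/main &&
      make test > "../ci-$(date +%s).log" 2>&1 ||
      echo "CI run failed, see latest log" >> ../ci-failures.log
  fi
  sleep 60
done
```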
Hosting .git is not that complicated of a problem in isolation.
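True; at its most basic it's just a bare repository on a machine you can SSH into. A minimal sketch (the hostname, user, and paths here are made up for illustration):

```sh
# On the server: create a bare repository (no working tree, just the .git data).
ssh git@myserver.example 'git init --bare ~/repos/project.git'

# On each client: point a remote at it and push/pull over SSH.
git remote add origin git@myserver.example:repos/project.git
git push -u origin main
```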
They literally have the golden goose, the training stream of all software development, dependencies, trending tool usage.
In an age of model providers trying to train their models and keep them current, the value of GitHub should easily be in the high tens of billions or more. The CEO of Microsoft should be directly involved at this point; their franchise is at risk on multiple fronts now. Windows 11 is extremely bad. GitHub is going to lose their foundational role in modern development shortly, and early indications are that they hitched their wagon to the wrong foundational model provider.
EDIT: You mention this with archive.org links! Love it! https://mrshu.github.io/github-statuses/#about
All the more reason why they should be sliced and diced into oblivion.
The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
A few people have replied to you mentioning Codeberg, but that service is intended for Open Source projects, not private commercial work.
That doesn't normally happen to platforms of this size.
One solution I see is (e.g.) an internal forge (Gitlab/gitea/etc) mirrored to GH for those secondary features.
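The mirroring half can be a cron job or a post-receive hook on the internal forge; a rough sketch, with the remote name and URL as placeholders:

```sh
# One-time: add GitHub as a second, push-only remote.
git remote add github git@github.com:example-org/example-repo.git

# From cron or a post-receive hook: push every branch and tag, pruning deleted refs.
git push --mirror github
```

If I remember right, Forgejo/Gitea can also do this natively via per-repo push mirror settings, which tends to be less fiddly than hand-rolled hooks.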
Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.
[1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...
That's not at all how you measure uptime. The per area measures are cool but the top bar measuring across all services is silly.
I'm unsure what they are targeting; it seems across the board it's mostly 99.5%+ with the exception of Copilot. Just doing the math, 3 (independent, which I'm aware they aren't fully) 99.5% services bring you down to an overall "single 9" 98.5% healthy status, but it's not meaningful to anyone.
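(That's just 0.995 × 0.995 × 0.995 ≈ 0.985: all three services are simultaneously healthy only about 98.5% of the time, assuming independence.)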
Just add a new git remote and push. Less so for issues and pulls, but at least your dev team/CI doesn't end up blocked.
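Concretely, something like the following, with the remote name and URL as placeholders:

```sh
git remote add fallback git@forge.example.com:team/project.git
git push fallback --all
git push fallback --tags
```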
That is what that feature does. It imports issues and code and more (not sure about "projects", don't use that feature on Github).
It's extra galling that they advertise all the new buzzword laden AI pipeline features while the regular website and actions fail constantly. Academically I know that it's not the same people building those as fixing bugs and running infra, but the leadership is just clearly failing to properly steer the ship here.
Mirroring is probably the way forward.
Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.
You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?
Distributed source control is distributable.
Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802
Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.
I'm curious how other maintainers maintain productivity during GH outages.
Edit- oh you probably meant an alternative to GitHub perhaps..
Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.
There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's for sure not perfect, but there was also a 95% chance that if you had re-run the job, it would run and not fail to start. Another one is about notifications being late. I'm sure all the others have similar issues people notice, but nobody writes about them. So a simple "too many incidents" does not make the stats bad; only an actually unstable service does.
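(That 95% is just the flip side of the 4.7%: a single re-run has roughly a 1 - 0.047 ≈ 95.3% chance of starting within 5 minutes, and if the failures were independent, the odds of two attempts both stalling would be around 0.047 × 0.047 ≈ 0.2%.)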
Radicle is the most exciting out of these, imo!
Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC
"Whoops, now that one is nuked too. You have any more backups I can practice my shell commands on?"
I doubt policymakers in the early 1900s could have predicted the impact of technology and globalization on the corporate landscape, especially vis a vis “vertical integration”.
Personally, I think vertical integration is a pretty big blind spot in laws and policies that are meant to ensure that consumers are not negatively impacted by anticompetitive corporate practices. Sure, “competition” may exist, but the market activity often shifts meaningfully in a direction that is harmful to consumers once the biggest players swallow another piece of the supply chain (or product concept), and not just their competitors.
There was just a recent case with Google to decide if they would have to sell Chrome. Of course the Judge ruled no. Nowadays you can have a monopoly in 20 adjacent industries and the courts will say it's fine.
Not really. It's a network effect, like Facebook. Value scales quadratically with the number of users, because nobody wants to "have to check two apps".
We should buy out monopolies like the Chinese government does. If you corner the market, then you get a little payout and a "You beat capitalism! Play again?" prize. Other companies can still compete but the customers will get a nice state-funded high-quality option forever.
Does this mean you are only half-sarcastic/half-joking? Or did I interpret that wrong?
I’m guessing they’re regretting it.
But the inward-looking point is correct: git itself is a distributed technology, and development using it is distributed and almost always latency-tolerant. To the extent that github's customers have processes that are dependent on services like bug tracking and reporting and CI to keep their teams productive, that's a bug with the customer's processes. It doesn't have to be that way and we as a community can recognize that even if the service provider kinda sucks.
As an alternative, I was thinking mainly of a secondary repo and CI in case Github stops being reliable, not only because of the current instability, but as a provider overall. I'm from the EU and recently I catch myself evaluating every US company I interact with, and I'm starting to realize that this might not be the only risk vector to consider. Wondering how other people think about it.
Edit: Nevermind, looks like they migrated to github since the last time I contributed
GitHub is under Microsoft’s CoreAI division, so that’s a pretty sure bet.
https://www.geekwire.com/2025/github-will-join-microsofts-co...
ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?
The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.
Copilot seems to be the worst offender, and 99% of people using Github likely couldn't care less.
Computers can produce spreadsheets even better and they can warm the air around you even faster.
The other change is a reluctance to break up companies. The AT&T breakup was a big deal. Microsoft survived its antitrust trial without being broken up. Tech companies can only be broken up vertically, but maybe the forced competition would be enough.
But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.
1. Stateful systems (databases, message brokers) are hard to switch back-and-forth; you often want to migrate each one as few times as possible.
2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.
3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you can reap the advantages of the new infra, and avoid maintaining two things.
---
All that said, GitHub is doing something wrong.
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding the problem in one system and then fix it
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding a new problem that was created while you were fixing the last problem
* Repeat ad nauseam
Migrating iteratively gives you a foundation to build upon with each component.
Business by spreadsheet is super hard for this reason - if you try to charge the maximum you can before people get angry and leave then you're a tiny outage/issue/controversy/breach from tipping over the wrong side of that line.
Pages and Packages completed in 2025.
Core platform and databases began in October 2025 and are in progress, with traffic split between the legacy Github data center and Azure.
All kinds of companies lose millions of dollars of revenue per day, if not per hour, if their sites are not stable: Apple, Amazon, Google, Shopify, Uber, etc.
Those companies have decided the extra complexity is worth the reliability.
Even if you're operating a tech company that doesn't need to have that kind of uptime, your developers probably need those services to be productive, and you don't want them just sitting there either.
Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.
I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...
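For what it's worth, git supports exactly that workflow out of the box; the addresses and branch below are placeholders, and this assumes git send-email is configured:

```sh
# Export the reviewed branch as mail-ready patches with a cover letter.
git format-patch origin/main --cover-letter -o outbox/

# Mail them to the reviewer, CC'ing whoever owns the audit record.
git send-email --to=reviewer@example.com --cc=audit@example.com outbox/*.patch
```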
Good news! You can't create new PRs right now anyway, so they won't pile up.
* writing endless reports and executive summaries
* pretending to know things that they don't
* not complaining if you present their ideas as yours
* sycophancy and fawning behavior towards superiors
If a company can build a monopoly (or oligopoly) in multiple markets, it can then use these monopolies to build stability for them all. For example, Google uses ads on the Google Search homepage to build a browser near-monopoly and uses Chrome to push people to use Google Search homepage. Both markets have to be attacked simultaneously by competitors to have a fighting chance.
I have cleaned up more than enough of them.
I'm grateful it arrived, but two and a half hours feels less than ideal.
Not on the 2-4 hour latency scale of a GitHub outage though. I mean, sure, if you have a process that requires the engineering talent to work completely independently on day-plus timescales and/or do all their coordination offline, then you're going to have a ton of trouble staffing[1] that team.
But if your folks can't handle talking with the designers over chat or whatnot to backfill the loss of the issue tracker for an afternoon, then that's on you.
[1] It can obviously be done! But it's isomorphic to "put together a Linux-style development culture", very non-trivial.
The inertia is not permanent.
> Those companies have decided the extra complexity is worth the reliability.
Companies always want more money and yes it makes sense economically. I'm not disagreeing with that. I'm just saying that nobody needs this. I grew up in a world where this wasn't a thing and no, life wasn't worse at all.
Does it handle queries, trigger CI actions, run jobs?
Of course, you need some way of producing test loads similar to those found in production. One way would be to take a snapshot of production, tap incoming requests for a few weeks, log everything, then replay it at "as fast as we can" speed for testing; another way would be to just mirror production live, running the same operations in test as run in production.
Alternatively, you could take the "chaos monkey" approach (https://www.folklore.org/Monkey_Lives.html), do away with all notions of realism, and just fuzz the heck out of your test system. I'd go with that, first, because it's easy, and tends to catch the more obvious bugs.
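A crude version of the replay idea, assuming requests were logged one per line as "METHOD PATH" and the test stack answers at test.internal.example (both assumptions of mine, not part of the comment above):

```sh
# Replay logged requests against the test environment as fast as curl allows,
# counting transport-level failures only (no response diffing).
fail=0
while read -r method path; do
  curl -s -o /dev/null -X "$method" "https://test.internal.example$path" || fail=$((fail+1))
done < requests.log
echo "replayed $(wc -l < requests.log) requests, $fail transport errors"
```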