Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.
Nope, “workforce reduction” thanks to AI again. This charade is getting boring.
Until I got to "One platform, three modes." my brain just pattern-matched "AI slop" and the entire post dissolved into meaninglessness for me.
I don't know if I can stop my mind reaching this conclusion. I'm sure someone at GitLab made some effort to carefully edit the post... But that it wasn't entirely rooted in a human who'd worked out how this stuff goes, but clearly had lots of AI writing it out... Just made my instinct go "this isn't worth paying attention to after all".
"The Machine Stops" by Forster [0], anyone?
Honestly, I can't believe how people repeatedly ignore, or simply don't know, the warning signs put up by those who came before.
Yes, it's science fiction, but so are 1984, Brave New World and Pump Six.
When will we go through something between 2001[1] and Tacoma[2]? Will we ever learn?
[0]: https://en.wikipedia.org/wiki/The_Machine_Stops
>The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it
>Operationally, we grew into a shape that was right for the last era and isn't right for this one
To meet their largest opportunity ever, they believe they need fewer resources. I'm not sure I understand how that follows.
>We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up
Is this another case of "we create code twice as fast, and the bottleneck is review, so YOLO, no more bottleneck"? I've yet to see a convincing justification for this. If anything, if you're going full throttle, that's all the more reason to keep your hands on the steering wheel, no?
That said, 8 layers of management is a lot of management, and every line of the message seems like leadership truly believes they are sinking in bureaucracy. Let's see how unneeded those 3 layers they're cutting were.
I'm aware that the defective code was not written by AI but nonetheless, GitLab is what stands between many small organizations and their most precious resources. I was fortunate that 2FA stopped the damage, but what's going to happen the next time? What if my organization is permanently damaged because we taught the machines to go fast and break things, too [1]?
[1] VPN is an option, but we're a non-profit with a number of non-technical users, so admittedly we're caught in a balance between security and making it harder for people to do things. As much as WireGuard is awesome, there's still a barrier.
This is what we were told
- "you should all use AI, we'll track usage and make that a performance goal"
- "you're strongly encouraged to incorporate AI in the products/service you develop"
- "we need quarterly roadmaps now, reflecting on your increased output
- "half of your team is laid off or redirected to work on improving AI models in a new division with experimental management methods (1:50 IC/manager ratio, performance evaluation by AI) and btw, be ready more layoffs and reorg to come"
At that point, every month there's some surprise, usually not a pretty one from the employees' point of view. At that stage, I'm completely demotivated. Not clear how all this is going to end.

I can't seem to get past this: all these decisions (and a workforce reduction :() are the result of a few days of pondering? I've had stomach aches that have lasted longer...
That's true, but it's interesting how FizzBuzz was said to be the bête noire of the average dimwitted software developer, and how much cutting-edge engineering organizations used to emphasize code in their recruitment processes.
If writing code is being replaced by "engineering judgement" it's going to need a much smaller cohort of developers. Too many opinions spoil the broth, after all.
[0] https://gitlab.com/gitlab-org/gitlab/-/work_items/588806
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Also notable that the workforce reduction they describe doesn't appear to target engineers - they're "nearly doubling the number of independent teams" in R&D and "removing up to three layers of management in some functions".
If investor fears are that AI makes GitLab's business less valuable, including the passage quoted above in their "GitLab Act 2" announcement makes a whole lot of sense.
Wrote a bit more about this on my blog: https://simonwillison.net/2026/May/11/gitlab-act-2/
Yes, letting some LLMs "plan, code, review, deploy" will surely improve the quality and depth of the innovation you ship.
Users want a product that delivers the value they're looking for; VCs are looking for infinite AI scale. These two do not meet. So founders need to present two different values and visions, one for customers and one for VCs.
In a small early-stage company you can pretty easily hide each side from the other, so you can deliver value to your customers while dancing the VC dance, but as you get larger it's harder.
I think founders will endure and VCs will calm down at some point, but there is going to be some suffering along the way.
Oh, and have you heard that they built Claude Code with only 20 people? (Ignore the 12-year head start of AI research expertise, and that Anthropic now has thousands of developers.)
Yeah, sure. A couple of years ago it was Covid overhiring.
You know the one thing that is never ever going to be given as a reason for layoffs? The growing salary-productivity gap.
Two big red flags here.
First git itself is distributed and built for scale.
I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
Are they going to rebuild git??
Secondly: a big rewrite of the monolith into services. For one, there is nothing wrong with a modulith. For another, a “rebuild” will cause a lot of busy work without immediate value for customers.
And first of all: this announcement is driven by the stock price, not AI. The productivity increase from AI is inflated because they want their stock price up.
Sell Gitlab stock while you can. The leadership team has no clue what they are doing.
Sadly, non-engineering leaders buy into this dogma. AI is very useful, but in my experience it doesn't 10x if you don't YOLO it.
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
If anyone at GitLab management is reading this: getting your microservices to run fully stateless in a Kubernetes cluster should be the #1 goal. No disclaimers about potential risk. It's been 5+ years. Get it together. Stop bolting on minor package-management features no one is going to end up using anyway.
I wish them the best of luck with that plan. Middle management is where the institutional knowledge sits on how to actually get shit done despite challenges & broken processes/systems.
It's an even worse plan than eliminating juniors.
Setting aside the whole "I'm not going to pretend otherwise" bit, which reads suspiciously like Claude, I don't understand how this is supposed to make employees feel any better. No one knows what's going on, and through talking we'll figure it out? Mmmmmmhmmmmmm.
- when you see the word substrate in corporate speak, you know where that’s from…
> Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces
Is there any broader consensus or information on this? Git doesn't scale and is being rebuilt for agents?! Monoliths are out and services are back? Humans are second-class citizens now (human-shaped interfaces - bad!!)?
What the hell are they planning to do in there at Gitlab?!
New values: Speed with Quality, Ownership Mindset, Customer Outcomes.
In other words, work harder, not smarter, and no more DEI.
Could someone explain it?
If you have a lot of new stuff to build, and if you're not currently losing money, why start a new initiative with a layoff?
They seem to be mostly reducing headcount of managers and claim (supposedly) to be prioritising engineering.
On top of that their redesign sounds interesting - they want to adapt the platform itself (and concept) to deal specifically with how AI "users" will code and submit changes (and the rate of and interaction of that model) vs humans. We'll see how this plays out but this doesn't sound like a bad idea to me at all (assuming humans of course still get priority).
Yes, and the people who are all-in on agentic AI are, in practically every example I’ve seen, not that. They’re the jackasses giving Claude root access to their prod DB and then writing a blog post about how much they’ve learned from their mistake.
Gogs https://gogs.io/ (Gitea started as a fork of this, btw)
Forgejo https://forgejo.org/
Self hosted or cloud hosted. Also excluding Github because, please just fracking don't.
"We're firing a bunch of people because we think we don't need them anymore due to AI and we'll make more money without them."
There are times when businesses must fire people to stay afloat and it's a business that objectively needs to exist. This isn't one of them, so don't waste everyone's time with your BS, please.
I guess someone will be selling enterprises something that lets them say, "We're doing AI too!" Might as well be gitlab?
Email me subject “gitlab” if interested - thomas@ our domain (I am the cofounder)
Reducing the workforce by 30%? I don't know, dude, you didn't convince me.
I once found a looooong bug report thread on their issue tracker, 7-ish years old, that had all the usual waves of promises that a fix might make the next release, then silence, then repeat, and the usual challenges to the bug's status every time a release happened. Community members correctly diagnosed the problem in the first couple of years; then, by about year 5, there was a (small!) patch posted by a community member, with multiple posters confirming it was good and fixed the issue, that the author and others had been begging Google to apply and get into a release for a couple of years. There'd been no responses from Google folks for a while.
That might be the worst one I saw, but encountering something like that was a few-times-per-year thing in my android app dev years.
Still. Not a huge fan of this announcement or the general ways the landscape is evolving these days.
I would love to help a non-profit, so I'm curious: what are your thoughts on Authentik/Authelia and the like? Might they help in any of the use cases you're suggesting? I'd love to have a more in-depth discussion!
Also, thanks for working at a non-profit. I'm not entirely sure what yours is about, but thanks again to you and all the other hard-working people at non-profits working for a better world!
Having said that, UI gripes aside, it works fine as a less complicated replacement for github.
Also their diffing: they use "..." (three-dot) diffing, and ".." (two-dot) apparently isn't available in their GUI. For a git diffing tool I found this very odd.
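For reference, the two notations compare different things: two-dot diffs the two tips directly, while three-dot diffs from the merge base of the two refs to the second ref. A throwaway-repo sketch (branch and file names invented for illustration):

```shell
# Build a tiny repo with two diverged branches.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo
echo base > f && git add f && git commit -qm base
git checkout -qb feature
echo feature >> f && git commit -qam "change on feature"
git checkout -q main
echo main >> f && git commit -qam "change on main"

# Two-dot: diff between the tips of main and feature (shows both sides' edits).
git diff main..feature
# Three-dot: diff from merge-base(main, feature) to feature,
# i.e. "what did feature change since it diverged?".
git diff main...feature
```

Three-dot is arguably the right default for reviewing a merge request (you only want the branch's own changes), which may be why a forge UI exposes it alone, but for ad-hoc comparisons two-dot is often what you mean.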
I think that as a corporation promoting the use of AI, they should actually be AI users themselves. They should just rewrite that laggy UI in Svelte, Solid, or even vanilla JS. Any of those would work.
That's how I interpret the move, too.
There's a lot of cool stuff happening between Gitea/Forgejo, Tangled, and Radicle, but I doubt the latter two have any significant usage beyond OSS hobby projects. I'm not sure if the former two do, either.
> GitLab’s six core values are Collaboration, Results for Customers, Efficiency, Diversity, Inclusion & Belonging, Iteration, and Transparency, and together they spell the CREDIT we give each other by assuming good intent. We react to them with values emoji and they are made actionable below.
Since those terms don't speak for themselves individually, it's worth seeing what they're supposed to mean to get a sense of what GitLab is forsaking now. Each section is actually pretty lengthy, so you should go look and skim for yourself.
Here's the page: https://handbook.gitlab.com/handbook/values/
And here's an archive from yesterday, for when that changes: https://web.archive.org/web/20260510150031/https://handbook....
One of the really interesting things about GitLab was that not only did they have employees in a large number of countries but they also published their employee handbook which helped show quite how much work it was to support that:
https://handbook.gitlab.com/handbook/people-group/employment... lists 18 countries right now. I guess they're losing 5 of those.
Here's a permalink to the current version of that page https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/... since it mentions that "Diversity, Inclusion & Belonging is one of our core values" and so is likely to be updated pretty soon!
They even used to have a public payroll.md page detailing how payroll worked in multiple countries - they moved that into their private docs a few years ago but the last public version is here: https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/...
UPDATE: I got the countries piece wrong. The linked OP says:
> Reduced operational footprint: We’re reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.
I said they operated in 18 countries, so clearly my impression was outdated and incorrect.
Also "We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer" suggests to me that it's a 30% cut to countries with "only a handful of people", not a 30% cut to countries overall.
Gitlab is a terrible company, period.
GitHub is publicly destroying itself in a desperate attempt to realize Microsoft's AI dreams, and as its main competitor your response is... to do the same?
Rather than going for a "Humans first, robot assistants welcome" approach which promises to deliver things like stability, reliability, trustworthiness, and human connections, they decide to go all-out on firing the humans and letting bots handle things like code review while explicitly shifting the existing human-first company values towards making the remaining humans responsible for the bot's mistakes.
They could've chosen to market themselves as the sane safe haven for the GitHub exodus. Instead they chose to go down in history like Google abolishing "Don't be evil". But hey, I bet chanting "AI! AI! AI!" (albeit quite late to the game) will deliver a very solid lukewarm increase in shareholder value!
What hope do slop-maker users have, then?
You have never interacted with Jira?
Seems like a fair assessment. Maybe they should start by getting rid of the people who put that structure in place?
At GitLab's team size, that means every manager has 2-3 reports? Yeah, I'd be cutting layers too.
What is this based on? The only thing I can think of is AI coding tools but only a few companies do it properly. I don't see gitlab capturing any of that spending
Also the whole "removing layers". Today's prof g market video was about the topic. Afaik it was the Coinbase CEO telling the same. Do these people get together to discuss their talking points? Or are they signalling to investors?
GitLab's "internal" workings are surprisingly public, so you can just look at the git history yourself: https://gitlab.com/gitlab-com/content-sites/handbook/blob/ma...
Plenty of time to whip up a dead man's switch.
Productivity gains can also be achieved by reducing scope. The coming issue is that, because of increased productivity (idea -> working code), software gets too bloated and does too much, because product managers can and will say "yes" to everything. Until it becomes unmanageable.
And that's not a new problem, it's what basically every programming adage / wisdom going back 70 years is about.
Have any of the companies that went all in on AI gotten better at their jobs because of it?
So nothing really changes in terms of product development velocity, it’s just headcount reduction.
But that’s not what their own marketing strategy communicates.
I've noticed that the more a company pushes on ownership the more difficult it is to actually execute it.
Are they going to rectify this by laying these people off?
Source: I'm ex-GitLab
> The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
There's no good way to execute layoffs; my preference would be to rip off the band-aid. What use is doing it in the open, unless they plan on having gladiatorial matches to keep your job? Otherwise it's just a painful game of Duck Duck Goose.
Like, I know there are actual reasons and incentives here for the ever-present AI pivot. But I think they're stupid and short-sighted incentives.
On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
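For what it's worth, even a non-LLM baseline for the duplicate-finding half is tiny: compare a new issue's text against existing titles with bag-of-words cosine similarity, and an embedding model or LLM would simply replace the similarity function. A hypothetical sketch (issue titles and threshold invented for illustration):

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity over lowercased word counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def likely_duplicates(new_issue: str, existing: list[str], threshold: float = 0.5) -> list[str]:
    """Return existing issues similar to the new one, most similar first."""
    scored = [(cosine(new_issue, e), e) for e in existing]
    return [e for score, e in sorted(scored, reverse=True) if score >= threshold]

existing = [
    "CI pipeline hangs when runner loses network",
    "Merge request diff rendering is slow on large files",
    "Pipeline hangs after runner network loss",
]
print(likely_duplicates("pipeline hangs when the runner loses network", existing))
```

Word overlap obviously misses paraphrases, which is exactly where an embedding-based similarity would earn its keep; the triage plumbing around it stays the same.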
> GitLab has at most eight layers in the company structure (Associate/Intermediate/Senior, Manager/Staff, Senior Manager/Principal, Director/Distinguished, Senior Director, VP/Fellow, Executives, Board).
> [...] You can skip layers but you generally never have someone reporting to the same layer (Example of a VP reporting to a VP).
So they're counting the board of directors as a layer above the CEO.
I'm speculating, but they probably also have an unbalanced tree - you'll often see the IT security chief reporting directly to the CEO (because it's important to keep on top of, and they need authority to do their job) but only having 50 people below them in the org chart.
In some corporations you also sometimes get almost-nonexistent ranks created to smooth over a reorganisation. If a level 5 bureaucrat decides to merge the departments of two of their level 4 bureaucrats, they could demote one of them. Or they could make one into a level 4.5 bureaucrat.
None of these visionaries and thought leaders have ever had an original idea in their lives; they just ape each other.
It’s not clear at all this is the wrong move.
Having to rewrite all my CI will suck but will be worth it.
There are different dimensions to "scale": handling large monorepos, orders of magnitude more commits, tighter latency requirements (for agentic use, e.g. agentic history navigation)...
This one stood out to me:
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
It makes you produce 10x the errors if you YOLO it ;) especially at a scale even remotely comparable to GitLab's :/
Doesn't really inspire the greatest confidence when they are literally dropping the ball on one of their greatest opportunities, just as GitHub is being ensloppified.
Sometimes I wonder if I'm more passionate about my $7/yr VPSes and the websites running on them than multi-billion-dollar companies are about their products (GitLab has a market cap of $4.36 billion and an enterprise value of $3.10 billion [0], to be exact).
Move fast and break things should work when you have 1,000 users on your website, not 1,000 full-on enterprises (probably more for GitLab).
> I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
> Are they going to rebuild git??
These comments make me realize again how you all (those who were alive then, I mean) must have felt during the pets.com and dotcom mania. Some of these sentences read almost like Onion video titles. It's all so weird at a certain point. I'm unsure how to feel about this.
My manager has started speaking like this. He showed a slide recently which had the words AI and Quantum nearby
Were I to have crafted this post, it would have included things like
"We ask our employees, customers and investors time to prove ourselves to you again as we re-commit to listening to our stake-holders and ensure our organization is properly re-positioned to execute our continued plans to deliver the best possible service..."
But instead it comes across as "someone read an article about Amazon's two-pizza team rule and we figured there were worse things to try."
> Interpersonal excellence: individuals who are good humans, embrace diversity, inclusion and belonging, assume good intent and treat everyone with respect
Really? In my experience it's the rank-and-file employees who have this knowledge of how to get on with it without ceremony and politics. And the broken processes and politics are created BY the middle managers.
For some people it might actually be worth it, not to solve anything but to talk to someone. It still sucks anyway.
The ball is right there, bouncing alone in front of the goal, and they just have to position themselves as "we're the stable ones" to score that market when the exodus inevitably happens.
Nope, full throttle and stimulants, just because.
Having been in some of these values meetings, I really imagine it went like this: someone wanted speed, and someone else wanted quality. Sorry, I mean Speed and Quality. Many people said there is a tradeoff between those two things, and only one thing can be first.
Some brilliant businessman: "I know, we'll combine them. We want Speed _and_ Quality." Thus, "Speed with Quality." Tada!
Values are a tradeoff: only one thing can be first. Trying to duck that is stupid.
My guess is they are doing this to prep for an acquisition. Probably by an AI company or Datadog or similar.
I never really got why they need to be a public company in the first place.
If GitLab thinks they are as famous as GitHub, I don't know what to say. They should have at least positioned themselves as a better GitHub alternative.
bottom level teams are merged to form larger teams.
They simply don't (or didn't) have the skills to scale. They were talking about using Ceph to run things (which gives you an idea of how green their infra team was).
Models will only get better with time, not worse.
Demand will keep rising.
They don't cause the broken processes. They are the symptom of a broken executive process. A fish rots from the head down, and the people at the top get exactly the kind of company that they ask for.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
We've been working through some significant changes inside GitLab over the past few days, and I want to share them with you directly. The email I sent the team is included below for full context.
The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it.
This letter has three parts. First, the operational and structural news, which is hard. Second, the strategic thesis we're betting on. And finally, what this means specifically for you, our customers and investors.
This morning we shared with team members that we're beginning a restructuring process at GitLab, and we're running it differently than most. The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it. Where we can, we plan to finalize the new shape of the company on or before June 1. Where local requirements apply we will not make any changes until the local process is complete.
Four operational changes are part of the workforce reduction.
Operational changes and the update to our strategy are happening together: they are related but independent. Operationally, we grew into a shape that was right for the last era and isn't right for this one. The strategy below is what we're betting on next, and stands on its own.
We are reaffirming our Q1 and full year FY27 guidance today. The final scope and financial impact of the restructuring will be shared on our June 2 earnings call, once we’ve finished the plan and received approval by our board.
Underpinning the changes we’re making today and our go-forward strategy are 10 core beliefs that span the world we’re building for, the architectural bets we’re making, and how we’ll deliver.
We’re evolving our strategy to optimize for the future state of software engineering:
Platforms that weren't built for machine scale are starting to break under it. Winning means investing in the fundamentals that really matter: security, performance, scalability, reliability and user experience. We're making five fundamental architectural bets. Each one is underway, and we plan to deliver without disruption to the GitLab customers who depend on us every day.
For our customers, the most important thing today is what doesn't change. The support, roadmap commitments, contractual terms — all of it continues without disruption. Your account team is available to walk you through today's news if you'd like a conversation.
Where you should expect to see us evolve is in the quality, depth and pace of innovation we ship. We will lead the way in agentic engineering by being customer zero of our platform, demonstrating with our innovation and our results the success you can bet on as our customers. Our vision for the product and business model is clearer than it has ever been and we're accelerating the work. We'll share the next wave of our innovation roadmap at GitLab Transcend on June 10, 2026 and hope you'll join us.
Today's announcement is a deliberate move to lead in a market we believe is in the middle of its largest shift in twenty years. The opportunity here isn't incremental growth on a DevSecOps platform — we're building toward becoming the trusted enterprise platform for software creation in the AI era.
We look forward to sharing an update on the business and our Q1 results in our upcoming earnings call on June 2, 2026. We’ll also share the final scope and financial impact of the restructuring at that time, although we anticipate reinvesting the majority of savings into accelerating our progress against the specific growth and technological initiatives that we've outlined.
This is the most consequential work we've taken on as a company. We'll prove it in the innovation we bring to market, how we serve our customers, and how we create value for our shareholders over the near- and long-term.
Thank you,
Bill Staples CEO, GitLab
A letter to our team.
Today is hard. I want to acknowledge how difficult today is given the volume of change we’re asking you to take in, and the uncertainty of a transparent restructuring process.
We've spent three days together on the why, the what, and the how of where GitLab is going. This letter is the written summary, so you have something to reflect on as we navigate the coming week together.
This restructure process is not like others you may be seeing in the news. Of course AI is changing the way we work and is part of our transformation plan, but this is not an AI optimization or cost cutting exercise. We intend to reinvest the vast majority of savings back into the business to accelerate our unique opportunity in the agentic era as defined in our Act 2 Core Beliefs.
One way our restructure process is different is that we are doing it transparently and including every team member in the process. Starting today, managers across the company are entering deeper conversations with leadership about how the restructuring principles land inside their teams. Those conversations will inform the decision of impacted roles. The reason we're not landing the full decision today is that getting the shape of the next GitLab right matters more than getting it fast — and a transparent process with input from you, your managers, leaders across the organization, and our employee representatives is the best way to land this change with an organization ready to move forward.
As we discussed today, we are planning a workforce reduction driven by a concentration of our country footprint, flattening how we're organized, and role right-sizing designed to optimize the shape and size of our teams. In addition, we’re establishing a new set of operating principles, founded on a culture of excellence.
I want to be direct: I want to do this once, and do it right, and not revisit our structure anytime in the foreseeable future. The team that comes through this restructure is the team that builds Act 2, and you should be able to plan your life and your work without bracing for what comes next. Let’s talk about what’s changing and how we get it right.
Reduced operational footprint: We’re reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.
Flatter organization: We’re flattening our organization because eight layers is too deep for a company our size, and management layers are slowing us down. Every layer of management increases the number of places where priorities and communication get filtered. A flatter organization will better connect every team member with leadership.
Role right-sizing: As we shift to a new strategy and way of working, powered by AI, we must revisit the size of staffing for each role to ensure we are optimizing for speed and customer outcomes. In some cases, AI can augment and accelerate what team members have been doing, in other places we need to expand certain roles to go faster. We do expect daily use of AI by every individual in the company and we are launching AI acceleration programs to support every role as part of our transformation.
We will be retiring CREDIT as our values framework. CREDIT was the right framework for the very successful Act 1 that took the company to $1B ARR. Those values shaped a company that thrived through COVID and our IPO to become one of the most recognized names in DevSecOps. We are not retiring them because they were wrong, we are choosing instead to focus on something different for this era which demands a different operating posture. Many of the same values we have been living and often talk about are still directly applicable in this era. Our three new operating principles are:
Speed with Quality means we move faster than we have, with the discipline that lets others rely on the work, especially our customers. We achieve this with smaller teams, tighter cycles, and stronger guardrails. We will hold a higher bar for what we commit to and what we deliver against those commitments. Here are some specific examples we shared today of what we expect every team member to embody:
Ownership Mindset means we expect every individual to act as a steward for the company and with autonomy. The people closest to the work make the decisions about it, and they own the result. Layers of management between leaders and the work, and handoffs that dilute accountability, are being eliminated. Some examples of the mindset we expect every team member to embody:
Customer Outcomes means we measure ourselves by what changes for the customer, not by the activity on our side. Internal milestones matter only to the extent that they connect to customer impact. Examples of behaviors we expect from everyone:
These are built on a culture of excellence, which we expect every team member to uphold. That means:
Our transparent restructure process creates uncertainty that is real and it's hard, and I'm not going to pretend otherwise. I ask that you reflect on the why, what and how and engage your manager in a real conversation about the work, the questions and concerns you have, and what the next chapter looks like for you. Your manager may not have all the answers, because they too are going through this period of uncertainty. The conversation still matters and your input shapes how we land as a team.
The voluntary window exists for you. After three days walking through Act 2 together, you have the picture you need to decide whether GitLab is the right place for you in the next chapter of your career. If it isn't, talk to your manager or director and, where local requirements allow, apply for a separation before May 18. If approved, we'll include you in the same separation package as anyone else. The approval process exists because individual circumstances and local requirements vary and have to be weighed case by case. This process is meant to provide something we all deserve once the restructure is complete: a team that is excited and committed to the future of GitLab. Please take a moment to listen to what Sid, our founder and Exec Chair, thinks about the changes we’re making today.
I want to spend the rest of this letter convincing you to stay, if the “Why” and the “What” sessions haven’t already convinced you.
Better employee experience. Our overriding objective is to bring a significant improvement to the joy and impact of each team member participating in Act 2. We know that by doing that, we can better capture the creativity and impact of every individual and build a world class business.
Better pay. Once approved, our new bonus program will give every team member who isn’t on an incentive compensation plan or bonus plan today, the opportunity to earn a cash bonus based on their individual performance, targeting 10% of salary, awarded at their manager’s discretion.
Smaller, empowered R&D teams with a clear vision. We aspire to double the number of smaller R&D teams - up to 60 - with more autonomy and ownership.
Less friction, less overhead. The handoffs that have slowed us down are going to be significantly reduced. The layers between you and the decisions that affect your work are being reduced. If you've ever been frustrated at GitLab by how long it took to get something obvious done, Act 2 is engineered around removing that friction.
Solve big technical problems. Our five architectural bets provide deep technical problems that will redefine GitLab for the agentic era, including a new git for agents that supports machine scale; an orchestration layer for humans, agents, and the full lifecycle; a connected graph of full-lifecycle data as a service; a brand-new policy service providing centralized governance; and a fully autonomous software engineering experience.
More flexible buying programs. Our new consumption buying programs will make it far easier to sell GitLab and for customers to buy GitLab seats + credits and unlock adoption faster than ever before.
Career growth. Bold bets like Act 2 are rare and bring with them opportunities for every team member at every level to learn faster and develop skills and experience that will matter for the rest of your career, here or wherever your path takes you.
Aligned leadership with the will to win. We have a leadership team, across the e-group and our SLT, that is committed to winning, making the hard decisions, and aligning the organization cross-functionally to accelerate results. We will hold ourselves accountable for helping you succeed and creating a winning organization.
Uniquely positioned to win. We are uniquely positioned to not only participate, but to lead in our category where the TAM is exploding at a step function rate. We have structural advantages in data, technology and customer trust that give us an advantage over AI labs and start-ups that we can harness to redefine how software is built in the agentic era. By being part of Act 2, you will be part of a winning organization that helps shape software engineering in the agentic era.
Whether by choice or otherwise: the work you did here mattered, and it continues to matter. You came to GitLab when it needed you. You built things the next chapter is built on. We owe you real support through the transition, and our genuine respect. If we're asking our team to be world-class, we have a reciprocal obligation to be world-class in how we treat people leaving us. That's the standard we're holding ourselves to.
–
I'll close with this. None of what I've written makes today easier. It isn't supposed to. What I want you to know is that we've made these decisions carefully, our intention is to make them only once, and we're going to do right by the people leaving and by the people staying.
Thank you for what you've built. Thank you for what comes next.
Bill Staples, CEO, GitLab
Every IC ought to use the present day as the opportunity to build a nimble competitor to their old employer (or whatever industry incumbents they want).
They're literally setting themselves up for this.
We've all heard the joke about two people running from a bear and only one has to be less eaten than the other.
This is a race to the bottom. We shall see who wins.
"So...you decided to throw away what distinguished you from your faster, more stable competitor?"
(Source: I build tooling around Claude Code and have spent hours swimming in the GitHub issues based on downstream user feedback)
So many things they could be doing, to make people buy into their services. For example they could simply run campaigns about how they promise to never use customer and user repositories for AI training. Or they could show better uptime statistics. Their CI language is better than Github's too.
If anyone gave me a choice between Gitlab and Github, I would go with Gitlab. But if I had additionally the choice to use Codeberg, I would choose that.
Maybe they are just not looking to grow. If they made such a statement, that would actually be a pleasant surprise. No hunger for "infinite exponential growth", just to impress investors? Great! That's a fat plus in my book!
I understand the meaning, however, in that they're well positioned by having the company name and domain name, same general way that non-technical people will pay wordpress.com to host their blog/small website because it's very easy, rather than DIYing it or paying a 3rd party.
Their pitch is not to you, the dev. But, to the investor class. We are in this funny place in the market where you can make more money by catering to the investor class than to customers. In other words, an upside down world.
Also "our velocity is 3x higher than it would be in the imaginary invisible universe where we made worse decisions 6 months ago" is impossible to measure, whereas "we cut a bunch of corners and shipped a piece of garbage on an arbitrary deadline" is very measurable.
Let's pick: Speed-Quality
Errrh... Let's forget about: Price
I wonder if they have 5-10 employees per manager at the bottom of the org chart, but a lot of middle managers and manager-like titles mixed through the middle.
Tons of middle management that makes no decisions whatsoever.
Every time you ask a question, they delegate, until you end up at person 1 again and they just can't decide anything.
It's like they all have decision paralysis.
If it scrapes HN, it works. Ironically, it's why I'm here.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
Gitlab pricing was bonkers. It always felt like their sales team were trying to play gotcha with us over the years with pricing schemes that would milk us for money.
I'm not saying you should never self-host your git server, but it's not for everyone.
Eight layers total
What Gitlab is announcing here is that employees need to apply for a separation, at a yet-to-be-determined time under still-unknown terms, without a guarantee of acceptance, in the next 7 calendar days. Much different and just so much worse.
The mediocre people who dread looking for a new job during a hidden recession aren't going to leave. They can't afford the risk of not being able to find a new place of employment before the severance pay runs out.
https://docs.github.com/en/enterprise-cloud@latest/admin/dat...
It's slow, large, excessively complex, and not that resilient to failure.
You either want a bunch of NFS machines backed onto ZFS on NVMe, with a central jumping-off point that allows sharding (this is critical, so that one or more NFS servers can fuck up without killing access to everything else),
Or, pay the money and use GPFS
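That central jumping-off point can be as simple as a stable hash that routes each repository to one shard, so a single failed NFS server only takes its own repos offline. A minimal sketch, with hypothetical hostnames and a function name of my own choosing:

```python
import hashlib

# hypothetical NFS shard hosts; in practice this list lives in config
SHARDS = ["nfs0.internal", "nfs1.internal", "nfs2.internal"]

def shard_for_repo(repo_path: str) -> str:
    """Deterministically map a repo path to one NFS shard.

    Using a hash of the path (rather than round-robin) means the
    mapping is stable across restarts with no lookup table, and a
    dead shard affects only the repos that hash to it.
    """
    digest = hashlib.sha256(repo_path.encode("utf-8")).digest()
    return SHARDS[digest[0] % len(SHARDS)]
```

The trade-off, of course, is that growing the shard list remaps repos; real systems use consistent hashing or an explicit routing table for that reason.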
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerge from network dynamics; they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
they do on the org level. that's not news for anyone who has worked at upper mgmt level in corporations. rule no.1 is you keep your mouth shut about anything there. and of course it's for economic reasons.. it's a business, not a charity to provide lifelong employment for employees who aren't aligned to mgmt goals. Mgmt tells stories depending on who asks. Levels below execute them (by identifying those who aren't aligned).
It's unlikely, but not impossible - model collapse means that the subsequent models would get worse over time, not better.
1. AI free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
Arguments against self hosting have to change as our SaaS overlords are decaying in front of our very eyes.
I have to regularly use Azure DevOps and the whole platform is painful, and now is rotting on the vine. I hear there is internal strife at Microsoft between Azure DevOps and GitHub products.
The GP miscalculated it.
If anyone has a VP-level position open, I'm willing to send you my resume. There is a salary level at which I am willing to do work entirely without shame.
You are obviously right and I see examples of it everywhere.
E.g I asked Claude opus 4.7 (the latest/greatest) the other day “is a Rimworld year 60 days?”. The reply (paraphrased) “No, a Rimworld year is 4 seasons each of 15 days which is 60 days total”.
Equally, it gets confused about what is a mod or vanilla since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
image: https://mataroa.blog/images/b5c65214.png
but it says that there are 3 e's in strawberry ;)
Now this is literally something which occurs because it is text autocomplete, and because of the inherent issues of token-based large language models. So you are literally right :D
My point is that AI can have its issues and it can have its plus points (just like text autocomplete but some suggest its on steroids)
The issue to me feels like we are hammering it in absolutely everything and anything, perhaps it should be used more selectively, y'know, like perhaps a tool?
I get self-hosting for security, compliance, and retention reasons, but for almost everything else it seems questionable for any use I would consider normal.
"Editions: There are three editions of GitLab:
GitLab Community Edition (CE) is available freely under the MIT Expat license. GitLab Enterprise Edition (EE) includes extra features that are more useful for organizations with more than 100 users. To use EE and get official support, please become a subscriber. JiHu Edition (JH) is tailored specifically for the Chinese market."
Personal opinion, but I think a great deal of the people who are presently overloading github with one person created vibe coded projects would be just fine with the "CE" feature set.
It's not that different from making it part of the process in the first place.
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on the vibe cooking, and it took her years to realize how dumb it was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
Neither of these groups are valuing long term expertise
The Maillard reaction is very possible in microwaves, but they use microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
See here for a fun write-up: https://www.lesswrong.com/posts/8m6AM5qtPMjgTkEeD/my-journey...
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the 60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with COBOL, as it was with programming in spreadsheets in the 80s, as it was with the nocode movement in the 00s, as it is now again with LLMs in the 20s, and as it will be again with a future generation in the 40s.
---
> As is the ability to write long form text, and be so hard to distinguish from real that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
The capabilities we've seen are:
- Text prediction/generation
- Inducing the Eliza effect
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "train of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
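To make the comment's point concrete: the entire supervision signal in next-token training is a delta between the model's predicted distribution and one observed token. A toy sketch (function names and the 4-token vocabulary are my own, purely for illustration):

```python
import math

def softmax(logits):
    # numerically stable softmax over a (tiny) vocabulary of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_loss_and_grad(logits, target_index):
    """Cross-entropy loss for predicting the next token, plus the
    gradient w.r.t. the logits. Note the gradient is literally
    <predicted distribution> minus <one-hot observed token>: the
    delta between input state and desired output state that
    backprop needs, and all it gets."""
    probs = softmax(logits)
    loss = -math.log(probs[target_index])
    # d(loss)/d(logit_i) = p_i - 1[i == target]
    grad = [p - (1.0 if i == target_index else 0.0)
            for i, p in enumerate(probs)]
    return loss, grad

# toy example: vocabulary of 4 tokens, observed next token is index 2
loss, grad = next_token_loss_and_grad([2.0, 1.0, 0.5, -1.0], 2)
```

Whatever richer internal process you might want to train, it has to be squeezed through this one scalar-per-token comparison, which is the constraint the comment is pointing at.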
I think, therefore I am. You parrot, therefore you are... ?
Yes. A RimWorld year is 60 days, split into four 15-day quadrums (Aprimay, Jugust, Septober, Decembary), each corresponding to a season.

Your attempt at an analogy will make sense when someone tries to install a house as middle management at some company.
To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.
Gemini: There is *1* "e" in the word "strawberry".
Seems fine
This is like saying that somebody speaking Chinese is just playing the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black-box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.
The American corporation and its values are anathema to craftsmanship. You can ******* a **** all you want, it's never going to turn into gold, but your hands will be covered in crud.
I find it a bit concerning that this piece focusses so much on customers and shareholders... I know I don't pay, but perhaps sometime I will, and I am learning GitLab and applying at large orgs as GitLab consultant. All because of CE... So I hope it will stay. It is a nice and very complete on-ramp to EE.
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
[0] https://americanhistory.si.edu/collections/object/nmah_10880...
Understand that a lot of people don't have a lot of choice but I use mine (actually have a 4 in 1 when I had to replace the old one after it burst into flames and that's somewhat useful as a second oven).
The point, which you know and are being willfully ignorant about, is that it's more complex than that. And you've neatly discarded the detail that they're multimodal.
I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.
> Mark this prediction it will happen
But this historically is a very strong predictor of a poor prediction
See: https://fediverse.zachleat.com/@zachleat/116529994444529036
The problem is they’ll do what you ask. And if you are the type of non-curious person who replies “ Autocomplete only 'knew' how to output a scraper...”, then you’ll tell it to make you a scraper instead of ask what your options are for getting HN data.
This is not quite accurate. The human lips, throat, etc. have evolved to be better at producing speech, which indicates that it's not that recent, and that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
sources:
https://en.wikipedia.org/wiki/Origin_of_speech#Evolution_of_...
How?
It all still functions with text prediction
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Can you imagine how silly they'd look when everyone realised?
Those literally work with text prediction.
If you take the text prediction out of it, nothing happens.
You stick a harness around a text predictor which then triggers the text predictor.
If you think I am missing something then please do point it out.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
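The loop being described can be sketched in a few lines. This is a deliberately minimal illustration, not any real framework's API: the model is stubbed out with a hard-coded function, and the tool name and dispatch table are made up.

```python
import json

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM completion call. A real model autocompletes
    text; here we hard-code one tool call, then a final plain-text answer
    once a tool result has been appended to the prompt."""
    if "TOOL_RESULT" not in prompt:
        return '{"tool": "get_time", "args": {}}'
    return "The current time is in the transcript above."

def run_tool(name: str, args: dict) -> str:
    # the dispatch step: parsed tool name -> actual side effect / call
    if name == "get_time":
        return "12:00"
    raise ValueError(f"unknown tool {name}")

def agent_loop(prompt: str, max_steps: int = 5) -> str:
    out = ""
    for _ in range(max_steps):
        out = fake_model(prompt)      # "autocomplete" step
        try:
            call = json.loads(out)    # parse a tool call out of the text
        except json.JSONDecodeError:
            return out                # plain text -> treat as final answer
        result = run_tool(call["tool"], call["args"])
        # append the tool result and autocomplete again -- this append
        # is the only channel through which the "agent" learns anything
        prompt += f"\nTOOL_RESULT: {result}"
    return out
```

Whether you read this as "just autocomplete with parsing" or as something qualitatively more is exactly the disagreement in this thread; the control flow itself is this simple either way.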
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute it (meaning they build command-line arguments, run the command-line app, analyze output, and assess the outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next-token prediction given the context (which includes the tool results).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.