When I first started, the environment you used depended entirely on the language. In the C++ and Python space, there was the vim and emacs divide. With Java it was more complicated: some still used vim/emacs, but a lot of people used Eclipse.
Now Eclipse was a real problem at Google because of the source control system. Java IDEs are primarily built to import binaries, specifically jars. In the outside world, these dependencies are managed via Ant (very early days), Maven/Gradle or the like.
At Google there's a mono-repo (Perforce/Piper) and you check out parts of it locally and rely on the rest via a network connection (to SrcFS IIRC, it's been a while). This was neat because you could edit a file locally and the dependencies would just recompile (via Blaze).
So for Eclipse a whole lot of initialization had to be done and the IDE would fall over. A lot. It had a team of ~10 working on it at one point. Then somebody did a 20% project called magicjar. Magicjar took a Perforce client and built all the dependencies as jars that could be imported directly without parsing the entire source tree (which was usually huge). This made it possible, even preferable, to use IntelliJ, which is what I did. Magicjar was great.
Other people actually made CLion work reasonably well with C++ too. That was nice. This was a much bigger undertaking with many more corner cases, just given how C++ works (i.e., headers and templates).
So checking out a client was relatively heavyweight, even with a minimal local tree. And, if you worked on Google3, you had to do this a lot. You might need to do a config file change. This was the real starting point for Cider because it was way nicer to do config file changes with it.
Obviously I don't know where all this went from there. VS Code as a Cider frontend? OK, that was news to me. Engineers being unhappy when things change, and when the slightest thing works differently, is the least surprising thing I've ever heard.
Oh it's worth adding that in my time many people didn't use Perforce (P4) directly. They used somebody else's project, which was a Git frontend for it, called Git5. I believe it was already being deprecated while I was still there. But Git5 modelled a P4 change as a branch so you could play around with your Git commits locally and then squash them into a single P4 change. I actually liked this a lot.
The days of using Eclipse were particularly bleak. These days I use Antigravity for the overwhelming majority of my work.
The aspect I miss is the distributed compilation hinted at in the article. I remember back at the end of the 1990s using distcc and the like, but that never seemed to happen in the Java world, and tooling like Maven is structured to make everything one long dependent chain. Shame.
The article is framed around "all Googlers" but there is still a very large contingent of Googlers who cannot use these tools.
When Google wanted engineers to use AI features, it turned them on in Cider-V by default. And if you turned them off, later updates would turn them back on. This is very good for your adoption metrics, but might not tell you exactly what you want to know about engineer happiness.
Such a dominant IDE also allows management to ignore the long-tail of users who aren't using it.
I do think Google will continue to get results out of their tooling, as long as they are investing in the tooling. But that is not zero cost. Is it worth it for what they are doing? Largely seems to be.
But it isn't like they are that much more successful at software projects than any other company? They are still largely an ads company, no?
I'd like to hear the perspective of the developer/user; the IDE provider has some incentive to take credit and imply high utilization reflects success rather than Google policy.
I'm interested in how tooling conditions developer expectations more broadly. I'd love to see a comparison of Linux OS development (all local+open+git, open but contributor hierarchy) vs Google (monorepo+required tooling, pre-allocated authority) from someone who's done both.
I don't know which team that was, but to add to that, official support for IntelliJ at Google started quite a bit earlier. I was the second person to join a team writing IntelliJ plugins. We wrote a Blaze plugin not too long after Blaze launched, as it was becoming more popular.
Google tells me that Blaze launched in 2006, so I think it must have been 2007 or 2008.
I guess maybe it was fancy back in mid 2010s, but my experience was a couple of years ago.
My recollection from 2009-2011 is that emacs and vim were the dominant editors (just as the TV show Silicon Valley depicted), and there was a decent-sized minority using Eclipse and Intellij, both of which had official support for Google tooling. The command line still largely ruled though, even though the official Google developer workstation was Goobuntu, Google-flavored Ubuntu. This reflected the overall developer population of the time.
I think Cider actually was invented a little earlier than the article describes. I have vague memories of some engineers experimenting with web-based IDEs that integrated directly with Critique (the code-review software) as early as 2013-2014. Its use was not widespread when I left in 2014; there was still the impression that it wasn't powerful enough for daily driving.
When I came back in 2020, emacs/vim use was much lower, again probably reflecting differences in the general population of developers. Many more of the developers had been trained in the post-2010 developer ecosystem of VSCode, IntelliJ, etc, and this was reflected in tool usage at Google too. I'd say IntelliJ was the dominant IDE, with Cider a close second and Cider-V just starting to take market share. You still had to pry emacs and vim from a grizzled old veteran's hands.
By 2022 I'd transferred to an Android team, and Android Studio with Blaze was the dominant IDE, even as general IntelliJ usage in the company was falling. Cider just didn't have the same Android-specific support. Company-wide, Cider-V was growing the fastest, taking market share from both IntelliJ and Cider.
By 2024 Cider-V was dominant and there started to be a concerted push to standardize on it, particularly since new AI agent tools were coming out and they couldn't be supported on all editors that Googlers wanted to use.
As of my departure in 2026, the company-wide push was to standardize on Antigravity [1], which, as I understand it, won a turf war within the developer tools org and got blessed as the "official" Google AI coding agent. This also has the effect of concentrating developer time dogfooding Google's external AI coding offering, which hopefully should improve its quality. There's still significant Cider-V usage, but it's dropping, and execs are pushing Antigravity hard.
It's also nice that it stores all my preferences in the cloud, so switching machines is seamless (helpful when my macbook broke a couple weeks ago and I had to use a loaner chromebook for a day).
It's also well integrated with google3 and codesearch, and seamlessly runs tests on remote machines with tmux integration and all.
Not all of google tooling is my favorite (like their source control), but the IDE is great.
Pair programming was very in vogue, and I used to get in a little later than some, which was a great excuse to just hop on the machine of someone who'd already gone through that pain.
They have a ton of other software in 2026. And they have a pretty diverse (and diversifying) income stream today. Like 30-40% from non-ads.
Is it worth it? That’s for them to say, but they can ramp up cloud services at scale pretty fast as a core competency.
Sure, the money is mostly in ads, but serving searches, AI, youtube, and all the rest at the scale Google does it requires a technical tour-de-force. Does Google do it better than everyone? Absolutely not. But it does it better than many.
Certainly it isn't the _only_ way to do it--other companies also manage to do it. But not all that many at the same scale. It's an existence proof that you can.
I re-read this several times trying to figure out where the irony was hidden. But... it's not there?
So I know what others spend and were spending in similar environments in terms of actual dollars, and where it roughly goes.
So let me say - it was not a small investment, in part because the all-in costs of engineers are very different. I'm really unsure why you would think otherwise.
Unlike others, Google is also remarkably good at quantifying the actual value something provides in developer productivity, etc. Most engineers handwave this tremendously. Google has an amazing amount of telemetry. So I laugh when you talk about "the leverage over developer productivity", because the vast majority of companies I've worked at or talked with have almost no useful idea about their developer productivity (i.e., they can't even account for the majority of their developers' time at work), or how to invest effectively to do something about it. They can often account for <30% of the time developers are spending at work.
As for perspectives - there is plenty of sentiment and other data. Cider is overall one of the top 5 most loved tools at Google, and had well over 90% developer satisfaction IIRC.
As the team had to collaborate with the VSCode team, we got clearance for sharing information about it. The screenshots in the article were posted publicly on GitHub (in vscode issues). You can also find screenshots in https://research.google/blog/smart-paste-for-context-aware-a...
More generally, a lot has been communicated on developer infrastructure at Google.
You are talking, I believe, about the support for Blaze builds in IntelliJ, which was fairly early on, as you point out.
I suspect Laurent is remembering some of the google3 mobile/android efforts, which were much later.
This is just on the "java" side, too. There were other plugins being built that were fairly specific to google3 support.
There is a similar internal product but the agentic part is shared between that and Cider.
Now, ironically, with so many extensions and LLM compute, users seem to forget that they chose Cider because it was lightweight.
Code references are less important inside Google editors, because we have a code viewer tool inside the web browser.
Most people read, explore, follow references, and share permalinks to the view-only tool. It's a lot better than viewing code in GitHub. It's super fast, is connected to language servers and can actually trace references, and overall has a million little features optimized for reading code.
We also have a code reviewer tool, and a separate tool to run and view CI runs.
So what’s left for the editor? Syntax highlighting?
I would tend to view code, run tests and CI, and review in separate tools specialized for their specific use case. The code editor was just a place where I would type in my changes.
I’d imagine this workflow feels weird to people who learned in the one-stop-shop IntelliJ and GitHub world. But I can’t emphasize enough how much better these other tools were compared to GitHub. So a code editor that also lets me read, review, and test code didn’t really matter for me when I had a collection of smaller tools specialized for each individual task.
So, sure, lots of spots for software there. But still nothing that would make me think of them as a software company. Or, worse, a lot of software that I don't have a strongly favorable view on. :D
Consider that they spend more on trying to build up and support this central IDE than most companies dream of losing in productivity to not having this.
So, again, are they that much more successful at software than other companies? They have more hilarious flops than any other company.
Don't get me wrong. I still use some of the stuff. I don't hate them. I don't even think they are particularly bad at things. I just don't think they are any more successful than other software companies. Specifically at the software side of it.
May I ask, how are things going? Also, will your IDE always be focusing on transactional law or have you considered expanding to other legal areas and/or markets?
Very handy for seeing a problem, quickly solving it (sending out a CL), marking it autosubmit, and just moving on.
For anything with native UIs, I suppose you could "remote desktop" into an app or a simulator running in the cloud but at that point you might as well run that locally and cut out all the issues introduced by networking.
I once worked at a place where VPs were looking at sprint burndown charts, and asked what happened if the line didn't look a lot like the line expected by JIRA. The telemetry is therefore often a curse, as any metric becomes a target. How many companies today have KPIs about having automated code reviews, which are then ignored by the devs, because said reviews are just wrong on almost everything?
The learnings of Seeing Like A State don't apply just to governments.
When it finally failed in the most annoying way possible (the touch screen, which I do not use, started creating phantom clicks in the upper right corner of the display), I went looking for another Chromebook that was light, powerful, and well-built. Finding none, I now use a MacBook Air and weep for the time I lose every time it needs an OS update.
I'm a UXE, so I tend to use the same tools an external developer might. But I never got the impression that Cider was a recent development.
Git5 would copy some directories, but builds would still fall back to files from the monorepo if you didn't track them. It was convenient for me since I could just grep and do fuzzy matching from my editor. Now I have to do some extra work to avoid grepping the entire monorepo. LLMs sometimes still try to grep the entire repo lol.
Now you can use a Perforce, Mercurial, or jj interface and it works fine.
The history of Google's relationship to version control is even more interesting than editors - it went from CVS in 1998 to Perforce (P4) in 2000, then gcheckout and g4 in ~2006, then OverlayFS was invented in 2008, git5 came out in 2009, CitC obsoleted OverlayFS in ~2012, Piper built this all into the VCS in ~2013-2014, while I was gone from 2014-2020 apparently we got hg and jujutsu frameworks, and then when I got back in 2020 you'd just check out a .blazeproject from your IDE and everything would magically work. Many of these started as 20% projects (I used to have lunch with the guy who invented OverlayFS; interesting character and one of the best programmers I knew) and then got folded into the "official" way of doing things once grassroot adoption showed the execs that this was how people really wanted to work.
When this project got started, "VS code for transactional lawyers" was the target. We pretty well have that on offer at this point, but it sits in a weird spot making it harder to sell than it would be in, say, 2024. Right now, "AI forward" lawyers are spinning out of law firms in droves to start "AI native" firms backed for example by YC. They're so comfortable with Claude that they for the large part bypass a need for Tritium (or at least they think they do ;). OTOH, large law firms are inundated with legal tech products right now and have a hard time even understanding how an IDE benefits their lawyers. We're also trying to stay away from VC funding (other than from a certain awesome one ;), so we're missing a key signal for enterprise buyers. As I mentioned above, it's super hard to even set up a hands on demo because we have to get the desktop app installed on their infrastructure. But I'm shocked to learn that Googlers are happy to work in a browser, and distributing Tritium via browser is trivial, so we're going to 180 on that right here and now.
That all said, we eliminated the "free tier" as advised back in the Show HN thread, and we've managed to find a very small market in individual users. We're also finding some opportunities with the AI natives using an "unreal engine for legal tech" model that makes Tritium source available and handles the boring editor-related parts of their innovation.
I should probably do a post on this, but there's actually a topic we're working on that perhaps the HN audience will find even more interesting... coming soon!
[edit: I realized that I haven't responded to your question re: other markets, but accidentally did with the hint. We have some ideas.]
Our bazel system is full of custom skylark code so understanding the build means effectively reading a bunch of ad-hoc code written with varying degrees of competence and with confusing dependencies. I’m kinda ashamed I don’t have a deep understanding of a tool I use daily - but every time I try reading the documentation I quickly give up.
Sounds like all other editors were slow compared to Cider.
You have access to an extremely powerful remote workstation that from a UI perspective functions almost identically to a local workstation, via Chrome Remote Desktop. Plus, no one builds things locally, even on that machine. There is a huge, absolutely amazing distributed build system that everyone uses for everything. (Again, Android and Chromium are different.)
So you don't really need a powerful local machine. I held out for a long time--there were a lot of growing pains in the early days. But eventually it got really, really good.
Afterwards I was issued a 12" Pixelbook and it was surprisingly much more usable than I had expected! I could ssh into a Linux box for running builds and tests. Cider worked perfectly. It was snappy enough to serve as a thin client even on a 4K screen.
FWIW I don't think this is accurate (it was kinda true in the 2010s?). I wouldn't be surprised if it's almost easier to get a Windows laptop than a Linux one now.
How is this enforced?
I’m now thinking I may as well trade my brick of an M5 Pro for a 13” Chromebook; it’s a strange time.
Duckie does still exist, and is probably one of the most used (and useful) AI tools at Google. Yes, it's just a Gemini wrapper with access to all the internal documentation. I wasn't doing daily development when I left so I don't know if it ever got into Cider-V.
This does exist. The network isn't the main problem. The Emulator has to run under nested KVM. That + graphics rendering on the CPU makes it not so responsive. It's useable enough in many cases though.
I think many VSCode users are not familiar with the Comments UI, but it's used in e.g. the "GitHub Pull Requests" extension. Apart from that, some changes in the list of directories/files (for performance reasons) and a redesigned SCM integration.
If you need to do development locally, you are either doing something very wrong or extremely specialized.
So there is effectively no motivation to copy the sources over. And because everything is on this distributed file system and built from it in a very bespoke environment, I would imagine (with no inside knowledge at all), that it is easy for auditors to detect when someone starts copying things out.
There is Jujutsu (with Piper backend) officially supported, and that is better than git. But of course, you will not be grepping the source code, there is code search for that.
https://www.businessinsider.com/sundar-pichai-wants-to-build...
And yeah, they did/do a lot through acquisitions, but it seems like most major companies screw up acquisitions. Google has its fair share of failed acquisitions, but especially in the earlier half of the company's lifespan, they really did some great ones: YouTube, Google Docs, Nest...
Maybe I'm biased, but I've always thought Google in general does do it better than most tech companies. I think it's their focus on the love of interesting ideas vs. the love of money (although that changes more and more as the company ages).
Its user-facing stuff may or may not be great--and the consumer-level flops are legendary--but that is only the tip of the software iceberg.
Size has nothing to do with it.
One is a framework called Wiz, which renders the frontend for a bunch of Google web apps. You can imagine that the Wiz team might want to refactor an API, but not have to worry about different apps using different versions. In a monorepo, they can just find all the callsites and update them in the same commit that makes the API change. There's no package.json in google3 - everything builds from HEAD. Therefore, the commit that makes a breaking change is also the commit that fixes the would-be breakage.
This architecture evolved. Google used to use Perforce, which was a common commercial version control system before Git. Google had to figure out how to express the dependencies between packages in the monorepo (which can be in different languages with different build tools). They eventually created Bazel, which expresses those dependencies and orchestrates their build tools.
Build orchestration took a few attempts. Google3 is the third version of the monorepo, that is, the one that uses Bazel for dependency management.
Fun fact: This particular version of hg with its extensions actually originated from Meta.
> As I mentioned above, it's super hard to even set up a hands on demo because we have to get the desktop app installed on their infrastructure. But I'm shocked to learn that Googlers are happy to work in a browser, and distributing Tritium via browser is trivial, so we're going to 180 on that right here and now.
"Trivial" in the sense you can just compile everything to WASM? I'd be curious to know what such an IDE would feel like in the browser. I think the only WASM-based GUI apps I've tried in the browser were Flutter apps and those were… weird.
> I should probably do a post on this, but there's actually a topic we're working on that perhaps the HN audience will find even more interesting... coming soon!
I'll keep an eye out for the next Show HN! :-)
The second thing is distributed caching. Done right, not only are your test results cached, but CI's test results can be cached too.
The third thing is distributed builds. This only starts to matter in big projects, but compilation is inherently a spiky load and if you can share a big pool of compute between a big pool of engineers, you get higher hardware utilization and lower latency to build artifacts.
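In open-source Bazel, both the shared cache and the shared executor pool are opt-in flags. A minimal sketch, with placeholder endpoint addresses (the flag names are real Bazel flags, the servers are invented):

```
# Hypothetical .bazelrc fragment; the grpc:// endpoints are placeholders.
# Reuse action results (including cached test results, e.g. from CI).
build --remote_cache=grpc://cache.internal.example:9092
# Fan build actions out to a shared pool of remote executors.
build --remote_executor=grpc://rbe.internal.example:8980
# Allow many actions in flight at once against the remote pool.
build --jobs=200
```

Cache hits only work if actions are hermetic, which is why Bazel is so strict about declaring all inputs.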
The fourth thing, something that isn't really feasible outside big tech, is you could be bazel all the way down in a big monorepo. One of the niftiest things at Google is to be able to put a printf inside a database server and run your client test, and blaze knows that it needs to rebuild the database server and it will do it automatically, so that you can get extra insight at almost any level in the stack.
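That dependency tracking can be illustrated with a toy BUILD sketch (all target and file names invented): because the test declares a dependency on the server library, editing the server source and running the test rebuilds exactly that chain.

```
# Hypothetical BUILD file; all names are made up for illustration.
cc_library(
    name = "db_server_lib",
    srcs = ["db_server.cc"],   # add a printf here...
    hdrs = ["db_server.h"],
)

cc_test(
    name = "client_test",
    srcs = ["client_test.cc"],
    # ...and `blaze test :client_test` rebuilds the
    # server library automatically before running.
    deps = [":db_server_lib"],
)
```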
Basically that company (a well-known social media company, not FB) tried to implement everything on their own. The infra is their own (which kinda makes sense because it is so huge), the IDE is their own, the communication tool is their own (with an interesting feature: if someone screen-shares an internal doc, other people can click a link to access that doc too, which is very useful).
I was very jealous about their tooling team (that's what I call real programming), but nevermind I quit after a few months due to some unrelated reason.
I got burnt out after a while, so that kinda wrapped up my experience working on large repos.
And an enormous set of problems that must be managed. But multirepos have their own set of issues, and which set of problems you want is highly situation dependent.
I previously discussed how the main codebase at Google enforces strict tooling and conventions to allow the codebase to scale. For many years, there was one glaring exception: the IDE.
Context: I worked at Google from 2011 to 2024. Some of the information might be approximate; I’ll update it if errors are reported. This blog post focuses on the main monorepo at Google (google3).
Like in many companies, engineers at Google have been able to pick their IDE of choice, and this resulted in a lot of fragmentation. In 2011, some of the most senior engineers were asked a question: “Is there a way to get a good uniform IDE for all Googlers?“ The answer was essentially “No”. Among others, Jeff Dean replied:
“Trying to get a group of developers to all agree on a common editor is a recipe for unhappiness. Everyone has different opinions about what is important here, and the advantages and disadvantages of different systems are weighed differently by different people. In the end, it doesn’t matter that much.”
This was the prevalent opinion for years. After all, it doesn’t matter which IDE your colleagues use, as long as their code is good. But I worked at Google for 12 years on developer tools, and I sometimes wondered about it.
If you look at it from a company productivity standpoint: you don’t want each engineer to spend too much time setting up their editor. Although engineers used different IDEs, useful integrations eventually had to be reimplemented everywhere: Bazel support, Starlark tooling, code formatters, code search integration, and so on. Google’s internal culture made this manageable. Engineers would often start tooling projects organically, others would discover them through the shared codebase and contribute. This kind of contribution is generally encouraged (through 20% time and peer bonuses). Critical projects would eventually become officially staffed. As an example, a team dedicated to the IntelliJ integration was formed around 2015.
Some people might wonder why you’d need a full dedicated team for this. Was the IDE not good enough in the first place? Part of the reason is that Google has a set of unique tools, and it just makes engineers more productive if you can give them a nice IDE integration. But also, some problems were caused by the sheer size of the monorepo. Traditional IDEs assumed that source code, build metadata, indexing and analysis all happened locally. At Google scale, that assumption starts to break down.
Around 2013(?), something happened that I hadn’t anticipated. Some people started building a web-based editor, named Cider. The name is a reference to “Cloud IDE”, with a trailing “r” to get a more memorable name.
In a company where most tools are web-based, where people spend time in their browser to do code-reviews, navigate the codebase using Code Search… in a company that uses Chromebooks, it actually makes sense to have a quick way to edit files from the browser.
What surprised me though is that Cider eventually became popular across engineers. At first, it was mostly used by technical writers who wanted to edit markdown files without having to deal with version control. The workflow was very efficient for fixing typos. In one click, you would send the pull request, with an option to automatically merge it once approved. Nowadays GitHub has this kind of feature too, but at that time, it felt new to me.
Over time the team added more and more developer-oriented features. The turning point came when they added support for code completion, through the language-server protocol.
Cider was a light client that opened much faster than traditional IDEs. All the magic happened on a backend that indexed the entire codebase, so that all the data was ready whenever someone opened the webpage.
Code intelligence requires connecting each identifier with its type and references. This forms a huge language graph that has to be updated at every commit. And well… the codebase receives many commits per second. But the IDE also needs access to historical data. If I’m working on a project and my colleague merges their code, I don’t want to pick up the changes immediately. So my editor needs to use the graph corresponding to my last sync date… augmented with my local changes, obviously.
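A toy model of that "snapshot plus local changes" lookup, with entirely invented names, might look like this: the backend serves the cross-reference graph as of your last sync point, and your uncommitted edits are overlaid on top.

```python
# Toy sketch of a versioned cross-reference index; all names are invented.

class XrefIndex:
    """Symbol -> definition lookups pinned to a sync point, with a local overlay."""

    def __init__(self, snapshots):
        # snapshots: {commit_id: {symbol: definition_site}}
        self.snapshots = snapshots

    def lookup(self, symbol, synced_at, local_changes):
        # Local, uncommitted edits win over the snapshot...
        if symbol in local_changes:
            return local_changes[symbol]
        # ...otherwise use the graph as of the last sync,
        # ignoring commits that landed after it.
        return self.snapshots[synced_at].get(symbol)

index = XrefIndex({
    "cl/100": {"Frobnicate": "lib/frob.cc:12"},
    "cl/101": {"Frobnicate": "lib/frob.cc:40"},  # colleague's later change
})

# Synced at cl/100 with a local edit: the overlay wins.
print(index.lookup("Frobnicate", "cl/100", {"Frobnicate": "lib/frob2.cc:7"}))
# Synced at cl/100 with no local edits: cl/101 is invisible.
print(index.lookup("Frobnicate", "cl/100", {}))
```

The real system is of course incremental rather than storing full per-commit snapshots, but the lookup semantics are the point here.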
With this kind of feature, the popularity of Cider continued to rise among certain demographics. For example, it was much easier to convince Go developers to switch than Java developers (because they expected a much more advanced editor). But the joy of searching and having cross-references across a billion files is real.
The investment in the backend could be justified: it was solving Google-specific problems and there was no good alternative to it. But the frontend felt quite limited: it was good for quick fixes, but it couldn’t compete with actual IDEs.
The direction changed in 2020, when I joined the team as one of the tech leads. At that time, Cider was the dominant IDE in the company and the question of its future came up. It was decided to use the VSCode frontend in Cider. It was a natural fit: VSCode was already dominating the IDE landscape, it was language-agnostic, extensible and built for the web.
By switching to the VSCode frontend, we inherited a mature editor, a large extension ecosystem and years of existing features. Many Cider feature requests were already solved problems in VSCode. More importantly, the extension system would unlock teams across the company and remove the Cider team from the critical path.
Screenshot of Cider V, 2022
Even with a dozen engineers in the frontend team, it took a couple of years to build a complete successor to Cider. In 2021, the open beta was used by 5000 engineers, but a lot of work remained to integrate everything and polish the experience. The team had to support version control; integrate the code review tool; provide code completion and refactoring features using the Cider backend; redesign the way extensions are shipped and updated; etc.
Many users were passionate and used to the Cider editor, and expected every little detail to be the same in Cider V. Small workflow changes or an extra click here and there may become an adoption blocker for some users. So the polish part of the project required months of iterations. Even color schemes generated an absurd amount of discussions. As Joshua Bloch observed back in 2011, “the only thing that generates more religious fervour than programming languages is text editors and IDEs.”
I could also write about the interactions with the VSCode engineers and how we contributed changes back to VSCode, but this blog post is long enough. I’ll try to write more about it one day. But let’s say that we had to maintain our local fork, update monthly, and we tried as much as possible to reduce our local hacks and align with the upstream code.
Design exploration for the code review integration, 2022
I started the blog post with a question about a “uniform IDE for all Googlers”. It didn’t completely happen but, by 2023, 80% of the development in the main Google codebase happened in Cider V (and the number kept increasing).
Each IDE has its pros and cons, but Cider attracted users by having the best integrations with the company tools, such as excellent version control support and a code review integration where the reviewer comments are shown inline in the editor.
What I found most exciting was the side effects of having most users using the same tool. It meant that we could invest more resources in the tool (because each change has more impact). I was tech lead for the IDE extensibility and, soon, teams across the company reached out and started developing their own extensions to improve their specific workflows. After two years, around 100 internal extensions were being developed. This enabled many scenarios that were previously infeasible.
In 2023, the management pushed all the teams to integrate more and more AI features. This led to cool features such as Resolving Code Review Comments with Machine Learning and Smart Paste for context-aware adjustments to pasted code. And of course AI code completion.
As more AI features are integrated into the IDE, the advantages of having a single, extensible platform become even more obvious. Of course, it was very expensive and very few companies can justify this kind of work. But I believe that the move to a “standard” (even if it’s not mandated) IDE has been very impactful.
In the end, standard tooling creates leverage.
"Hey, where's your tool's code in $MONOREPO?" "<path/to/stuff>"
Cool:
g4d my-citc-client # moral equivalent to `cd ~/repos/stuff`
blaze run path/to/stuff:target
... and you get a running version of whatever $stuff is, immediately built from head, quickly - no matter the set of dependencies, or which language they were built in. I can just try your thing out immediately with a common interface for all the builds, and I don't need to understand the build at all, unless or until I do, and then OK, absolutely every single build is always expressed in exactly the same way, same idioms, same patterns...

AI is an odd example. For one, a lot of the research there is from acquisitions, somewhat feeding back to my first point. They also were seen as tripping up on a lot of the current AI race, no?
There are many open source projects that are developed in google3.
I know how to _use_ bazel effectively to do my work. I'm comfortable with its well-designed surface but whenever I've tried to understand the inner machinery I've given up - especially when presented with a bunch of custom skylark rules code.
It's like an anti-git in some regards - the surface of git (the CLI) is an abomination in many ways, but the mechanics of the tool are so ingrained and the model is so clear and simple that I never feel uncomfortable.
I've a need to have some comprehension of the inner machinery or the underlying model of my tools.
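For readers who haven't looked under that surface: a custom Starlark rule is just an implementation function plus typed attributes. A minimal sketch (rule and file names invented; the `ctx.actions` API is standard Bazel Starlark):

```python
# Minimal custom Starlark rule sketch; names are invented.

def _concat_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    # Register one action: declared inputs -> shell command -> declared output.
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        outputs = [out],
        command = "cat {} > {}".format(
            " ".join([f.path for f in ctx.files.srcs]),
            out.path,
        ),
    )
    return [DefaultInfo(files = depset([out]))]

concat = rule(
    implementation = _concat_impl,
    attrs = {"srcs": attr.label_list(allow_files = True)},
)
```

Every rule in an ad-hoc macro pile ultimately reduces to this shape, which can make reading one worth the effort even when the surrounding code is confusing.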
First, that's just not true. Their biggest products by revenue (search/adwords) and biggest stock value driver (AI/Gemini/Datacenters) are clearly in-house creations.
But even then, the two biggest "acquisitions" you're probably thinking of are YouTube and Android, acquired in 2006 and 2005 respectively. What fraction of the software base of those products do you think has survived the intervening two decades? To be blunt: most of the software being shipped out of those groups is being authored by engineers who couldn't even read when the ancestral code existed outside of Google.
Honestly the "acquisition" thing is just a cope meme promulgated by Apple stans, as it were. It's not a serious point.
Do these also take a lot of effort to keep going? Absolutely! But that doesn't change that they acquire a ton. They just acquired Wiz this year.
I do question a lot of the focus on a unified IDE when it comes to this strategy. It is not surprising that there is a specific "discontinued Google acquisitions" page on Wikipedia with that in mind.