Building the Linux kernel with LLVM: https://github.com/ClangBuiltLinux/linux/issues
LLVM itself: https://github.com/llvm/llvm-project/issues?q=is%3Aissue%20s...
I'm a bit shocked that it would take significant effort/creativity for an MIT grad with relevant course/project work to get a job in the niche
I would have thought the recruiting pipeline is kinda smooth
Although maybe it's a smaller niche than I think -- I imagine compiler engineers skew more senior. Maybe it's not a common first or second job
I graduated at the bottom of the bear market (2001), and it was hard to get a job. But this seems a bit different
Which is to say that all it takes is an interest in compilers. That alone will set you apart. There's basically no one in the hiring pipeline despite the tech layoffs. I'm constantly getting recruiting ads. Current areas of interest are AI (duh) but also early-stage quantum compute companies and fully-homomorphic encryption startups. In general, you will make it farther in computer science the more niche and hard you go.
Doesn’t need to be a YT channel, a blog where you talk about this very complex and niche stuff would be awesome for many.
That bit was heartbreaking to me too. I knew the economy was bad for new grads but if a double major from MIT in SF is struggling, then the economy is cooked.
Beyond that, I've definitely interviewed people who seemed like they could have been smart + capable but who couldn't cut it when it came to systems programming questions. Even senior developers often struggle with things like memory layouts and hardware behavior.
Compilers are just programs like anything else. All the compiler developers I know were trained up by working on compilers. Just like people writing B2B e-commerce software learned how to do so by working on B2B e-commerce software, and embedded software developers learned how to do so by working on embedded software.
Heck, a typical CS degree probably covers more of the basics for compilers than for B2B e-commerce software or embedded software!
Anyways, the "Who .. hires compiler engineer?" section is fairly vague in my opinion, so: AMD, Nvidia, Intel, Apple, Google definitely hire for compiler positions. These hire fairly 'in-open' so probably the best bets all around. Aside from this, Jane Street and Bloomberg also do hire at the peak tier but for that certain language. The off beat options are: Qualcomm, Modular, Amazon (AWS) and ARM. Also see, https://mgaudet.github.io/CompilerJobs/
I seriously attempted getting into compilers last year before realising it is not for me, but during that time it felt like the number of people who want to be compiler devs far exceeds the number of jobs that exist (yes exist, not vacant).
The common way to get going is to do LLVM. Making a compiler is great and all, but too many people already have a Lox interpreter/compiler or something taken from the two Go books. Contributing to LLVM (or friends like Carbon, Swift, Rust), or at least some real usage experience, is the way. The other side of this is doing GCC and friends, but I have seen only one opening that mentioned that path as relevant. University-level courses are rarely of any use.
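To make "usage experience" concrete, here's a rough sketch of a toy out-of-tree pass against LLVM's new pass manager. Everything here is illustrative, not from the comment above: the pass name count-bbs and the file name are made up, and build details vary by LLVM version.

```cpp
// CountBBs.cpp - toy out-of-tree LLVM pass (new pass manager).
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Passes/PassPlugin.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
// Prints how many basic blocks and instructions each function has.
struct CountBBs : PassInfoMixin<CountBBs> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
    unsigned Insts = 0;
    for (BasicBlock &BB : F)
      Insts += BB.size();
    errs() << F.getName() << ": " << F.size() << " blocks, " << Insts
           << " instructions\n";
    return PreservedAnalyses::all(); // we only read the IR
  }
};
} // namespace

// Register the pass under the name "count-bbs" so opt can find it.
extern "C" PassPluginLibraryInfo llvmGetPassPluginInfo() {
  return {LLVM_PLUGIN_API_VERSION, "CountBBs", "v0.1",
          [](PassBuilder &PB) {
            PB.registerPipelineParsingCallback(
                [](StringRef Name, FunctionPassManager &FPM,
                   ArrayRef<PassBuilder::PipelineElement>) {
                  if (Name != "count-bbs")
                    return false;
                  FPM.addPass(CountBBs());
                  return true;
                });
          }};
}
```

Build it as a shared library against your LLVM install and run it with something like `opt -load-pass-plugin=./CountBBs.so -passes=count-bbs -disable-output input.ll`. Getting a toy like this to compile and run against real IR teaches a surprising amount about the codebase before you ever touch an upstream patch.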
Lastly, LLVM meetups/conferences are fairly common at most tech hubs and usually have a jobs section listing all requirements.
A few resources since I already made this comment too long (sorry!):
[0]: https://bernsteinbear.com/pl-resources/ [1]: https://lowlevelbits.org/how-to-learn-compilers-llvm-edition... [2]: https://www.youtube.com/@compilers/videos
> I’m not involved in any open-source projects, but they seem like a fantastic way of learning more about this field and also meeting people with shared interests. I did look into Carbon and Mojo but didn’t end up making contributions.
This sounds like the best way to learn and get involved with compilers, but something that's always been a barrier for me is just getting started in open source. Practical experience is far more valuable than taking classes, especially when you really need to know what you're doing for a real project versus following directions in a class. But open-source projects usually aren't designed to make it easy for newcomers to contribute, given the learning curve.
> So how the hell does anybody get a job?
> This is general advice for non-compilers people, too: Be resourceful and stand out. Get involved in open-source communities, leverage social media, make use of your university resources if you are still in school (even if that means starting a club that nobody attends, at least that demonstrates you’re trying). Meet people. There are reading groups (my friend Eric runs a systems group in NYC; I used to go all the time when it was held in Cambridge). I was seriously considering starting a compilers YouTube channel even though I’m awkward in front of the camera.
There's a lot of advice and a lot of different ways to try to find a job, but if I were to take away anything from this, it's that the best way is to do a lot of different meaningful things. Applying to a lot of jobs or doing a lot of interview prep isn't very meaningful, whereas the most meaningful things have value in themselves and often aren't oriented towards finding a job. You may find a job sooner if you prioritize looking for a job, similar to how you may get better grades by cramming for a test in school, but you'll probably get better outcomes by optimizing for the long term in life and taking a short-term loss.
Hands-on balanced with theory.
We need more compilers (and interoperability of course) and less dependence on LLVM.
Compiler development is (for better or worse) a niche that favours people who've got real-world experience doing this. The traditional ways to get in have either been through high-quality, high-profile open-source contribs, or because your existing non-compiler-dev job let you inch closer to compiler development up until the point you could make the jump.
As the author noted, a lot of modern-day compiler work involves late-life maintenance of huge, nigh-enterprise-type code bases with thousands of files, millions of LOC, and no one person who has a full, detailed view of the entire project. This just isn't experience you get right out of school, or even a year or two on.
Honestly, I'd say that as a 2023 grad with no mentors in the compiler dev space, she's incredibly lucky to have gotten this job at all (and to be clear, I hope she makes the most of it, compiler dev can be a lot of fun).
...And honestly it seems that I'm screwed. I'd need about 6 months of study to learn all this stuff. What I'd do right now is finish Crafting Interpreters, then grab that other book on interpreters that was recommended here recently[2], written in Go, because I remember it had a follow-up book on compilers, and THEN start going through the technical stuff that Rona suggested in the article.
And my interview is on Monday so that's not happening. I have other more general interviews that should pay better so I'm not too upset. If only I wasn't too lazy during my last position and kept learning while working. If the stars align and somehow I get that Compiler Engineer position, then I will certainly reach out to Rona and thank you again lalitkale for sharing this post with HN!
Damn, you don’t hold back, do you?
Just search for any of the words "Triton", "CUDA", "JAX", "SGLang", or "LLVM" (not LLM) on "Who wants to be hired?" and it filters almost everyone out, leaving 1 or 2 results.
Whereas if you search "JavaScript", 200+ results.
This tells me that there is little to no interest in compiler engineering here (and especially in startups) unless you are at a big tech company or at one of the biggest AI companies that use these technologies.
Of course, the barrier is meant to be high, but if a recruiter has to sift through 200+ CVs for a given technology (JavaScript), then your chances of getting selected against the competition for a single job are vanishingly small.
I've said this before and it holds true: for compilers, open-source contributions to production-grade compiler projects, with links to commits, are the most straightforward differentiator and proof you can use to stand out from the rest.
The world needs maybe what, 5000, 10000 of these people maximum? In a world with 8 billion people?
I would like to read in the future about what the usual day of a compiler engineer looks like: what you usually do, and what the most enjoyable and annoying tasks are.
>In 2023, I graduated from MIT with a double major in math and computer science.
At least up until 5 years ago, the bar to join compiler teams was relatively low and all it required was some demonstration of effort and a few commits.
(Disclosure: am retired now)
There are three versions (C, ML, and Java). The language isn’t all that important; the algorithms are described in pseudo-code.
I also find the traditional Dragon Book to be somewhat helpful, but you can mostly skip the parsing/frontend sections.
Definitely worth some self-study, however, if only for the healing effect of being exposed to a domain where the culture is largely one of quality instead of...everything except that. :)
Looking through the domains in the LLVM mailing list or the git commits should get you a longer list of "off beat" options.
- multiple large companies contribute to each of the larger AoT compiler projects; think AMD's contributions to LLVM and GCC, and multiple have their own internal team for handling compiler work based on some OSS upstream (eg Apple clang)
- various companies have their own DSLs, eg meta's Hack, the python spinoff Goldman has going on, etc.
- DBs have query language engineers which are effectively compiler roles
- (most significantly) most hardware companies and accelerators need people to write toolchains; Triton, Pytorch, JAX, NVCC, etc. all have a lot of people working on them
But writing smart contracts and whatnot directly in bytecode sucks; so, you make a compiler so you can write them in an (invented) higher-level language. For which you might as well hire a "compiler engineer" as any other kind. :)
- most enjoyable: fiddling with new optimizations
- least enjoyable: root-causing bugs from customer crash stacks
Harder because the bar is really high.
No, LLMs are not going to replace compiler engineers. Compilers are probably one of the least likely areas to profit from extensive LLM usage in the way that you are thinking, because they are principally concerned with correctness, and LLMs cannot reason about whether something is correct — they only can predict whether their training data would be likely to claim that it is correct.
Additionally, each compiler differs significantly in the minute details. I simply wouldn't trust the output of an LLM to be correct, and the time wasted on determining whether it's correct is just not worth it.
Stop eating pre-chewed food. Think for yourself, and write your own code.
It has never been a huge niche. It's fun work if you can get it. There were often MIT grads around, but I don't think it made you an automatic hire?
Once in a blue moon, for old times' sake, I send a bug fix PR for someone's optimizer, or build a small transpiler for something.
It makes sense now, doesn't it?
The cultural importance of education among Jews, the preservation of that in Christianity, and the idea that Christians can never take any understanding of something for granted and always need to question everything, because the Universe (being created by God) will always be more complex than any current knowledge of it, is the origin of the Western concept of empiricism and formalized (natural) science, even if a lot of modern atheists like to sweep that under the rug.
A lot of early and also later scientists did their research partly to understand the world their God created, meaning they understood it as an approach to worshipping God.
Actually, your whole point about LLMs not being able to detect correctness is just demonstrably false if you play around with LLM agents a bit.
First, LLMs should be happy to use made-up languages described in a couple thousand tokens without issues (you just need a good, LLM-friendly description and some examples), plus a compiler they can iterate with and get feedback from.
Second, LLMs heavily reduce ecosystem advantage. Before LLMs, presence of libraries for common use cases to save myself time was one of the main deciding factors for language choice.
Now? The LLM will be happy to implement any utility / api client library I want given the API I want. May even be more thoroughly tested than the average open-source library.
Bloomberg also does use OCaml by the way, although probably not to the extent of Jane Street.
> even those languages haven't taken over much of the professional world.
Non sequitur goalpost moving ... this has nothing to do with whether language development is a dead-end "in the real world", which is a circular argument when we're talking about language development. The claim is simply false.
YMMV, I guess, but you're better off demonstrating experience with what they're hiring for, not random tech that they aren't and never will use.
There are probably hundreds of other brilliant engineers with insane impact who never got any popularity.
0: https://www.packtpub.com/en-us/product/llvm-techniques-tips-...
And this is for generic backend stuff, like a CRUD server with a REST API; the same thing with an Express/Node backend works no trouble.
To be clear, I mean specifically using Claude Code, with preloaded sample context and giving it the ability to call the compiler and iterate on it.
I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) will go much worse. But if it has the compiler available and writes tests, shouldn’t be an issue. It’s possible it causes 2-3 more back and forths with the compiler, but that’s an extra couple minutes, tops.
In general, even if working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.
(I have no knowledge / context of this situation - no idea if she did or what happened here)
https://docs.google.com/document/d/1pPE6tqReSAXEmzuJM52h219f...
Stories of "asian face" actresses with eyes taped back, prominent pieces of anti asian grafitti on walls and drawn in bathrooms are common tropes in asian communities, etc.
The examples of plagiarism are examples of common story arcs, with an educated asian female twist, and use of examples that multiple writers in a shared literary pool would have all been exposed to; eg: it could be argued that they all drew from a similar well rather thn some were original and others copied.
There's a shocked article: https://www.halfmystic.com/blog/you-are-believed that may indeed be looking at more evidence than was cited in the google docs link above which would explain the shock and the dismissal of R.W. as a plagiarist.
The evidence in the link amounts to what is common with many pools of proto writers though, lots of similar passages, some of which have been copied and morphed from others. It's literally how writers evolve and become better.
I'm on the fence here, to be honest, I looked at what is cited as evidence and I see similar stories from people with similar backgrounds sharing common social media feeds.
So if I copy-paste Harry Potter, that's OK?
What kind of argument is that?
I wonder whether the similar ones were the result of something innocent, like a shared writing prompt within the workshop both writers were in, or maybe from a group exercise of working on each others' drafts.
Or I suppose some could be the result of a questionable practice, of copying passages of someone else's work for "inspiration", and rewriting them. And maybe sometimes not rewriting a passage enough.
(Aside, for relevance to HN professions: in software development, we are starting to see many people do worse than copy-and-revise passage plagiarism. Not even rewriting the text copy&pasted from an LLM, but simply putting our names on it internally, and company copyrights on it publicly. And the LLM is arguably just laundering open source code, albeit often with more obfuscation than a human copier would do.)
But for a lot of the examples of evidence of plagiarism in that document, I didn't immediately see why that passage was suspect. Fiction writing I've seen is heavily full of tropes and even idiomatic turns of phrase.
Also, many stories are formulaic, and readers know that and even seek it out. So the high-powered business woman goes back to her small town origins for the holidays, has second-chance romance with man in a henley shirt, and she decides to stay and open a bakery. Sprinkle with an assortment of standard subgenre trope details, and serve. You might do very original writing within that framework, but to someone who'd only ever seen two examples of that story, and didn't know the subgenre convention, it might look like one writer totally ripped off the other.
Obviously it shouldn't be done in any circumstance
How is telling you that this method of determining correctness is incapable of doing so, only tangential?
That’s pretty damning evidence. If a publisher was on the fence they might pull her books quietly, but they wouldn’t make such a public attack without very good evidence that they thought would hold up in court. There was no equivocation at all.
I just don't see how this could possibly work - how would slapping Harry Potter in the middle of the book you're writing work?
The evidence, at least the evidence that I found cited as evidence, appears less damning.
Perhaps there is more damning evidence.
What I found was on the order of the cross-copying and similar themes found in many pools of young writers going back through literary history.
Rona Wang, whom I've never previously heard of, clearly used similar passages from her peers in a literary group and was called out for it after receiving awards.
I would raise two questions: (a) was this a truly significant degree of actual plagiarism, and (b) did any of her peers in this group use passages from any of Rona's work?
On the third hand, Kate Bush was a remarkable singer/songwriter/performer. Almost utterly unique and completely unlike any contemporary.
That's ... highly unusual.
The majority of writers, performers, singers, et al. emerge from pools that differ from their prior generations, but pools nonetheless that are filled with similarity.
The arc of careers of those that rise from such origins is really the defining part of many creators.
It doesn’t prove anything, but it supports the theory that they have seen additional evidence.
After researching this a bit, it looks like someone from the publisher says she admitted it to them. That certainly explains why they weren’t afraid to publicly condemn her.
On the gripping hand
Do you consider the announcement from her publisher that she admitted that she plagiarized passages as a damning response or damning evidence?
Read the evidence document another poster linked for actual examples.
Well plagiarism by definition means passing the work off as your own without crediting the author, so in that case it isn’t plagiarism.
References to pop culture are the same as lifting sentences from other books and pretending you wrote them.
> And at some level of famous-ness passages and ideas loose their exclusive tie to the original book and become part of the list of common cultural sayings
In the actual case being examined the copied references certainly hadn’t reached any such level of famousness.
Also there’s a difference between having a character tell another “not all those who wander are lost” as a clear reference to a famous quote from LOTR and copying multiple paragraph length deep cuts to pass off as your own work.
Of course, but I wrote 'could' and not 'should' for a reason; I don't expect it. A book isn't a paper, and the general expectation is that the book will be interesting or fun to read, not that it is original. That means the general expectation is not that it is never a rehash of existing ideas; I think every book, including all the good ones, is. A book that invents the world from scratch might be novel, but it's unlikely to be what people want to read.
> copying multiple paragraph length deep cuts to pass off as your own work.
If that is true, it certainly sounds fishy, but that is a case of violation of copyright and intellectual property, not of plagiarism.
There's a difference between rehashing existing ideas and passing off copied passages as your own.
> If that is true, it sounds certainly fishy, but that is a case of violation of copyright and intellectual property and not of plagiarism.
What exactly do you think plagiarism is? Here’s one common definition:
“An instance of plagiarizing, especially a passage that is taken from the work of one person and reproduced in the work of another without attribution.”
Both are about passing off something as your own. Plagiarism is about passing off ideas or insights as your own. It doesn't really matter whether you copy them verbatim, present them in your own words, or just use the concept. It does, however, matter how important that idea/concept/topic is in your work and in the work you took it from without attribution, and whether it is novel or generally available/common knowledge.
For violation of intellectual property it is basically the opposite. It doesn't matter whether the idea or concept is fundamental to your work or to the work you took it from, but it does matter whether it is a verbatim quote or only the same basic idea.
Intellectual property rights are enforced by the legal system, while plagiarism is a matter of honor that affects reputation and for which universities revoke titles.
> There’s a different from rehashing existing ideas and copying multiple passages off as your own.
Yes and that's the difference between plagiarism and violating intellectual property/copyright.
But all this is arguing about semantics. I don't have the time to research whether the claims are true or not, and I honestly don't care. I took from the comments that she merely rehashed ideas from other books, and I wanted to point out that while this is a big deal for academic papers, it is not for books; it's basically expected. (Publishers might have different ideas, but that is not an issue of plagiarism.) If it is indeed the case that she copied other authors verbatim, then that is something illegal she can be sued for, but whether that is the case is for the legal system to determine, not something I should do.
In addition to near verbatim quotes, she is also accused of copying stories beat for beat. That's much different than rehashing a few ideas from other works. It is not expected and it is very much considered plagiarism by fiction writers.
As for the quotes she copied: those are likely both a copyright violation and plagiarism.
Plagiarism isn't just about ideas but about expressions of those ideas in the form of words.
Webster's definition:
"to steal and pass off (the ideas or words of another) as one's own : use (another's production) without crediting the source"
"to commit literary theft : present as new and original an idea or product derived from an existing source"
Oxford learner's dictionary:
"to copy another person’s ideas, words or work and pretend that they are your own"
Copying verbatim or nearly verbatim lines from a work of fiction and passing them off as your own is both plagiarism and copyright violation.
> copying stories beat for beat. That's much different than rehashing a few ideas from other works. It is not expected and it is very much considered plagiarism by fiction writers.
Some operas are Greek plays. There are rehashes of Faust, The Beggar's Opera is a copy of a play by Shakespeare, there are modern versions of Pride and Prejudice, and there are tons of stories that are copies of West Side Story, which is itself a copy of Romeo and Juliet, which I think comes from an even older story. These often don't come with any attribution at all, although the listener is sometimes expected to know that the original exists. They change the setting, but the plot is basically the same. Do you consider all of that to be plagiarism? Each of these would be a reason to call a paper plagiarism, but for books nobody bats an eye. This is because authors don't sell abstract ideas or a plot; they sell concrete stories.
The stories this author copied were either unpublished manuscripts she got access to in writers' groups or very obscure works that her readers were unlikely to have read.
Second, the examples you gave were extremely transformative. Just look at the differences between West Side Story and Romeo and Juliet. It's a musical, for goodness' sake. It subverts expectations by letting Maria live through it.
The writings at issue are short stories, so there’s less room for transformation in the first place. And there was clearly not even a strong attempt at transformation. The author even kept some of the same character names.
There was no attempt to subvert expectations, largely because the audience had no expectations to subvert, since they weren't aware of the originals.
>change settings
She didn’t even do that.
> for books nobody bats an eye
If a popular book were revealed to be a beat for beat remake of an obscure novel with the same setting, similar dialogue, some of the same character names, and few significant transformative elements, you can bet your life there would be a scandal.
In August, after many months of job hunting, I finally started a new role in the San Francisco Bay Area as a compiler engineer. It’s wild, I have dental insurance now.
What is a compiler engineer, anyway?
I imagine the audience of this post is both “people who want a job in compilers” and “people who are curious about my life”, so for those in the latter category: Wikipedia says that “a compiler is software that translates computer code written in one programming language into another language.” Basically, I’m a software engineer who works on programming languages. I don’t make programming languages—there is an entire theoretical subfield for that, and it is very cool; I implement them, which requires less math.
If you’re in the latter category, most of the technical details in this post will probably be boring and irrelevant to you, so you can skip to the end where I talk more about why I do compilers, what my life is like these days, etc.
(By the way, my debut novel from Simon & Schuster is out on November 11th. It’s a young adult romance set at a hackathon. You can preorder it here.)
Why this post exists
When preparing for compiler interviews, I discovered there was very little information online about how to break into this niche as a new or recent grad. There were plenty of YouTube videos on machine learning/full-stack/a newfangled thing called “AI Engineer”, but nobody was talking about compilers, and maybe somebody less delusional would’ve taken that as a sign to switch subfields, but alas, I happen to be delusional, so instead I fumbled around in the dark for the better part of a year and eventually landed a job.
So maybe this post can help other people who are interested in compilers. Or maybe they will read all of this and decide this is so not worth the effort, which I guess is also a way of helping them.
A little about me
In 2023, I graduated from MIT with a double major in math and computer science. Then I began a fifth-year research-based master’s degree, where I was in a compilers lab group. I dropped out after that fall, but sadly cannot claim any cool “MIT dropout” cred because it was grad school. From June to October 2024, I was a compiler engineer at a startup in New York City. In that role, I worked on extending a preexisting open-source programming language.
I am now a compiler engineer at a large, post-IPO tech company in the San Francisco Bay Area. I work on making programming languages run faster.
[here is a photo of me]
Who even hires junior-level compiler engineers?
According to Indeed, there are 116,000 job postings for “software engineer” and only 400 job postings for “compiler engineer”. It’s brutal out here.
[oh]
[oh no]
Compiler engineering roles are relatively rare due to limited demand (most companies don’t build their own compilers, and once a compiler is built, the work is mostly maintenance and optimization) and high barrier to entry. In the past year, I interviewed for twenty-ish opportunities.
Here are the different types of places that might hire a compiler engineer:
Startups: In my experience, startups are more amenable to hiring new grads for this position. My first role was at a startup, after all. Startups sometimes post their listings online; I had LinkedIn and Glassdoor alerts for the “compiler” keyword. Getting to unsubscribe from those alerts . . . that was a beautiful moment.
Larger tech companies: Automotive companies (Tesla, Waymo) have compiler positions, as well as hardware companies (Nvidia). While I do know junior compiler engineers at FAANG companies, they got those jobs by converting an internship into a full-time offer; they applied for “software engineering” and got placed on a compiler-specific team due to their skill set and interests.
Academia: Well, I was speaking to a professor who was hiring for his lab, and it would’ve been a cool chance to work on high-performance computing, but the opportunity disappeared after federal funding cuts. Oof.
Quant finance: I did not interview at any of these companies (I wanted to stay in the Bay Area, and these opportunities are predominantly in New York City and Chicago), but some of my classmates ended up at companies like Jane Street or Five Rings due to their interests in high-performance computing and low-level systems.
Open-source projects: I interviewed at one startup that makes an open-source library, but it wasn’t a good fit for my skills.
Okay, but how do you get these people to look at your resume?
If you are reading this because you’re a recent grad or graduating soon, there are many factors that are likely out of your control now, like your educational institution, your previous internship experience, whether you happen to be related to the CEO of a Fortune 500 tech company, etc.
But referrals help a lot! I ask plenty of people for referrals, even if we aren’t that close. I ask my friends if they know anybody at [insert company name] and if that person can refer me. I try to find a referral for every single job I apply to.
I also tell everyone that I like compilers (I even made my Twitter name “Rona likes compilers”, which got me an interview) because if they later hear about someone hiring for that role, they might think of me.
But of course, I am super-lucky in a lot of ways. I am an American citizen and don’t need visa sponsorship. My school name probably opens doors. I can’t pretend otherwise, that it was all because I hustled so hard and got so many referrals!
Later, I’ll talk more about the brutal job market and how one can stand out without a preexisting network.
What are the interviews like?
During my recruiting process, this is the #1 question I wished somebody online could’ve answered. Instead, this is what I found:
[:(]
Yeah, the Internet was not very helpful.
So hello, fellow desperate interviewee who stumbled upon this post while searching “what is on a compiler interview” or “what is on a compiler interview reddit” or “is it too late to change careers?”
Here are the types of interview questions I got:
Leetcode-style data structures and algorithms whiteboarding: I heard that tech recruiting was moving beyond these types of questions, but surprisingly, many still focused on implementing breadth-first search or a priority queue. Unlike typical software engineering interviews, I was expected to solve everything in C++.
Language design principles: One of the most interesting final-round questions I got was to invent a simple programming language with specific constraints. I had to write a grammar, ensuring that it was unambiguous.
Programming languages: I never got anything too deep (nothing about formal verification, for example) but was definitely asked “What is your favorite programming language and why” in multiple interviews. Turns out, if you say brainfuck, you will not get the job. Oops.
Intermediate representation: I had multiple interviews where I was expected to read x86 assembly (or a pseudo version of it) and optimize or translate it into another language. I also had a take-home assignment that was in MLIR.
Optimization passes: The aforementioned MLIR take-home assignment wanted me to write an optimization pass for algebraic simplification; other interviews asked me to write optimization passes for constant propagation and dead code elimination (a toy sketch of such a pass appears after this list).
Compiler fundamentals: I was asked to explain the different parts of a compiler, different compiler optimizations, static vs. dynamic compilers, etc.
Graph theory: This might’ve been due to my resume (I wrote a graph theory paper in undergrad), but I got some questions about graph-based representations in compilers, like control flow graphs and register allocation.
Other low-level topics: I was asked about deadlock, race conditions, special-purpose registers, instruction pipelines, memory allocation, binary representation and binary arithmetic, garbage collection, probably other things I’m forgetting right now.
Behavioral: The main question lobbed was “why do you want to do compilers” (which maybe you’re also wondering after reading this list); I’ll answer that later.
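To make the "optimization passes" item above concrete, here is a toy sketch of constant folding followed by dead code elimination over an invented straight-line three-address IR. This is not any company's actual take-home and not MLIR; the Instr struct, the optimize function, and the example program are all made up for illustration (compile with -std=c++17 or later):

```cpp
#include <cctype>
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Toy straight-line three-address IR: dst = lhs op rhs, where operands are
// integer literals or names of earlier results. A return uses only lhs.
struct Instr {
  std::string dst, lhs, op, rhs;
  bool isReturn = false;
};

// An operand is constant if it is a literal, or a name that an earlier
// instruction has already been folded to a constant.
static std::optional<long> asConst(const std::string &s,
                                   const std::map<std::string, long> &env) {
  if (!s.empty() && (std::isdigit((unsigned char)s[0]) || s[0] == '-'))
    return std::stol(s);
  auto it = env.find(s);
  if (it == env.end()) return std::nullopt;
  return it->second;
}

std::vector<Instr> optimize(std::vector<Instr> prog) {
  // Forward pass: constant folding. Rewrite any instruction whose operands
  // are all known constants into "dst = <literal>".
  std::map<std::string, long> env;
  for (Instr &I : prog) {
    if (I.isReturn) continue;
    auto a = asConst(I.lhs, env), b = asConst(I.rhs, env);
    if (a && b) {
      long v = I.op == "+" ? *a + *b
             : I.op == "*" ? *a * *b
                           : *a - *b;  // toy: only +, *, - supported
      env[I.dst] = v;
      I = {I.dst, std::to_string(v), "const", ""};
    }
  }
  // Backward pass: dead code elimination. Keep the return plus anything
  // whose result is used by an instruction we already decided to keep.
  std::map<std::string, bool> live;
  std::vector<Instr> kept;
  for (auto it = prog.rbegin(); it != prog.rend(); ++it) {
    if (it->isReturn || live[it->dst]) {
      live[it->lhs] = live[it->rhs] = true;
      kept.push_back(*it);
    }  // otherwise the instruction is dead: drop it
  }
  return {kept.rbegin(), kept.rend()};
}

int main() {
  std::vector<Instr> prog = {
      {"t0", "2", "+", "3"},     // foldable
      {"t1", "t0", "*", "4"},    // foldable once t0 is known
      {"t2", "x", "+", "1"},     // not foldable, and never used
      {"", "t1", "", "", true},  // ret t1
  };
  for (const Instr &I : optimize(prog))
    std::cout << (I.isReturn
                      ? "ret " + I.lhs
                      : I.dst + " = " + I.lhs +
                            (I.op == "const" ? "" : " " + I.op + " " + I.rhs))
              << "\n";
}
```

On the example program, the folding pass turns t0 and t1 into constants, and the elimination pass then drops t0 and t2 because nothing uses them, leaving just t1 = 20 and the return. The real interview versions worked on actual IR, but the shape of the problem was the same.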
How do I prepare for all that?
I mostly have my MIT education to thank, which is probably an annoying answer for everybody who didn’t go there, but luckily, OCW has these classes available online for free!
The classes that were most helpful:
Computation Structures, which I also TA’d during my master’s program. Taught me many low-level fundamentals like pipelining, assembly, binary arithmetic.
Dynamic Computer Language Engineering, which does not seem to be available online, but its static counterpart is here. I took this class before I had taken the introductory programming course at MIT and that was like putting on your shoes before your socks, except if your shoes were also on fire. However, it taught me C++, what a compiler was, how to work with a huge codebase, and that mixing Red Bull and coffee while pulling an all-nighter is not the brightest idea.
Performance Engineering taught me the lion’s share of the above list and is the main reason I got my current position. Fun coincidence: for one of the jobs I interviewed for, they ended up going with somebody else—turns out the successful candidate was one of my project partners from this class.
Theory of Computation was not as helpful—it is a math course—but it helped me with that final round where I had to write my own grammar. It’s great for getting you to think about what a programming language needs to achieve on a mathematical level.
I also went through Cornell’s Advanced Compilers course, which was quite fun but perhaps a tad heavy-duty for the purpose of interviewing.
Things I would’ve done differently
I didn’t have a mentor in this space; I probably should’ve done more LinkedIn outreach to find people who had the jobs I wanted.
As part of my prep, I read several books, like Engineering a Compiler by Keith D. Cooper and Linda Torczon and the famous Dragon Book (Compilers: Principles, Techniques, and Tools). They were engaging, but they provided high-level overviews that I already knew, so weren’t that helpful for someone who had already taken the classes mentioned above.
I didn’t write down the interview questions I got, mostly because interviews are like a black hole for me and once they’re over I want to throw away my brain. That’s pretty silly, though. I should’ve recorded the questions I got, so I could go back and review.
I’m not involved in any open-source projects, but they seem like a fantastic way of learning more about this field and also meeting people with shared interests. I did look into Carbon and Mojo but didn’t end up making contributions.
Okay, so why compilers?
Here is the answer I give when an interviewer asks me so why do you want to work on compilers:
I was initially a math major, and I thought I wanted to do a math PhD. I went to an REU (a summer research program for undergrads) and wrote a paper. But I decided academia isn’t for me—I wanted to do something that felt more immediately impactful, so I added computer science as my second major.
When I started studying computer science, I was really attracted to low-level programming because, like math, it felt like reinventing the whole world from first principles. Here are these axioms. You can use them to build the whole universe. In contrast, I didn’t love the empirical nature of machine learning.
That’s my interview-ready answer. But I think I stumbled upon compilers with some serendipity, too. There was the foundational Computation Structures course, where I spent a lot of time going to office hours because my friends happened to be there, and where I spent a lot of time grinding on the final design project (which required low-level optimization) because my friends and I were good-naturedly competing with each other. There was the fact that, at MIT, this niche had a lot of people whom I liked. After all, you have to be a little masochistic to study compilers when other subfields have way more money and prestige.
About that job market
Look, I know the tech job market is brutal right now. I lurk r/csmajors. Also, I grew up in Oregon, and my friends back home (who attended Oregon State or other public, non-target universities) have applied to hundreds of jobs only to get maybe four interviews.
So how the hell does anybody get a job?
This is general advice for non-compilers people, too: Be resourceful and stand out. Get involved in open-source communities, leverage social media, make use of your university resources if you are still in school (even if that means starting a club that nobody attends, at least that demonstrates you’re trying). Meet people. There are reading groups (my friend Eric runs a systems group in NYC; I used to go all the time when it was held in Cambridge). I was seriously considering starting a compilers YouTube channel even though I’m awkward in front of the camera.
I don’t know if this will get you a job. But it’ll certainly improve your chances.
Good luck!
Before the startup I worked at in 2024, I didn’t have any industry experience in compilers. My internships were all full-stack web development. I got pretty lucky that my first full-time job took a chance on me, and even though it didn’t work out with them, I’m really grateful.
During my 2025 recruiting cycle, I applied to a compiler position that paid $28/hr (yes, far above California’s minimum wage, but below market rate for a software engineer) but required C++ experience, knowledge of deep learning, and LLVM fundamentals. And then, after making it to the final round, I didn’t even get that job. It went to a PhD student.
Also, at one point, I made it to the final round for a position in Shanghai, and I was truly considering moving to China even though my Mandarin ability means I would only be able to fluently converse with five-year-olds.
It took a long time—ten months and so many interviews—but I did end up landing a job as a compiler engineer. Now I spend all day thinking about how to make programs run milliseconds faster. It’s awesome.
Also, if you work in compilers, please say hi! My email is rona at mit dot edu.
(I also recently moved to Palo Alto and don’t know many people here, so if you’re in the area, let’s be friends!)
Okay, shameless plug one last time: my debut novel from Simon & Schuster is out November 11th. It’s titled You Had Me at Hello World. You can get it here.
I’ll end with an excerpt from said book, coding-related to match the theme of this post:
[by the way did you know you can buy the book here]