You then implement workers in your language of choice and subscribe to queues.
Very interesting though; the article mentioned a few things I hadn't considered before, like shared access to one database from multiple (different) apps.
I wonder how database schema state is handled in a case like that. And CI/CD.
I don't see the point of TypeScript either; I can make the LLM output JavaScript, and the tokens saved by not having to add types can be used to write additional tests...
The aesthetics or safety features of the languages no longer matter IMO. Succinctness, functionality and popularity of the language are now much more important factors.
This is why it's so important to do lots of engineering before writing the first line of code on a project. It helps keep you from choosing a tool set or architecture out of preference and keeps you honest about the capabilities you need and how your system should be organized.
Choosing a single tool that tries to solve every single problem can lead to its own problems.
Why? Because my app is built in Elixir, and right now I’m also using a Python app that is open source, but I really just need a small part of it. I don’t wanna rewrite everything in Elixir because, while it’s small, I expect it to change over time (it basically fetches a lot of data sources) and it will be a pain to keep rewriting it when the data collection needs to change (over 100 different sources). Right now I run the Python app as an API, but it’s just so overkill and harder to manage vs just handling everything except the actual data collection in Elixir, where I am already using Oban.
I can't say if it works better with other languages, but I can definitely say both Opus and Codex work really well with Elixir. I work on a fairly large application and they consistently produce well structured working code, and are able to review existing code to find issues that are very easy to miss.
The LLM needs guidance around general patterns, e.g. "Let's use a state machine to implement this functionality" but it writes code that uses language idioms, leverages immutability and concurrency, and generally speaking it's much better than any first pass that I would manually do.
I have my ethical concerns, but it would be foolish of me to state that it works poorly - if anything it makes me question my own abilities and focus in comparison (which is a whole different topic).
Not my experience at all. The most important factor is simplicity and clarity. If an LLM can find the pattern, it can replicate that pattern.
Language matters to the extent it encourages/forces clear patterns. A language with more examples, shorter tokens, popularity, etc. doesn't matter at all if the codebase is a mess.
Functional languages like Elixir make it very easy to build highly structured applications. Each fn takes in a thing and returns another. Side effects? What side effects? LLMs can follow this function composition pattern all day long. There's less complexity, objectively.
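To illustrate (a made-up sketch, not from any particular codebase), this is the kind of pure pipeline an LLM can pattern-match reliably:
def summarize(rows) do
  # Each step takes a value and returns a new one; no hidden state anywhere.
  rows
  |> Enum.reject(&is_nil/1)
  |> Enum.group_by(& &1.category)
  |> Map.new(fn {category, items} -> {category, length(items)} end)
end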
But take languages that are less disciplined. Throw in arbitrary side effects and hidden control flow and mutable state ... the LLM will fail to find an obviously correct pattern and guess wildly. In practice, this makes logical bugs much more likely. Millions of examples don't help if your codebase is a swamp. And languages without said discipline often end up in a swamp.
No. I would argue that popularity per se is irrelevant: if there are a billion examples of crap code, the LLMs learn crap code. Conversely, we know that as few as 250 documents can poison an LLM independent of model size. [Cite Anthropic paper here].
The most important thing is to conserve context. Succinctness is not really what you want, because most context is burned on thinking and tool calls (I think) and not codegen.
Here is what I think is not important: strong typing; it requires a tool call anyway to fetch the type.
Here is what I think is important:
- fewer footguns
- great testing (and great testing examples)
- strong language conventions (local indicators for types, argument order conventions, etc)
- no weird shit like __init__.py that could do literally anything invisible to the standard code flow
My reading of this is it more or less allows you to use Postgres (which you're likely already using as your DB) for the task orchestration backend. And it comes with a cool UI.
Furthermore, it's actually kind of annoying that the LLMs are not better than us, and still benefit from having code properly typed, well-architected, and split into modules/files. I was lamenting this fact the other day; the only reason we moved away from Assembly and BASIC, using GOTOs in a single huge file, was that we humans needed the organization to help us maintain context. Turns out, because of how they're trained, so do the LLMs.
So TypeScript types and tests actually do help a lot, simply because they're deterministic guardrails that the LLM can use to check its work and be steered to producing code that actually works.
At my work we run a fairly large webshop and have a ridiculous number of jobs running at all times. At this point most are running in Sidekiq, but a sizeable portion remain in Resque simply because it does just that: start a process.
Resque workers start by creating a fork, and that becomes the actual worker.
So when you allocate half your available RAM for the job, it's all discarded and returned to the OS, which is FANTASTIC.
Sidekiq and most other job queues use threads, which is great, but all RAM allocated to the process stays allocated, and generally unused. It's especially bad if you're using plain malloc. We used jemalloc for a while, which helped since it allocates memory better for multithreaded applications, but easiest is to just create a process.
I don't know how memory intensive ML is, what generally screwed us over was image processing (ImageMagick and its many memory leaks) and... large CSV files. Yeah come to think of it, you made an excellent architectural choice.
https://erlangforums.com/t/hornbeam-wsgi-asgi-server-for-run... https://github.com/benoitc/hornbeam
It's a very different approach than ex_cmd, as it's not really focused on the "streaming data" use case. Mine is a very command/reply oriented approach, though the commands can flow both ways (calling BEAM modules from Python). The assumption is that big data is passed around out of band; I may have to revisit that.
I can't guess. Perl was once the "800-pound gorilla" of web development, but that chapter has long been closed. Python on the other hand has only gained traction since that time.
You may run into some issues with Docker and native deps once you get to production. Don’t forget to cache the bumblebee files.
See: autocodebench
https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/tree/ma...
I think strongly typed codebases sometimes have bad habits that "you can get away with" because of the typing and feedback loops, and the LLM has learned this.
Specific studies, like the one quoted, are a long way from original real-world problems.
LLMs absolutely understand and write good Elixir. I've done complex OTP and distributed work in tandem with Sonnet/Opus and they understand it well and happily keep up. All the Elixir constructs distinct from Ruby are well applied: pipes, multiple function clauses, pattern matching, etc.
I can say that anecdotally, CC/Codex are significantly more accurate and faster working with our 250K lines of Elixir than our 25K lines of JS (though not TypeScript).
(And Elixir's relationship to Ruby is pretty overstated, IMO. There's definitely inspiration, but the OO-to-FP jump makes the differences pretty extreme.)
> Elixir's relationship to Ruby is pretty overstated
Perhaps I actually am overthinking this. Elixir has probably diverged enough from Ruby (e.g. defmodule, pipe operators, :atom syntax) for LLMs to notice the difference between the two. It does open the question, though, of how an LLM actually recognises the difference between code blocks in its training data.
There are probably many more programming languages where similarities exist.
That surprises me :)
From my time doing Ruby (admittedly a few years back), I found libraries were very well documented and tested. But put into the context of then (not now), documentation and testing weren't that popular amongst other programming languages. Ruby was definitely one of the drivers for the general adoption of TDD principles, for example.
I'd layer in a few more:
* Largely stable and unchanged language throughout its whole existence
* Authorship is largely senior engineers, so the code you train on is high quality
* Relatively low number of abstractions in comparison to other languages, meaning there are fewer ways to do one thing
* Functional programming style pushes down hidden state, which lowers the complexity of understanding how a slice of a system works, and the likelihood you introduce a bug
I used to frequently find myself reading the source code of popular libraries or prying into them at runtime. There's also no central place or format for documentation in ruby. Yes rubydoc.info exists, but it's sort of an afterthought. Sidekiq uses a github wiki, Nokogiri has a dedicated site, Rails has a dedicated site, Ruby itself has yet another site. Some use RDoc, some don't. Or look at Devise https://rubydoc.info/github/heartcombo/devise/main/frames, there's simply nothing documented for most of the classes, and good luck finding in the docs where `before_action :authenticate_user!` comes from.
What choices lay before you when your Elixir app needs functionality that only exists, or is more mature, in Python? There are machine learning models, PDF rendering libraries, and audio/video editing tools without an Elixir equivalent (yet). You could piece together some HTTP calls, or bring in a message queue...but there's a simpler path through Oban.
Whether you're enabling disparate teams to collaborate, gradually migrating from one language to another, or leveraging packages that are lacking in one ecosystem, having a mechanism to transparently exchange durable jobs between Elixir and Python opens up new possibilities.
On that tip, let's build a small example to demonstrate how trivial bridging can be. We'll call it "Badge Forge".
"Badge Forge," like "Fire Saga" before it, is a pair of nouns that barely describes what our demo app does. But, it's balanced and why hold back on the whimsy?
More concretely, we're building a micro app that prints conference badges. The actual PDF generation happens through WeasyPrint, a Python library that turns HTML and CSS into print-ready documents. It's mature and easy to use. For the purpose of this demo, we'll pretend that running ChromicPDF is unpalatable and Typst isn't available.
There's no web framework involved, just command-line output and job processing. Don't fret, we'll bring in some visualization later.
Some say you're cra-zay for sharing a database between applications. We say you're already willing to share a message queue, and now the database is your task broker, so why not? It's happening.
Oban for Python was designed for interop with Elixir from the beginning. Both libraries read and write to the same oban_jobs table, with job args stored as JSON, so they're fully language-agnostic. When an Elixir app enqueues a job destined for a Python worker (or vice versa), it simply writes a row. The receiving side picks it up based on the queue name, processes it, and updates the status. That's the whole mechanism:

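To make "writes a row" concrete, here's a rough sketch of the fields a cross-language job carries (column names follow the oban_jobs schema; the values are illustrative):
# An illustrative job row, expressed as an Elixir map. Any language that
# can write JSON into these columns can participate.
%{
  worker: "badge_forge.generator.GenerateBadge", # fully qualified Python worker
  queue: "badges",                               # the queue the Python side polls
  args: %{"name" => "Some Attendee"},            # JSON args, fully language-agnostic
  state: "available",                            # flips to "completed" once processed
  max_attempts: 20
}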
Each side maintains its own cluster leadership, so an Elixir node and a Python process won't compete for leader responsibilities. They coordinate through the jobs table, but take care of business independently.
Both sides can also exchange PubSub notifications through Postgres for real-time coordination. The importance of that tidbit will become clear soon enough.
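In API terms it's only a couple of calls; here's a minimal sketch from the Elixir side (the channel name is made up):
# Listen on a channel from any connected node...
Oban.Notifier.listen([:badge_events])

# ...then broadcast a payload that every listener receives through Postgres.
Oban.Notifier.notify(:badge_events, %{batch: "complete"})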
This is more of a demonstration than a tutorial. We don't expect you to build along, but we hope you'll see how little code it takes to form a bridge.
With a wee config in place and both apps pointing at the same database, we can start generating badges.
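For reference, the Elixir half of that config might look something like this (a sketch; assume BadgeForge.Repo is the app's Ecto repo, and the queue names match the demo):
# config/config.exs
config :badge_forge, Oban,
  repo: BadgeForge.Repo,
  notifier: Oban.Notifiers.Postgres,
  queues: [printing: 10]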
Generation starts on the Elixir side. This function enqueues a batch of (fake) jobs destined for the Python worker:
def enqueue_batch(count \\ 100) do
  generate = fn _ ->
    args = %{
      id: Ecto.UUID.generate(),
      name: fake_name(),
      company: fake_company(),
      type: Enum.random(~w(attendee speaker sponsor organizer))
    }

    Oban.Job.new(args, worker: "badge_forge.generator.GenerateBadge", queue: :badges)
  end

  1..count
  |> Enum.map(generate)
  |> Oban.insert_all()
end
Notice the worker name is a string, "badge_forge.generator.GenerateBadge", matching the Python worker's fully qualified name. The job lands in the badges queue, where a Python worker is listening.
The Python worker receives badge requests and generates PDFs using WeasyPrint:
from oban import Job, Oban, worker
from weasyprint import HTML

# BADGES_DIR and render_badge_html are defined elsewhere in the demo
@worker(max_attempts=5, queue="badges")
class GenerateBadge:
    async def process(self, job: Job) -> None:
        badge_id = job.args["id"]
        name = job.args["name"]
        html = render_badge_html(name, job.args["company"], job.args["type"])
        path = BADGES_DIR / f"{name}.pdf"

        # Generate the pdf content
        HTML(string=html).write_pdf(path)

        # Construct a confirmation job manually
        job = Job(
            args={"id": badge_id, "name": name, "path": str(path)},
            queue="printing",
            worker="BadgeForge.PrintCenter",
        )

        # Use the active Oban instance and enqueue the job
        await Oban.get_instance().enqueue(job)
When a job arrives, it pulls the attendee info from the args, renders an HTML template, and writes the PDF to disk. After completion, it enqueues a confirmation job back to Elixir.
The Elixir side listens for confirmations and prints the result:
defmodule BadgeForge.PrintCenter do
  use Oban.Worker, queue: :printing

  require Logger

  @impl Oban.Worker
  def perform(%Job{args: %{"id" => id, "name" => name, "path" => path}}) do
    Logger.info("Printing badge #{id} for #{name}: #{path}...")

    do_actual_printing_here(...)

    :ok
  end
end
With that, there's two-way communication through the jobs table.
To print conference badges you need a conference. You should have a conference. We're printing badges for the fictional "Oban Conf" being held this year in Edinburgh. It will be both hydrating and engaging. Kicking off a batch of ten jobs from Elixir:
iex> BadgeForge.enqueue_batch(10)
:ok
On the Python side, we see automatic logging for each job with output like this (output has been prettified):
[INFO] oban: {
  "id": 14,
  "worker": "badge_forge.generator.GenerateBadge",
  "queue": "badges",
  "attempt": 1,
  "max_attempts": 20,
  "args": {
    "id": "7bfb7c39-c354-4cce-ad5b-f1be2814b17e",
    "name": "Alasdair Fraser",
    "type": "speaker",
    "company": "Wavelength Tech"
  },
  "meta": {},
  "tags": [],
  "event": "oban.job.stop",
  "state": "completed",
  "duration": 2.51,
  "queue_time": 5.45
}
The job completed successfully, and back in the Elixir app, we see that the print completed:
[info] Printing badge 7bfb7c39 for Alasdair Fraser: /some/path...
The output looks something like this:

Apologies to any "Alasdair Frasers" out there, your name was pulled from the nether and there isn't a real conference. As consolation, if you contact us, you have stickers coming.
Seeing jobs in terminal logs is fine, but watching them flow through a dashboard is far more satisfying. We recently shipped a standalone Oban Web Docker image for situations like this, where you want monitoring without mounting it in your app. It's also useful when your primary app is actually Python...
With Docker running, point the DATABASE_URL at your Oban-ified database and pull the image:
docker run -d \
-e DATABASE_URL="postgres://user:pass@host.docker.internal:5432/badge_forge_dev" \
-p 4000:4000 \
ghcr.io/oban-bg/oban-dash
That starts Oban Web running in the background to monitor jobs from all connected Oban instances. Queue activity and metrics are exchanged via PubSub, so the Web instance can store them for visualization. Trigger a few (hundred) jobs, navigate to the dashboard on localhost:4000, and look at 'em roll:
Badge Forge is whimsical, some say "useless", but the pattern is practical! When you need tools that are stronger in one ecosystem, you can bridge it. This goes both ways. A Python app can reach for Elixir's strengths just as easily.
Check out the full demo code for the boilerplate and config we glossed over here.
As usual, if you have any questions or comments, ask in the Elixir Forum. For future announcements and insight into what we're working on next, subscribe to our newsletter.