The UI toolkits in game engines usually suck hard, so here they started from a good UI toolkit and made it possible to make relatively performant games.
There's more info at https://www.reddit.com/r/programming/comments/1r0lx9g/fluori...
There was a passing comment about "when we open up the GitHub repository" in the talk. So it's not open yet, but they've suggested it might be in the future.
Toyota, assuming they move forward with this, might even become the main corporate sponsor, since Google appears to be uninterested.
That said, this is cool and I would have probably celebrated a similarly fun project in their shoes. Perhaps the real accomplishment here is getting Toyota to employ you to build a new, niche game engine.
I recently (as in, last night) added WebSockets to my backend, push notifications to my iOS frontend, and notification banners to the webapp. It all kinda just works. The biggest issues have been version-matching across Django/Gunicorn/Amazon Linux images.
They have a fun demo of a 3D shooter in it.
Funny how “game engines” are now car parts in 2026.
Can I just have an electric car that’s a car and nothing else? Seats, wheels, pedals, mirrors, real buttons, no displays, just an aux jack. I’d buy it; hell, I might even take the risk and pre-order it.
Makes me wonder if you might eventually see the OG Flutter team move to a shop like Toyota, the same way the original React team moved to Vercel. It's nice to see open source projects be portable beyond the companies that instigated them.
Aside: GL is still a good practical choice for games built by small teams.
They already tried other engines, such as Unity. The team didn't just go off and build something without trying existing solutions first.
https://github.com/google/filament
but if they're targeting embedded systems, maybe they haven't prioritized a public web demo yet. If the bulk of the project is actually in C++, making a web demo probably involves a whole WASM side-quest. I suspect there's a different amount of friction between "I wanna open source this cool project we're doing" and "I wanna build a rendering target we won't use to make the README look better."
One of the example uses given in the talk is 3D tutorials, which I could imagine being handy. Not sure I'd want to click on the car parts for it but with the correct affordances I could imagine a potentially useful interface.
In the US, no. Backup cameras are required by federal law as of 2018. The intent of the law was to reduce the number of children killed by being backed over because the driver couldn't see them behind the car.
It's like, at least one exists in Japan, on the used market even, if you absolutely have to have one, I guess.
0: https://www.honda.co.jp/N-ONE-e/webcatalog/design/images/e_g...
1: https://driver-web.jp/articles/gallery/41396/36291
2: https://www.carsensor.net/usedcar/detail/AU6687733258/index.... | https://archive.is/gbBzc
Seems almost inevitable. Game engines end up supporting user interface elements and text with translations, but with an emphasis on simplicity, performance, and robustness. Many currently trending user interface stacks readily generate bursts of complexity, have poor performance even with simple usage, and are buggy and crash prone.
Backup cameras are required for new vehicles in a lot of markets: EU, Canada, Japan, and more.
So it's not just a US requirement.
It doesn't need to be a giant infotainment display.
Wish they would do that for all the trucks with 5ft high hoods with no cameras.
The problem with modern cars is that everything is so heavily integrated and proprietary. If I swapped out the OEM touchscreen, apparently I would also lose the ability to set the clock on my instrument cluster. Now that this has become normalized, automakers have realized they can lock Android Auto/CarPlay behind a paywall and you’ll have no recourse but to buy one of those tablets that you stick on your dashboard and plug into the aux port. If your car still has an aux port.
I’m excited for the Slate, but unfortunately I have the feeling that the people who buy new cars aren’t the same people that want the Slate. The rest of us who keep our 20+ year old vehicles reliably plugging along don’t make any money for automakers.
I've only ever joined teams with large, old codebases where most of the code was written by people who haven't been at the company in years, and my coworkers commit giant changes that would take me a while to understand, so genAI feels pretty standard to me.
I loved the Viper, but its spartan interior and features list were to its detriment.
I don't know how bloated Godot is, but AFAIK libgodot development started as part of Migeran's automotive AR HUD prototype so I'm surprised to hear it has poor startup time for a car.
"V8"
"Which kind of V8?"
The problem is, unless you're ready to waste hours prompting to get something exactly how you want it instead of spending those few minutes doing it yourself, you start to get complacent about whatever the LLM generated for you.
IMO it feels like a handicap: there's literally nothing you can do about the hundreds or thousands of lines of code that have already been generated, so you run into the sunk cost fallacy really fast. No matter what people say about building "hundreds of versions", you're spending so much time prompting or spec writing that it might not feel worth getting things exactly right, in case it makes you start all over again.
It's not as if things are instantaneous with the LLM either; it takes upwards of 20-30 minutes to "Ralph" through all of your requirements and build.
If you start some of it yourself first and have an idea about where things are supposed to go, it really helps your thinking process too; just letting it vibe fully in an empty directory leads to eventual sadness.
I've tried fixing some code manually and then re-running an agent, but it removed my fix.
Once you vibe code, you don't look at the code.
To have a decent travel experience in an EV you'd likely at least need this data ported out to your phone via an OBD adapter or CarPlay / Android Auto integration with an in-car infotainment display.
Bevy is the opposite of an old boring solution. It's a cool engine, but I imagine a manufacturer would like to have long-term support with 15-year timelines. Bevy doesn't offer that, and even trying to have that wouldn't be good for Bevy.
- fancy HDR rendering with reflection planes, atmospheric effects, tone mapping, camera effects, and all kinds of animations for doors opening, lights turning on/off, etc.
- content pipelines to get all this data from digital creation tools into packages deployable on target
When everything is said and done, this is the same bread and butter that game engines provide, so the industry has pushed to leverage them and spread into these markets. Both Unity and Epic have tried, though not without issues. The perk of not having to twist your body around while steering is also pretty nice.
It was a great example of the ridiculous expectations some of us Americans have about ridiculously huge vehicles.
Fluorite is the first console-grade game engine fully integrated with Flutter.
It reduces complexity by allowing you to write your game code directly in Dart and use all of its great developer tools. By using a FluoriteView widget you can add multiple simultaneous views of your 3D scene, as well as share state between game Entities and UI widgets - the Flutter way!
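Fluorite isn't public yet, so the exact API is unknown. As a rough sketch of the embedding pattern described above, here is a stub standing in for the FluoriteView widget named in the talk (its scene parameter is an assumption, not the actual API); the surrounding Flutter code is real:

    // Illustrative sketch only: there is no public FluoriteView to import
    // yet, so this stub stands in for the widget named above; the `scene`
    // parameter is an assumption. The surrounding Flutter code is real.
    import 'package:flutter/material.dart';

    class FluoriteView extends StatelessWidget {
      const FluoriteView({super.key, required this.scene});
      final String scene; // assumed parameter, for illustration only

      @override
      Widget build(BuildContext context) =>
          ColoredBox(color: Colors.black, child: Center(child: Text(scene)));
    }

    class GameScreen extends StatelessWidget {
      const GameScreen({super.key});

      @override
      Widget build(BuildContext context) {
        // The 3D view is just another widget, so regular Flutter UI can be
        // composited over it with a plain Stack.
        return Stack(
          children: const [
            FluoriteView(scene: 'assets/scenes/demo.glb'),
            Align(
              alignment: Alignment.topLeft,
              child: Padding(
                padding: EdgeInsets.all(16),
                child: Text('Score: 0',
                    style: TextStyle(color: Colors.white)),
              ),
            ),
          ],
        );
      }
    }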

At the heart of Fluorite lies a data-oriented ECS (Entity-Component-System) architecture. It's written in C++ to allow for maximum performance and targeted optimizations, yielding great performance on lower-end/embedded hardware. At the same time, it allows you to write game code using familiar high-level game APIs in Dart, making most of your game development knowledge transferable from other engines.
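Fluorite's own APIs aren't published, but the data-oriented ECS idea the paragraph refers to can be sketched in a few lines of plain Dart (generic names, not Fluorite's):

    // Minimal ECS sketch in plain Dart -- a generic illustration, not
    // Fluorite's actual API. Components are plain data; systems are
    // functions that run over every entity holding the components they need.
    class Position { double x, y; Position(this.x, this.y); }
    class Velocity { double dx, dy; Velocity(this.dx, this.dy); }

    class World {
      final positions = <int, Position>{};  // entity id -> component
      final velocities = <int, Velocity>{};
      int _nextId = 0;

      int spawn({Position? pos, Velocity? vel}) {
        final id = _nextId++;
        if (pos != null) positions[id] = pos;
        if (vel != null) velocities[id] = vel;
        return id;
      }
    }

    // A "system": pure logic over component data, run once per frame.
    void movementSystem(World world, double dt) {
      world.velocities.forEach((id, vel) {
        final pos = world.positions[id];
        if (pos == null) return;
        pos.x += vel.dx * dt;
        pos.y += vel.dy * dt;
      });
    }

    void main() {
      final world = World()..spawn(pos: Position(0, 0), vel: Velocity(1, 0));
      for (var frame = 0; frame < 3; frame++) {
        movementSystem(world, 1 / 60);
      }
      final p = world.positions.values.first;
      print('entity 0 at (${p.x}, ${p.y})'); // moved three frames along x
    }

Per the description above, in Fluorite the component storage and system scheduling live on the C++ side, with the Dart layer exposing high-level game APIs on top.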
This feature enables 3D Artists to define “clickable” zones directly in Blender, and to configure them to trigger specific events! Developers can then listen to onClick events with the specified tags to trigger all sorts of interactions! This simplifies the process of creating spatial 3D UI, enabling users to engage with objects and controls in a more intuitive way.
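The exact event API isn't public either; a hypothetical sketch of the tag-based dispatch described above (ClickEvent, onClick, and dispatch are names assumed from the description, not actual engine code) might look like this:

    // Hypothetical sketch of tag-based click dispatch. Fluorite's real API
    // is not public, so ClickEvent/onClick/dispatch here are assumptions
    // based on the description above, not actual engine code.
    class ClickEvent {
      final String tag; // tag assigned to the clickable zone in Blender
      const ClickEvent(this.tag);
    }

    typedef ClickHandler = void Function(ClickEvent event);

    class ClickRouter {
      final _handlers = <String, List<ClickHandler>>{};

      void onClick(String tag, ClickHandler handler) =>
          _handlers.putIfAbsent(tag, () => []).add(handler);

      // Called by the engine when a raycast hits a tagged zone.
      void dispatch(ClickEvent event) {
        for (final handler in _handlers[event.tag] ?? const <ClickHandler>[]) {
          handler(event);
        }
      }
    }

    void main() {
      final router = ClickRouter()
        ..onClick('door_front_left', (e) => print('open ${e.tag}'));
      router.dispatch(const ClickEvent('door_front_left'));
    }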
Powered by Google's Filament renderer, Fluorite leverages modern graphics APIs such as Vulkan to deliver stunning, hardware-accelerated visuals comparable to those found on gaming consoles. With support for physically-accurate lighting and assets, post-processing effects, and custom shaders, developers can create visually rich and captivating environments.
Thanks to its Flutter/Dart integration, Fluorite's scenes are enabled for Hot Reload! This allows developers to update their scenes and see the changes within just a couple frames. This significantly speeds up the development process, enabling rapid iteration and testing of game mechanics, assets, and code.

For me, GUI and Web code falls into "throwaway". I'm trying to do something else and the GUI code development is mostly in my way. GUI (especially phone) and Web programming knowledge has a half-life measured in months and, since I don't track them, my knowledge is always out-of-date. Any GUI framework is likely to have a paroxysm and completely rewrite itself in between points when I look at it, and an LLM will almost always beat me at that conversion. Generating my GUI by creating an English description and letting an AI convert that to "GUI Flavour of the Day(tm)" is my least friction path.
This should be completely unsurprising to everybody. GUI programming is such a pain in the ass that we have collectively adopted things like Electron and TUIs. The fact that most programmers hate GUI programming and will embrace anything to avoid that unwelcome task makes it a pretty obvious application for AI.
The way I use LLMs is that I design the main data structures, function interfaces, etc., and ask the LLM to fill them in. Also test cases and assertions.
To the degree that those same people are now writing 10-100x more code...that is scary, but the doom and gloom is pretty tiring.
Truly one of the statements of all time. I hope you look at the code, even frontier agents make serious lapses in "judgement".
And therein lies the problem
Any engineer worth their salt will always try to avoid adding code. Any amount of code you add to a system, whether it's written by you or an all-knowing AI, is a liability. If you spend the majority of your work day writing code, it's understandable to want to rely heavily on LLMs.
Where I'd like people to draw the line is at not knowing at all what the X thousand lines of code are doing.
In my career, I have never been in a situation where my problems could be solved by piecing together code from SO. When I say "spend those few minutes doing it yourself" I am specifically talking about UI, but it applies to other situations too.
For instance, say you had to change your UI layout to something specific. You could try to collect screenshots and articulate what you need to see changed, but if you weren't clear enough, that cycle of prompting with the AI would waste your time when you could've just made the change yourself.
There are many instances where the latter option is going to be faster and more accurate. This would only be possible if you had some idea of your code base.
When you've let an agent take full control of your codebase, you will have to sink time into understanding it. Since clearly everyone is too busy for that, you get stuck in a loop: the only way to make those "last 10%" changes is via the agent.
You can already see people running into these issues: they have a spec in mind, they work on the spec with an LLM, and the spec has stuff added to it that wasn't what they were expecting.
And again, I am not against LLMs, but I can be against how they're being used. You write some stuff down, maybe have the LLM scaffold some skeleton for you. You could even discuss with the LLM what classes should be named, what they should do, etc. Just be in the process so the entire codebase isn't 200% foreign to you by the time it's done.
Also, I am no one's mother; everyone has free will and can do whatever they'd like. If you don't think you have better things to do than produce 3-5 pieces of abandonware every weekend, then good for you.
There was the chip shortage during COVID which held car production back because the automakers couldn't source their chips fast enough. I am waiting to see if the current supply issues with RAM modules will produce a similar effect.
Every single car I have been in in the last 5 years or so has Bluetooth. No need for aux ports in this day and age, especially when devices don't have headphone jacks anymore.
Are you stuck in the 2000's?
Blaming trucks and SUVs for everything is a favorite pastime of internet commenters, but all vehicles benefit from backup cameras and collision detection sensors.
It's sad to think we may be going backwards and introducing more black boxes, our own apps.
To the AI optimist, the idea of reading code line by line will seem as antiquated as perusing CPU registers: something you do when needed, but typically you can just trust your tooling to do the right thing.
I wouldn’t say I am in that camp, but that’s one thought on the matter. That natural language becomes “the code” and the actual code becomes “machine language”.
Crazy to think had the federal subsidy not been cut, that car would be possible to get for around $15k. Unheard of.
I've worked places where a junior wrote bad code that was accepted because the QA tests passed.
I even had a situation in production where we had memory leaks because nobody tried to use the app for more than 20 minutes, even though we knew it is used 24/7.
We aim for 99% quality when no one wants it. No one wants to pay for it.
GitHub is down to one nine and I haven't heard of them losing many clients; people just cope.
We've reached a level where we have so much ram that we find garbage collection and immutability normal, even desired.
We are wasting bandwidth by using JSON instead of binary because it's easier to read when we have to debug, because it's easier to debug while running than to think before coding.
Ol' Dirty Bastard? I jest, but I think the theory behind wanting an 'On-board Diagnostics' [1] connection would be to get data from the vehicle. You can get cheap Bluetooth OBD-II adapters to transmit that info to your phone, though it's not a given. I don't know much about electric cars, but if you want your phone to know the fuel level in an ICE vehicle then you'd need this kind of connection.
Was there a single mass market consumer car sold in the United States in this millennium that didn’t already have processors and RAM in them?
I would be absolutely shocked if there was a single car for which the relatively recent backup camera requirement required them to introduce processors and RAM for the first time.
I have unusually good spatial skills. I have parallel parked and reverse parked perfectly every single time for over 5 years…
…but no matter what, I cannot see behind my bumper. No mirror on any car points there.
Call me old fashioned but in my opinion, processors/ram/chips/components are a good trade-off versus squished children
Bluetoothing to your car has, to me, the same energy as using "wireless" charging stands for your phone. You are just replacing a physical tether with a less efficient digital tether of higher complexity for no actual gain.
Offloading your thinking, typing all the garbled thoughts in your head with respect to a problem in a prompt and getting a coherent, tailored solution in almost an instant. A superpowered crutch that helps you coast through tiring work.
That crutch soon transforms into dependence and before you know it you start saying things like "Once you vibe code, you don't look at the code".
I have a 2016 vehicle with no console screen, and they have saved me from hitting all sorts of things; they are sensitive enough to detect minor obstacles like long grass.
Power windows are standard. 169hp. Automatic climate control, central locking and key fobs, Automatic emergency braking and other radar based features. Digital gauge cluster. Modern infotainment. Modern crash safety, which is really good compared to 20 years ago.
That's a lot of car for $10k in 1996 dollars.
That's ignoring the $3k in fees, taxes, and whatever scam the dealer runs.
The reason we don't see more of it is that selling one $23k Corolla to one value minded shopper can't make line go up as much as selling one $60k MEGATRUCK to one easily influenced shopper. The new car market is exclusively for people who buy new cars regularly, and are therefore willing to get very bad deals for cars. The market is driven by people who self select for bad ability to parse value.
It's so silly when they make some "Advanced Technology Package" with a VGA camera and a 2-inches-bigger infotainment screen that's still worse than junk from Aliexpress, and charge $3000 extra for it.
I know it's just a profit-maximizing market segmentation, but I like to imagine their Nokia-loving CEO has just seen an iPad for the first time.
You shouldn’t need any dedicated RAM. A decent microcontroller should be able to handle transcoding the output from the camera to the display and provide infotainment software that talks to the CANbus or Ethernet.
And the bare minimum is probably just a camera and a display.
Even buffering a full HD frame would only require a few megabytes.
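(For scale: an uncompressed 1920x1080 frame is about 4 MB at 2 bytes per pixel and about 8 MB at 32-bit RGBA, so even a couple of frame buffers stay in the low tens of megabytes.)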
Pretty sure the law doesn't require an Electron app running a VLM (yet) that would justify anything approaching gigabytes of RAM.
https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5406a2.htm
I suspect older children are more likely to be able to be aware of their surroundings and have better gross motor skills to react.
The US was ahead of the EU in requiring backup cameras on new vehicles.
The majority of pedestrian accidents aren't the kind a backup camera would prevent.
Are you just trying to turn this into a US vs EU argument?
Americans drive significantly more miles per year, and larger/more comfortable cars are in part needed because Americans spend far more time in their cars than Europeans.
Euro governments are also increasingly anti-car, which means citizens are losing their freedom to travel as they wish and are unreasonably taxed, policed, and treated like cash cows for the "privilege" of driving.
So pedestrian deaths would start rising again.
They might as well be complaining about the costs of a rear view mirror, it is nonsense from the start. If a $20 gadget breaks the bank on a $30,000 minimum vehicle, they are a shitty business to start with and we should all be clapping our hands when they go out of business.
So what microcontroller do you have in mind that can run a 1-2 megapixel screen on internal memory? I would have guessed that a separate ram chip would be cheaper.
There has been a real price decrease in small cars!
Most of my European friends brag about how they can get anywhere via train and how much more comfortable it is to travel that way. When I visit Europe I have to agree. Just haven't really seen this viewpoint, though I do think I would feel this way as an American if I moved to Europe to some extent (though I'd be extremely happy to have viable mass transit).
Funny how much the conversation diverges into commentary about someone not wanting a car shipped with an OS that captures telemetry of even the farts in the right back seat.