EDIT> don't ask how I came up with this quote
Seedance 2.0 and Kling 3 are regarded as the best closed-source video models we have. I've subscribed to a few AI video subreddits; the consensus at the moment is that they're good for anything but long-form videos with humans.
No surprise that we're very good at spotting even the most subtle differences when looking at other people.
Also, will this run on an RTX 4090 with 24GB of memory?
Thank you!
It's honestly impressive, on the surface. The visuals are gorgeous... but it's still empty. What makes a "World" a world is precisely its coherence. It's not about how it looks but rather how it "works". The plants in an ecosystem are a certain way because of the available resources, all the way down to forces like gravity. It doesn't just "look" like that. To echo Konrad Lorenz, a fish doesn't just swim in the water; rather, the fish IS an efficient representation of the water it lives within. In these "worlds" there is nothing happening. There is minimal superficial coherence, no logic, nothing.
The ultimate liminal spaces.
Everyone is right to be skeptical of this coming from a 2.8B model. Weights or it didn't happen.
I can't say I'm looking forward to an AI video future.
There’s no doubt they’re technically impressive, but what does one do with it?
I'm not a game developer myself, but some of my favorite games carry a deep sense of intentionality. For instance, there is typically not a single item misplaced in a FromSoftware game (or, more recently, Lies of P). Almost every object is placed intentionally.
Games which lack this intentionality often feel dead in contrast. You run into experiences which break immersion, or pull you out of the experience that the developer is trying to convey to you.
It's difficult for me to imagine world models getting to a place where this sort of intentionality is captured. The best frontier LLMs fail to do this in writing (all the time), and even in code, and the surface of experiences for those mediums often feels "smaller" than the user interaction profile of a video game.
It's not clear how these world models could be used modularly by humans hoping to develop intentional experiences. I don't know much about their usage, but LLMs are somewhat modular: they can produce text, humans can work on it, other LLMs can work on it. Is the same true for the video output here?
All this to say, I'm impressed with these world models, but similar to LLMs with writing, it's not really clear what we are building towards. The ability to create less satisfying, less humane experiences faster? Perhaps the most immediate benefit is the ability for robotic systems to simulate actions (by conjuring a world and imagining the implications).
In general, I have the feeling that we are hurtling towards a world with less intentionality behind all the things we experience. Everything becomes impersonal, more noisy, etc.
I've been doing some content work with people at https://industrialallusions.com
In this case, what looks interesting is the one-minute coherence and the massive speedup: they claim 36x over open models with similar capabilities. You can tell they aren't aiming for state-of-the-art visuals; the output quality looks very SD 1.5.
The 'Refiner' seems to do the opposite, if the examples are representative: in every case the first-stage images look better than the 'refined' ones. Less clutter, more realism, less 'cowbell', for those who know the phrase.
It is inevitable that learned simulators will replace hand-coded simulators, as it is a straightforward application of the Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
By enabling general purpose robotics, world models will be one of the most useful inventions of all time. For examples of what I'm talking about in current research, check:
Dreamer 4: https://danijar.com/project/dreamer4/
DreamDojo: https://arxiv.org/abs/2602.06949
Tesla's world model: https://www.youtube.com/watch?v=LFh9GAzHg1c
Waymo's world model: https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...
I'm curious if a younger me would have adapted much faster.
This one is probably too small to be useful for that, and not diverse enough? But I could be wrong.
> A dedicated 17B long-video refiner sharpens texture, motion, and late-window quality on top of the long-rollout backbone.
It’s been my belief for several years that this is how the future of games will be constructed: data in the background; a game engine for rules application, physics execution, orchestration, and maybe low-poly rendering; an AI world model taking low-resolution input and generating customized visuals/effects/textures/everything, even camera location, but still constrained by concrete rules in the game engine.
Perhaps one day it will all be handled by AI, but the above seems much more realistic and achievable than expecting AI to do all of these things, at once, correctly, every frame.
However, there are a few promising markets, assuming WMs continue to get better and cheaper:
1. Robotics training / evaluation: modern end-to-end (sensors-to-control) robot policies require simulators that are almost indistinguishable from reality. If your sim is distinguishable from reality, the evaluation metrics you get from sim don't mean anything and the policies you train in sim don't work. World models will likely be the highest-fidelity robotics simulators, since WMs are data-driven and get arbitrarily more realistic given more data/compute. This is why so many robotics companies have WM projects [1] [2] [3] [4].
2. Video frontends for agents: in the same way that today's frontier labs are building realtime voice interfaces [5] which behave like a phone call, realtime video interfaces will behave like a video call. Early forms of this don't feel compelling IMO [6] [7], but once the models can instantly blend between rendering the agent itself, drawing diagrams/visualizations, rendering video, etc. I can see it surpassing pure voice mode.
3. Entertainment: zero-shot world generation (i.e. holodeck, genie 3; paste in an image/video/text prompt and get a world) will be a fun toy but I'm not convinced it has any long-term value. I'm more optimistic about proper narrative experiences where each scene/level is a small, carefully-crafted world (behaving like a normal film scene if you don't touch the controls, and an uncharted/TLoU-style narrative game if you do), such that the sequence of scenes builds up a larger story.
[1] https://wayve.ai/thinking/gaia-3/
[2] https://xcancel.com/Tesla/status/1982255564974641628 / https://xcancel.com/ProfKuang/status/1996642397204394179
[3] https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...
[4] https://www.1x.tech/discover/world-model-self-learning
[5] https://thinkingmachines.ai/blog/interaction-models/
[6] https://runwayml.com/news/introducing-runway-characters
[7] https://blog.character.ai/character-ais-real-time-video-brea...
That world-state can be anything, but in the last year or two, the term has taken a narrower meaning: a video generation model that reacts naturally to game-like controls, as if it was simulating a videogame. But there's no additional state behind the video frames.
I hope nobody leaves that page open on a metered or capped network connection.
I'm surprised github hasn't suspended the page.
Are AI researchers so used to burning through compute and network resources that they don't stop to think about a webpage that will autoplay and loop multiple HD videos?
Making a world internally consistent by explicit placement gets harder as you increase in scale. When internal consistency is a factor impacting quality, there is a scale at which generated content eventually becomes the higher quality solution.
Secondly, when generating content with AI, the same rules around carelessness apply. There are certainly generative AI tools out there that offer few options for composing what you want, but that is not a necessary part of AI. Some of it is because people want rudimentary interfaces. Some of it is that the generators are new enough that the control mechanisms are limited: the focus is on doing something at all before doing it with a high degree of control. In some ways the problem is that things are new enough that it can be hard to describe what desirable controllability would even look like; building the generator first, to see what people would like it to be able to do, is, I think, a reasonable path prior to creating the controls people want. Part of it is also that there _are_ tools that give a high level of control over what is generated, but far fewer people get to see them. There are ways to control styles, object placement, camera motions, scene compositions, etc. The more specialised you get, the smaller the subset of people who need that specific control.
I think AI can make things possible for people who could not have done so without them, but it's still going to take care to make something special.
Yes, exactly. Inundate the world with superficially plausible yet hollow content, including any desired themes. People who aren't very discerning won't complain; the others will be outmatched, finding that 99 out of 100 pieces are noise, and they will need to spend increasing amounts of time trying to find the one, if they can.
I think there are some good parallels with Amazon: the broken sorting and manipulated unit pricing, coupled with the avalanche of cheap clones pushes users to give up and just buy one of the top listed products (a featured listing/Amazon-clone). If you do a web search for various products and go to images, Amazon product links often take up 50-90% of the results.
Put another way, the average game quality will go down, but the actual rate of "Great" games will go up.
And those same people forget that it's been 3 years from that awful Will Smith spaghetti video to what we have today, which is the beginning of controllable real-time videos, a.k.a. games.
It appears that there are 62 videos on the page. They're generally 16fps and 60s long. All are h.264, 1280x704. The median bitrate is 4.962 Mbps.
I don't know enough about JS to try to understand WTF it is doing, but there's only 1.3 GB of video on that page. At a transfer speed of 400Mbps, the whole mess of them should be downloadable in around 30 seconds.
But it wasn't behaving that way at all. It instead behaved as an excellent bandwidth-waster.
(Woe to those who click this link on metered connection, I guess.)
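For what it's worth, the transfer-time claim above checks out. A quick back-of-envelope sketch in Python, using the 1.3 GB and 400 Mbps figures from the earlier comment (decimal units assumed throughout):

```python
# Back-of-envelope check: how long should ~1.3 GB of video take to
# download over a 400 Mbps link? (Figures taken from the comment above.)
total_gb = 1.3      # total video payload on the page, decimal gigabytes
link_mbps = 400     # download speed, megabits per second

total_megabits = total_gb * 1000 * 8   # GB -> megabits (decimal units)
seconds = total_megabits / link_mbps

print(f"{seconds:.0f} s")  # -> 26 s, i.e. roughly half a minute
```

So the page could, in principle, finish loading in under 30 seconds; looping autoplay is what turns it into a bandwidth-waster.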
Empathizing with problems you don't face is a hard product/UX and management skill. Facebook famously simulated 2G on Tuesdays ten years ago[1], for example, to get their employees to see the problems their users have.[2]
People don't put effort into noticing (solving comes next) problems they don't face. It's why things like a11y and i18n need regulation like the ADA.
[1] https://engineering.fb.com/2015/10/27/networking-traffic/bui...
[2] While it would be hard to attribute directly, GraphQL, and to an extent React, were probably influenced by these kinds of things.
I’m sure they’ll give their Claude instance a stern talking-to.
Imagine playing Red Dead Redemption 2: you attempt to ride your horse from Saint Denis to Valentine, and Valentine no longer exists, or is a completely different town located half a mile off from where it was originally.
I just don't see how this would work...
It seems to, even.
Whereas if you hand a router with a flush-trim bit in it to someone and ask them to clean up the edge of a table, they will take one look at it and nope away from that dangerous spinning thing.
If they do have the mind to give it a shot and, despite a quality tool and bit, they bite into the table and ruin the line (or something much worse), no one will be surprised: they have no experience or recognition of what expertise is in woodworking.
But with AI, it is much more hazy what expertise is.
The methodology for quality results changes each week, and the amount of personal tooling involved makes it challenging to adopt another "expert"'s workflows.
But the dopamine descent requires strong discipline to stop there, and most don't.
Why do I need more slopware? I have an entire Steam library of excellent games that deserve to be played first.
So, what's the deal?
As with ANY work in life, the quality of the result is a direct reflection of the care and intention behind it. Simplified: it's a reflection of how much effort _you_ put into it. It always shows, even in the AI day and age. It's just that the path to a result (without effort) is now way shorter, so volume is showing up and diluting the overall impression. The latter cheapens every field it touches, so even more effort will have to be put in to show up on the radar.
They can be very powerful tools, but creating meaningful art/subjective work that is actually good is a borderline impossible task for LLMs, since it requires genuine creativity.
Similarly, I am seeing coding agents be infinitely more useful to people who already code than non-technical people thinking they can vibe code their own Salesforce.
It's just that AI doesn't work well as a "productivity booster" tool in the marketing context. So they are pushing the idea that anyone can do anything with AI, which is IMO a really stupid hill to die on, or to base your IPO on.
https://www.reddit.com/r/HiggsfieldAI/
Higgsfield has multiple models available; people usually use Kling 2.5 & 3. There are a few good examples posted right now; you'll notice the subtle differences.
I have tried to generate things myself, and it's extremely hard to get more than 7-8 clips that are consistent; eventually you'll accept a compromise. I think that's why there isn't any long-form content being done yet. Getting good results is sometimes just "chance", regardless of how much reference data you have.
That's a pretty specific and one-sided example. There are tons of good games that don't rely on elaborate item placement (e.g. many Bethesda games are great because most items are useless decorations; they broke that rule in recent games, giving purpose to clutter, and it made them a lot worse). There are tons of good games not relying on this intentionality at all; they're either literally random cool ideas thrown at the wall, or even procedurally generated.
Many of the most popular games in the past decade are procedurally generated and have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).
One aspect of intentionality is that there’ll be a narrative payoff when you find something you find interesting. In videogames, the world is mostly pre-designed, so the designer has to predict what you’ll be interested in for the most part (in pen-and-paper RPGs, this is usually done better, because the human dungeon master/DM can plan ahead, but also improvise a payoff or modify the plot between sessions). If there were a world-model-generated game world, I guess the model would have to be “smart”, though, to set up and execute those payoffs.
An advantage that the world model would have (and shares with a good human DM) is that everything is an interactable, and the players get to pick what they think is interesting. If everything is improv with a loose skeleton around it, you don’t have to predict as far out. I think world model generated games, if they even become a thing, will be quite a bit worse than conventionally designed ones for a long time (improv can be quite shallow!) but have a lot of potential if they work out.
FromSoft is an interesting example. They make the game more believable by having extremely missable quests, just, most of them don’t block progress through the game, and you usually stumble across enough side quests naturally (although IMO the density was too low in Elden Ring, their system showed a bit of weakness in the less-guided context). The plot is pretty vague, but the vibes tell enough of a story that you don’t really mind. It’s sort of improv/pen-and-paper but the player’s imagination is doing the job of the DM.
The other is creating multi-modal models with a better understanding of our world. LLMs often fail at incredibly basic spatial reasoning ("someone left a package in front of your apartment, describe going there", or the "should I drive to the car wash or go there", etc). World models excel at these kinds of things (in theory). They develop a great understanding of physical spaces, object interactions, etc. They can simulate fluids, rigid-body physics, etc. You "just" have to get really good at making world models, then somehow marry them with an LLM in a way that ensures the LLM can benefit from the world model's training data. Nobody has managed to really do that yet.
So lots of hopes for the future. Until then they get commercialized as video models, or ways to experience your favorite forest, or to have a really bad video game ... whatever can be sold on a short time horizon to finance the actual goals
You could also use these models to generate assets for a game during development whether that's simple cutscenes or assets produced through gaussian splatting or some other process.
If these models and others can be run cost-effectively on a cloud service, or even locally at some point, then you could do some interesting things by combining them with 3D mesh generation, img2img, vid2vid, etc. Just think about even simple games like Papers, Please and the whole genre it spawned that uses short episodes where you have to make a guess based on what you see; there's a lot of potential for creating new mechanics around generative imagery.
Remember video generation? 3 years ago the will smith spaghetti video came out.
You see how this trend will only continue? Game development is going to get really weird.
I take raw material and make something out of it with a circular saw, largely unrestrained by anything other than cost, skill, and material.
With a microwave, I make things hot so I can eat them.
Aside: I wonder why that is. Why do we regard the microwave as "degenerate" compared to the oven? Why is baking seen as a calling while microwaving is, well, not? Is it that the ease of the microwave makes the effort less impressive? Maybe it's that you can't achieve certain effects like browning? Is it because of its 1970s association with "radiation" and TV dinners? Is it just cultural inertia?
I think you underestimate the intentionality that goes into developing procedural generation. Something like Dwarf Fortress isn't "place objects randomly": it is layers upon layers of carefully crafted systems that build upon each other to produce specific patterns of outcome.
Are video game developers using these systems in their workflows? Would love to learn more!
FromSoft is perfectly happy for you to miss all of the direct exposition. That's as they intended, and most people do. The intentionality of their world still draws people in and gives the world a sense of groundedness that keeps people coming back and separates it from the pale imitations. It's more than them being good at 'vibes'.
The environment is built on the bones of a greater ongoing narrative that is intentionally obscured, even from the player who reads everything.
Dark Souls is a world in a constant cycle of Rebirth, Decay, the struggle against Entropy. All civilizations, at the end of the series, are stacked one upon another in an endless expanse of ash and dust as you bear witness to a permanent eclipse, a fading star, as time itself dissolves and the last fire fades.
Before that heavy-handed stuff, though, the simple matter of the direction your character travels reinforces these motifs. Go down to the deepest depths and you'll witness what remains of the first civilizations. Climb up and you see the desperate attempts by the powerful to impose a false order that they hoped could forestall the inevitable.
You can even shatter the illusion of a golden order in the first game if you find the extremely missable secret boss. It couldn't be any more clearly 'said' if you were interested in paying attention.
Adding an AI model to explain or 'improv' the story of the world would destroy the whole purpose.
There are a lot of areas where predictive models make sense in the robotics stack, but doing it with "video world models" as is trendy this year is likely a bet in the wrong direction according to the evidence we have been amassing in the last 6 months.
In reality, I don’t see any of this trending towards the theoretical happy path everybody always talks about. Most people give up trying to find something good on Amazon and just buy whatever vaguely plausible knock-off garbage shows up in the first few search results. Most people just take any job interview they’re offered even if it sucks. Most HR people don’t use it to enhance the quality of their decisions — it replaces their decision-making roles in many respects.
I’m an art school graduate and am in many art discussion communities. This is causing a massive industry-wide morale crater. In any sort of art, it damn near eliminates the reward of craftsmanship in favor of marketing useless trend-of-the-week bullshit. Far fewer people enter a market that can’t sustain them. The idea that this is going to create ‘more artists’, and therefore that there must be more skilled artists, is fantasy. The skills you learn by prompting are not even on the same track as learning how to create things yourself. You essentially become a high-school intern acting as an art director, commissioning pieces. It’s instant gratification for people who don’t care enough about something to learn how to do it for real.
Proper microwaving is what gave rise to the entire concept of "fast casual" restaurants, famously Applebee's (or "club B's" in the late-night focus iterations!)
Complex entrees that could be partially cooked and frozen, then rapidly microwaved on a custom program that varies the timing and intensity of cooking, then finished on a grill or conventional heat source for less than a minute.
Microwaving food generally produces a lower-quality finished product, but you can take a similar approach at home. The shortcut is to just double the cooking time and cook at 50% power, then throw the item in a preheated pan for about a minute, if applicable. Other variations are possible too; I air-fry finish most things like chicken nuggets, tater tots, etc., and the difference is considerable while still offering a significantly reduced cooking time.
I guess what I'm saying is: Couldn't a world model with targeted training and thoughtfully tuned system prompts be directionally similar to the layered systems to produce specific patterns of outcome?
I don't think these will create "artist" in any sense, but I do think it will lower the barrier dramatically for people creating games. Most people will interact with it like Lieutenant Barclay interacting with the holodeck, doing little more than wish fulfillment. But I think a few people will be able to interact with it in ways that create art.
In no way am I implying that the net-net of AI will be good for humanity as a whole (I think that is too big a question), but I do think the power of World Models will probably result in far more people being able to say "I have created a game".
I honestly don't have anything useful to say about what LLMs are doing to many human fields. I can understand how frustrating it must feel to see LLMs demonstrate superhuman "skill" (I don't really think they are skilled) at orders of magnitude less cost than a good artist. It isn't just that they don't seem to innovate (only permute), it is that they will literally take even the tiniest bit of creativity and novelty and immediately fine tune and create derivative works on any idea at scale. I can see how that might really demotivate any desire to push the boundaries of art for any human being.
The combination of "many", "most popular", and "nothing" is overstating it by a wide margin, but, for example, the majority of the vegetation in games as far back as Oblivion was procedurally placed.
I have used none of those recipes. The microwave is for making cold pizza 10% more palatable (or 80% more palatable if I've been drinking). In that regard, the "LLMs are microwaves" analogy really works for me: if I'm using one, either I want something fast and casual, or I'm drunk.
I think it is interesting (though I only partially agree) that microwave meals require standardization to scale. Supposing that's true, why couldn't a modern microwave have a small camera and a set of heuristics for how to cook just about anything, turning the magnetron on and off at particular points when it recognizes a food? Maybe without intelligence a microwave does need standardization; but we can put intelligence (ideally offline) in just about anything these days.
I wonder, with sufficient control, if a microwave could ever brown. I wonder if it could reliably bake.
https://www.youtube.com/watch?v=2tg3-93jKvc - Chicken Good!