https://www.hollywoodreporter.com/business/digital/openai-sh...
They probably see how much Anthropic is absolutely crushing them in developer mind share (read: people who buy tokens) and want a piece.
But it was largely fun to try to transgress against the limitations. Who could trick the AI into generating something outlandish and ridiculous?
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
I think OpenAI had a brief delusion that it could become some huge social networking app. The app was heavily modeled after TikTok.
There's a web interface as well.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
Not a great look that either the teams responsible for Sora didn't know this was coming or the decision was so brash that things changed overnight.
In practice people would just generate the videos with the app then post them on regular social media in which case OAI would not get the ad revenue for that
It's the age-old "your product is just a subset of another product"
Coding is where the money is. https://news.ycombinator.com/item?id=46432791#46434072
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
* It was (assumedly) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
24/7 titillation is boring
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
Kind of insulting to lump google in with XAI? Like, is anyone even using XAI other than backwater government agencies?
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
I’ve given it different levels of open-endedness ("give this flow chart an aesthetic like this mechanical keyboard", "generate an SVG of this graphic from a '70s slide show"), but it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
Weil's now heading "AI for Science": https://www.pymnts.com/personnel/2025/openais-chief-product-...
Disney Exits OpenAI Deal After AI Giant Shutters Sora
https://www.hollywoodreporter.com/business/digital/openai-sh...
A source familiar with the matter tells The Hollywood Reporter that Disney is also exiting the deal it signed with OpenAI last year, in which it pledged to invest $1 billion in the company and agreed to license some of its characters for use in Sora.
“As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson said. “We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video gen product to replace Sora?

I used to think they were pretty clever but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora I don't think they have good options left.
Offerings like Kling and ByteDance's Seedance are considered much better.
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial liar CEO who said he was gonna spend a trillion dollars can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
Any platform which focusses on AI generated videos is doomed.
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards B2B sales, instead of B2B2C or B2C.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
The AI tools that do stick are almost all embedded in existing workflows rather than standing alone. Cursor works because it lives inside the editor you already open every day. GitHub Copilot works for the same reason. You don't decide to use them, they're just there. Sora required you to decide you wanted to make a video, which is a much higher intent bar.
The apps that survive the novelty cliff are the ones that solve a problem you have on a recurring basis with zero extra activation energy. Most creative AI tools solve problems you have occasionally, enthusiastically, and then not at all.
> This is the right question but hard to answer in practice ...
> The brownfield vs greenfield split is the real answer to ...
> The babysitting point is the one people keep glossing over ...

At first it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good font of ideas.
Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs: if you keep messing with its generations, editing what it made, then as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.
You didn't at least puff a little air through your nostrils for that one?
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Will be interesting to see.
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
Idk if it’s because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, eg I have codex session which says ~500M in / ~2M out.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
sir, have you seen tiktok?
Sometimes people want to paint, sometimes people want a painting.
To have wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
My guess is they over committed server/energy resources, since they were generating ~30 images per frame of 1 second of video for results that may be discarded and then tried again.
Now that energy costs are increasingly less predictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
Shovel selling and instruments to dismantle whats left of working class power.
That narrative will implode like Sora later this year.
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. E.g. it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay, and flag its own generated video after it was created!
[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".
For example, early TikTok had the Boss Walk.
Sora had no big content trends split into many micro trends in some established ~universe.
Sora (whatever that means) was one of the most astounding demos I’ve probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes, months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech and I personally (clearly incorrectly) think the top comments should be appreciative of the release and the impact.
Personally I think the lack of nudity destroyed the adult market. But I don’t know enough tbh.
Over time we're probably going to see some really broad and strong use cases of AI, but I think in the case of social media or generative content, we have to be a lot more thoughtful about it. And I'm glad that they're shutting down this app, as much as it's great to see innovation and technology and to see how far it's pushed.

I prefer to see it when someone like Google does it, because they're really doing it from the standpoint of it having broad applications to something like simulation or training. Not whatever OpenAI was doing, which honestly just doesn't feel very truthful. I feel like they say one thing and do something else, or they say one thing and the agenda is something else.

And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth, even if it only benefits one other person to hear it.
There’s so many video gen models out there and given the cheaper Chinese models I’m not surprised they closed this down. Besides the initial push, any marketing regarding video gen has always been the Kling or Higgsfield models. Just never a reason to do sora
Sora was a perfect example of using a lot of compute to generate the video -> we need a lot of GPUs -> a lot of RAMs -> energy and land
I am predicting in the next 6 months RAM shortage will soften, not too much, because war in the Middle East will have additional impact for some time.
Or before! Either is mandatory to actually learn the content.
Might be why the latest Iran propaganda video could be created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...
I'd rather eat poison
Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
1. https://dev.to/tyson_cung/amazon-lost-63m-orders-after-ai-co... 2. https://venturebeat.com/security/meta-rogue-ai-agent-confuse... 3. https://x.com/elonmusk/status/2031352859846148366
Where can I get this data?
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes we see AI reels all over the place, but that's not only what it was used for
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
What would you place here anyways? Chegg and Stack Overflow?
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
Sometimes I'll take deep research output and listen to it too that way.
As we've seen from Grok, building the system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
They're not, they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on sora will be reposted to Tiktok.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
I also wonder if they got the $1B from Disney? Was that even a paid for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.
Want to hear the one TRICK most people forget when doing X...?

Totally disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
I don't do design, or make videos, or ask AI for legal advice, or medical advice, because I lack the skill and understanding of these fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
https://finance.yahoo.com/news/openai-sora-app-struggling-st...
If you are autistic, I feel that it causes you to see reality more accurately than most here on this thread.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick. But wasn’t useful for me except for images for icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
The addictive toxic content will go the way of tobacco and explore new markets.
Back in 2010 around 11% of the population of Indonesia was connected to the internet. Currently it's closer to 80% - largely via mobile phones. That's approximately 200mln new users.
Nigeria and Pakistan are going through the same change, just started later.
Since 2016 India alone added more users than the mentioned countries combined.
That's a lot of first generation users. More than the entire western population.
I am not convinced. Nobody is making money, every player is losing money hand over fist.
In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.
Is it?
I have the impression GenAI deteriorates the internet both from a content and tech perspective.
Bots that waste your time because they don't work well or because they are pushing an agenda, and low quality content that floods social media from people who want to make a quick buck.
GitHub and AWS became increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.
Everything just got faster and we got more of it, but nothing of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and then adding more once you're sure it works.
You will have an agent like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its 'job'.
You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever person was doing it before.
There might be some transition phase, like verifying the data of the real person vs. the agentic AI, then moving over to only validation, until the agentic AI is on average as good as a human. Then the human will be gone.
Agentic AI will take basic support tasks first (it's actually already doing this), then more complicated things, etc.
For this we need an ecosystem, aka the agentic AI platform, the interconnect between agents and tools, and this stuff is currently getting built by someone one way or the other.
At scale we need more capacity, and these agents will also cost more money than a $20 subscription.
But if you have a, let's say, SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.
I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
Having Disney on their side was def quite a smart/interesting move.
At least from one interview, they def had resource issues last year and teams had to fight for them. It can easily be that Sora was always prioritized down and they realized it doesn't make sense to spend that much capacity while then not being able to push their main model.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
Then of course the hype collapsed and now even the usecases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and visualising complex designs while 3D designing.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good for everything that it's trying to be sold as. Just like the VR craze they're dragging it by the hairs into usecases where it has no business being. A lot of these products are begging to die.
For example an automation tool using real world language. For that it's a disaster, it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not very great at meeting summaries especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear but a realignment to the usecases where it actually adds value, yes I hope that happens soon.
https://variety.com/2025/digital/news/youtube-trending-page-...
Bummer. It used to be at:
https://www.youtube.com/feed/trending
So last year, these were the top videos:
https://web.archive.org/web/20250324155132/https://www.youtu...
There's this, but it's nowhere near as good as seeing the actual videos:
If it cost too much and others can do it cheaper, that looks bad from both fronts.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
Yeah, marketing. Which is a huge market...
Having said that I absolutely hate the audio format, I only used it when I had to drive or when I swam lanes. But these days I do neither.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
Most people serious about this stuff usually have their own pipelines.
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
Obviously caveat emptor, but there are a lot of real-world scenarios like this.
I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don’t need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
Most People do not care about the technology and frankly they don’t want to know about it. They want great experiences. That’s it.
Technologists seem to have a reallyyyy hard time getting it.
Not every place has LEGO incest porn… or whatever the kids are into these days.
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
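To make the wish concrete, here's one hypothetical shape such a DSL could take (every name and field here is invented for illustration, not any real API): structured fields that compile down to the prose prompt the model actually consumes, so at least the shot parameters are explicit and repeatable.

```python
from dataclasses import dataclass

# Hypothetical shot-description DSL: structured fields compiled into a
# prose prompt. All names/fields are invented for illustration.
@dataclass
class Shot:
    subject: str
    action: str
    camera: str = "static wide shot"
    lighting: str = "natural daylight"
    duration_s: int = 5

    def to_prompt(self) -> str:
        # Compile the structured description into the prose the model expects.
        return (f"{self.camera} of {self.subject} {self.action}, "
                f"{self.lighting}, {self.duration_s} seconds")

prompt = Shot("a red fox", "crossing a snowy road",
              camera="tracking shot").to_prompt()
```

Even a thin layer like this would let you diff two prompts field by field instead of eyeballing paragraphs of prose.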
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.
which is what I would hope would happen, but they're probably fine not thinking about the consequences of their actions looking at their 7 figure salaries
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI-generated content exclusively (like Sora’s app attempted). However, I expect AI-generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/GenX crowd will age out of relevance.
I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND you are happy about the people in control of this tech. Which I guess captures a larger part of the HN crowd than I'd hoped.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
Me: damn that’s cool …………AAAAAHHH HELP ME
Short form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
Yes, literal weapons are bad, too. But that's not the current topic.
That will likely happen in the specialized fields. We can already see tools like Figma, Mira, and others that generate functional-ish frontend components in full TypeScript with corresponding styles (which are also selectable and configurable in the interface). They're not quite as free-form, since they load their own base framework and components to ensure consistency and sanity/error-checking, but even then they generate usable, modifiable components that you can engage with precisely in your normal DSL.
For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.
The other one is TV ads/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad with maybe $50 in API costs. Cost of production is so crazy in marketing.
Obviously this is under the assumption AI is good enough to do either of those things. Which it isn't so far; the best I've gotten is B-roll shots to stick together for an ad.
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
This is a "seismic shift" in the sense of the Big One hitting California. The knock-on effects of trust erosion caused by AI are going to be huge and potentially unrecoverable.
It opens the precedent for those creators to now also hold these companies responsible. That’s not a bad thing under the current legal system in this way.
Also, seeing genuine original creations created with AI assistance is much more interesting to me
s/emperor/emptor
I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.
I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs, but on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.
It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic X-rated porn from AI-generated images it creates.
There are various jailbreaks that have worked the longest and still work; at a brief look, half of them just involve “anime borders” and “transparent anime watermarks” over videos.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to improve our foundations of trust to something that relies less on the good will of random authorities onto something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more about, at least, open-weights AI models.
But I'm not sure we would even notice nowadays. It used to be a disaster that could take people's attention for years, but currently, it may get lost in the noise.
Saves the company a ton of money
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even when they end in disaster. But you have a lot of agency over what an employee is doing, and their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
Take Uber as an example: yes they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough.
Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.
AI appears it's going to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.
I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.
Not to say it's a hallucination, but, by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people think of it in later years, e.g., now.
https://www.forbes.com/sites/martinadilicosa/2026/01/09/grok...
It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.
Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living"
There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.
Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.
But most of what we've seen during the "enshittification age" has been to burn money until you achieve a critical mass of users. However, this only really applies to social platforms where the point is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface and now you feel bad leaving if she doesn't leave at the same time, and more importantly, who wants to join Google Square if nobody else uses it?
That's not going to work for AI platforms.
What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.
So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.
Doesn't matter if you agree that would happen, the analogy is valid - you're essentially admitting that you're ignoring the negative impacts of the tech for the sake of how impressive it is.
I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.
Video gen isn't going anywhere, and there are already models out there with fewer safety measures. So there's no RIP to the evil product.
I just have the feeling that it doesn't get the job done anymore.
I hope we will see the rise of alternatives.
The great disappointment about how all of this is marketed is that what AI should be good at doing - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange, I want some weirdo's fantastical horror movie that he could never get financed, but was able to green-screen and use AI to generate everything. I don't want a goofy top 40 country song full of silly lyrics, I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
The best part is that they'll get popped because of it and have zero clue. Anyone currently building on any frontier provider with little background in software is creating all kinds of new liabilities that didn't exist before.
In a school district where I live the IT department developed a password distribution app using Gemini on Google App Script (they didn't even need this part), sent out links with B64 encoded JSON that included: student name, student email, parent email and student password. Yet, when I found it and told them all the ways that it was technically a breach in our state they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students' passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP were "not recommended". When pressed on that, they were, again, silent.
LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
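To underline why the Base64-encoded JSON described above is a breach and not protection: Base64 is serialization, not encryption, and anyone holding the link can reverse it in one call. A minimal sketch, with all payload values invented:

```python
import base64
import json

# Hypothetical payload mirroring the links described above (values invented).
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.org",
    "parent_email": "parent@example.com",
    "password": "hunter2",
}

# The "encoding" step the app performed. Note: no key, no secret involved.
token = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

# Anyone who sees the link recovers everything with one standard-library call.
recovered = json.loads(base64.urlsafe_b64decode(token))
```

The decoded dict is byte-for-byte the original PII, password included, which is why emailing such links around counts as disclosure no matter what the DPA says.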
So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that, you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.
Encoding more rules, more precise rules, and alerting a human in case it thinks it's off. Like a salary increase by 20% gets flagged automatically. A revenue drop by x% too.
It could even go so far that the maker of these systems will insure you for their use.
It just needs to be cheaper than all the humans in the loop, and once you train it, you can copy it unlimited times. That's the scaling effect of software, applied to tasks where we'd otherwise need to train a human again and again.
It could also be agent systems which do this. Like a company building and designing the HR USA Healthcare agent specialized in SAP HR. Another one for HR Brazil Healthcare agent specialized in another HR software.
Humans are really expensive, and you have to train them regularly, every single one of them.
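The rule-encoding idea above can be sketched as a simple guardrail layer that sits between the agent's proposed change and the system of record, flagging suspicious deltas for human review (function name, record fields, and thresholds are all invented for illustration):

```python
# Minimal guardrail sketch: compare old vs. proposed records and collect
# human-review flags. All names and thresholds here are hypothetical.
def flag_changes(old, new, salary_limit=0.20, revenue_limit=0.10):
    """Return a list of reasons a human should review this change."""
    flags = []
    # Salary increases beyond the limit get flagged automatically.
    if new["salary"] > old["salary"] * (1 + salary_limit):
        flags.append(f"salary increase above {int(salary_limit * 100)}%")
    # Revenue drops beyond the limit get flagged too.
    if new["revenue"] < old["revenue"] * (1 - revenue_limit):
        flags.append(f"revenue drop above {int(revenue_limit * 100)}%")
    return flags

review = flag_changes(
    {"salary": 100_000, "revenue": 1_000_000},
    {"salary": 125_000, "revenue": 1_000_000},
)
```

Here a 25% salary bump trips the 20% rule while the unchanged revenue passes, so the change would be held for a human instead of applied blindly.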
I like to call this the "Yahoo Effect"
What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.
OpenAI just proved you cannot burn money indefinitely.
I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.
It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).
In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.
And for what it's worth, tomorrow they don't miss whatever “indistinguishable from magic” thing, so no harm done.
// grew up near such areas
So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, e.g. learning from or teaching their models with YouTube and other sources.
So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation.
And despite how much money they burn, for a company that size, trying out video generation wasn't that high of a goal post.
I'm really surprised by their move and can only imagine that the progress of other models from Google and Anthropic took the wind out of them, and they no longer want to spend the compute (not money) on this rather than on their main models.
Coincidentally, bringing your own address that can be migrated away is somewhere between impossible and expensive.
I have said about 3 times I am solely judging tech by how impressive it is technically.
I have no idea who you are arguing with.
Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
Yes, revenge porn is very effective at causing harm, even when it's AI-generated.
No, because 'plausibly deniable' has never worked for social consequences and shame.
https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...
revenge porn or deepfakes in general are hugely harmful to people.
in the german-speaking world there's a scandal right now about a husband creating deepfakes of his wife, https://www.hollywoodreporter.com/movies/movie-news/christia...
> One fake video, which she claims was sent to 21 men, depicted her being gang-raped
i think you're taking this topic lightly because you just assume that it's not a big deal. try to keep in mind that people's mental health and with this their life is at stake.
as with lots of things, the problem is not the tech itself, but the existence of men. it's not all men, but it's usually men. not sure how we'll solve this issue.
As a onetime semi-pro musician, with decades of live performance and sound design experience:
I would rather burn my beloved instruments publicly and pee on the fire.
Just because one thing is a lesser/different kind doesn't mean we can't also be vigilant about it as well.
OpenRouter makes it easy to use them, just add credits to your account.
I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.
That alone is huge, if they let go of their egos about putting the entire white collar class out of work..
Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.
The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;—
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.
https://www.poetryfoundation.org/poems/45564/the-world-is-to...
I was talking to other people re: the difference between code & other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.
Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.
For short format low stakes stuff like online ads, then the AI slop actually probably works however.
Same for say making a power point. LLMs can quickly spit out a passable deck I am sure. For a lot of BS job use cases, that's actually probably fine. But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example I doubt all the AI startups are using AI generated sales pitches when they go to VC for funding.
Nothing exists in a vacuum and the way technologies affect people living in the world is a fundamentally important aspect of the technology itself. To ignore them would be like celebrating a cool new engine design but overlooking the fact that it has a tendency to explode and kill everyone in the car. If the primary effect of a technology is human suffering, then it isn't cool!
Integrating AI with existing tools to improve productivity is harder and requires effort and investment...
> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.
I'm saying Sora isn't even in the top 100 of most evil products out of the tech industry.
I'll believe it when I see it.
Nano Banana created a lot of noise.
But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I do not see the same quality from OpenAI. OpenAI, though, is also super fast in response, a lot faster than just a few months ago.
For example: some German guy misused a word when describing an advantage of having a silencer. OpenAI just said it's nonsense; Gemini suggested that it's a typo and he wanted to write something else (Gemini was correct).
It could also be that we are in a lull between "why is AGI not here yet" and "we need to build the agentic platform stuff now, and that takes time".
Gemini Pro is definitely slower than OpenAI, and I don't know if it's because I use the Pro version of Gemini but not of OpenAI. It could also be that OpenAI still has to work on subagents; Gemini definitely uses subagents, and I was not able to find a source that OpenAI is doing this too.
Right, but I think a lot of these use cases aren't replacing any jobs, because it wasn't anyone's job. It's just a little polish on existing work (did spell correction in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.
A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.
Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.
Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.
There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.
Could you use the bullshit machines to generate sounds that were nuanced, musical, and original, with enough time and effort?
Maybe. I'm not sure original is something they can do, but it's not totally implausible.
I would strongly recommend learning to use other tools for that purpose, instead of feeding the plagiarism monstrosities.
Taking away the precision, control, and serendipity afforded by modules and cables, or a programming language, and telling me "Just describe what you want and the plagiarism machine will spit out whatever correlates with that description on average" would destroy everything I love about synthesis.
https://www.zoho.com/mail/zohomail-pricing.html
A few DNS hosting companies still bundle in a few free email mailboxes with registration costs but that is becoming more rare.
I understand your entire world model is shaped by your past and that this machine is changing the fundamentals.
As an outsider to music, I'm excited that I have access to something I previously did not through the use of Suno and other tools. I'm excited that I can come in and just try things and not hit a skill wall or quality barrier that would cause me to quit with the limited time and effort a working adult has. It's something I've wanted to do for a long time, but just never had the time for.
Attempting to learn costs thousands of hours before you can even start to feel good about it, and I don't have that time. Life is short and I'm already thinking about the end.
I used to be sympathetic to folks with your view, but now that programming and engineering are impacted by this - I'm in the crosshairs too. I'm subject to the same forces.
I've decided I love this tech even more. Claude Code is a tool, just like all of these other tools.
This rising tide of capabilities is so awesome. This is the space age stuff I dreamed about as a kid, and it's real and tangible.
So no, I won't restrict myself to your set of pre-approved tools. I'm going to have fun and learn my way.
And it is fun.
You can keep having fun the way you like to. What other people do shouldn't be ruining the fun you have, and if it is, then you should reevaluate why you do it.