https://chatgpt.com/s/m_69e7ffafbb048191b96f2c93758e3e40
But it screwed up when attempting to label middle C:
https://chatgpt.com/s/m_69e8008ef62c8191993932efc8979e1e
Edit: it did fix it when asked.
Create an 8x8 contiguous grid of the Pokémon whose National Pokédex numbers correspond to the first 64 prime numbers. Include a black border between the subimages.
You MUST obey ALL the FOLLOWING rules for these subimages:
- Add a label anchored to the top left corner of the subimage with the Pokémon's National Pokédex number.
- NEVER include a `#` in the label
- The label text is left-justified, white, and set in the Menlo typeface
- The label fill color is black
- If the Pokémon's National Pokédex number is 1 digit, display the Pokémon in an 8-bit style
- If the Pokémon's National Pokédex number is 2 digits, display the Pokémon in a charcoal drawing style
- If the Pokémon's National Pokédex number is 3 digits, display the Pokémon in a Ukiyo-e style
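For reference, the prompt's ground truth is easy to script; a minimal Python sketch (simple trial division, plus the digit-count style rule from the prompt above):

```python
def first_primes(n):
    """Return the first n primes by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# Style rule from the prompt: 1 digit -> 8-bit, 2 digits -> charcoal, 3 digits -> Ukiyo-e
STYLES = {1: "8-bit", 2: "charcoal drawing", 3: "Ukiyo-e"}
grid = [(p, STYLES[len(str(p))]) for p in first_primes(64)]
print(grid[0], grid[-1])  # (2, '8-bit') (311, 'Ukiyo-e')
```

The 64th prime is 311, so a correct grid spans all three styles: four 8-bit cells, twenty-one charcoal cells, and thirty-nine Ukiyo-e cells.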
The NBP result is here; it got the numbers, corresponding Pokémon, and styles correct, with the main point of contention being that the style application is lazy and that the images may be plagiarized: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

Running that same prompt through gpt-image-2 high gave an... interesting contrast: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...
It did more inventive styles for the images that appear to be original, but:
- The style logic is applied by row rather than by the raw numbers, and is therefore wrong
- Several of the Pokemon are flat-out wrong
- Number font is wrong
- Bottom isn't square for some reason
Odd results.
That being said, gpt-image-1.5 was a big leap in visual quality for OpenAI and eliminated most of the classic issues of its predecessor, including things like the “piss filter.”
I’ll update this comment once I’ve finished running gpt-image-2 through both the generative and editing comparison charts on GenAI Showdown.
Since the advent of NB, I’ve had to ratchet up the difficulty of the prompts, especially in the text-to-image section. The best models now score around 70%, successfully completing 11 out of 15 prompts.
For reference, here’s a comparison of ByteDance, Google, and OpenAI on editing performance:
https://genai-showdown.specr.net/image-editing?models=nbp3,s...
And here’s the same comparison for generative performance:
https://genai-showdown.specr.net/?models=s4,nbp3,g15
UPDATES:
gpt-image-2 has already managed to overcome one of the so‑called “model killers” on the test suite: the nine-pointed star.
Results are in for the generative (text-to-image) capabilities: gpt-image-2 scored 12 out of 15 on the text-to-image benchmark, edging out the previous best models by a single point. It still fails on the following prompts:
- A photo of a brightly colored coral snake but with the bands of color red, blue, green, purple, and yellow repeated in that exact order.
- A twenty-sided die (D20) with the first twenty prime numbers (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71) on the faces.
- A flat earth-like planet which resembles a flat disc is overpopulated with people. The people are densely packed together such that they are spilling over the edges of the planet. Cheap "coastal" real estate property available.
All Models:
https://genai-showdown.specr.net
Just gpt-image-1.5, gpt-image-2, Nano Banana 2, and Seedream 4.0
direct pdf https://deploymentsafety.openai.com/chatgpt-images-2-0/chatg...
No you can’t.
You still have the Studio Ghibli look from the video. The issue with generating manga was the quality of the characters; there are multiple software tools for placing your frames.
But I am hopeful. If I put in a single frame, can it carry that style over to the next images? It would be game-changing if a chat could have its own art style.
OPENAI_API_KEY="$(llm keys get openai)" \
uv run https://tools.simonwillison.net/python/openai_image.py \
-m gpt-image-2 \
"Do a where's Waldo style image but it's where is the raccoon holding a ham radio"
Code here: https://github.com/simonw/tools/blob/main/python/openai_imag...

Here's what I got from that prompt. I do not think it included a raccoon holding a ham radio (though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure): https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a...
GPT Image 2
Low    : 1024×1024 $0.006 | 1024×1536 $0.005 | 1536×1024 $0.005
Medium : 1024×1024 $0.053 | 1024×1536 $0.041 | 1536×1024 $0.041
High   : 1024×1024 $0.211 | 1024×1536 $0.165 | 1536×1024 $0.165

GPT Image 1
Low    : 1024×1024 $0.011 | 1024×1536 $0.016 | 1536×1024 $0.016
Medium : 1024×1024 $0.042 | 1024×1536 $0.063 | 1536×1024 $0.063
High   : 1024×1024 $0.167 | 1024×1536 $0.250 | 1536×1024 $0.250

"A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid-splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens."
Last time I ran the test with Nano Banana 2 (first run): https://s.h4x.club/eDuOzPDd
Images 2 using Simons method he mentioned (first run): https://s.h4x.club/qGuWZveR
Ran a bunch both on the .com and via the api, none of them are nearly as good as Nano Banana.
Bad actors can strip sources out so it's a normal image (that's why it's a positive affirmation), but eventually we should start flagging images with no source attribution as dangerous, the way we flag non-HTTPS sites.
Learn more at https://c2pa.org
I have a side project where I want to display stand-up comedy shows. I thought I could edit stand-up comedy posters with some AI to fit my design. Gemini straight up refuses to change any image of any stand-up comedy poster involving a well-known human. OpenAI does not care and is happy to edit away.
I think we all know the feeling of getting an image that is ok, but needs a few modifications, and being absolutely unable to get the changes made.
It either keeps coming up with the same image, or gives you a completely new take on the image with fresh problems.
Anyone know if modification of existing images is any better?
Anything better than OpenAI?
I also don't like that these things are trained on specific artist's styles without really crediting those artists (or even getting their consent). I think there's a big difference between an individual artist learning from a style or paying it homage, vs a machine just consuming it so it can create endless art in that style.
Was this an oversight? Or did their new image generation model generate an image that was essentially a copy of an existing image?
One that I can think of:
- replacing photography of people who may be unable to consent or for whom it may be traumatic to revisit photographs and suitable models may not be available, e.g. dementia patients, babies, examples of medical conditions.
Most other vaguely positive use cases boil down to "look what image generators can do", with very little "here's how image generators are necessary for society."
On the flip side, there are hundreds of ways that these tools cause genuine harm, not just to individuals but to entire systems.
While the image looks nice, the actual details are always wrong, such as pawns in the wrong locations, missing pawns, etc.
Try it yourself with this prompt: Create a poster to show opening game for Queen's Gambit to teach kids to play chess.
I know this is probably mega cherry-picked to look more impressive, but some of the images are terrifyingly realistic. They seem to have put a lot of effort into the lighting.
At some point the level of detail is utter garbo and always will be. An artist who was thoughtful could have some mistakes but someone who put that much time into a drawing wouldn't have:
- Nightmarish screaming faces on most people
- A sign that seemingly points in both directions, or in the wrong one, toward a lake and a first-aid tent that doesn't exist
- A dog in bottom left and near lake which looks like some sort of fuzzy monstrosity...
It looks SO impressive before you try to take in any detail. The hand-selected preview images have the same shit. The view of musculature has a sternocleidomastoid with no clavicle attachment. The periodic table seems good until you take a look at the metals...
We're reconfiguring all of our RAM & GPUs and wasting so much water and electricity for crappier where's Waldos??
Without question.
AI will be indistinguishable from having a team. Communicating clearly has always mattered and always will.
This, however, is even stronger. Because you can program and use logic in your communications.
We're going to collectively develop absolutely wild command over instruction as a society. That's the skill to have.
Maybe I'm just bloviating, too.
There is definitely enough empirical validation that shows image models retain lots of original copies in their weights, despite how much AI boosters think otherwise. That said, it is often images that end up in the training set many times, and I would think it strange for this image to do that.
Regardless, great find.
magick image-l.webp image-r.jpg -compose difference -composite -auto-level -threshold 30% diff.png
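For anyone without ImageMagick, a rough Pillow-only sketch of the same check (the raw per-channel cutoff of 30 is an arbitrary stand-in for the `-auto-level`/`-threshold 30%` step, and the demo images below are synthetic):

```python
from PIL import Image, ImageChops

def diff_fraction(a, b, threshold=30):
    """Fraction of pixels whose largest per-channel difference exceeds threshold."""
    a = a.convert("RGB")
    b = b.convert("RGB").resize(a.size)  # tolerate a size mismatch, like the webp/jpg pair
    diff = ImageChops.difference(a, b)
    hot = sum(1 for px in diff.getdata() if max(px) > threshold)
    return hot / (a.width * a.height)

# Demo on synthetic images: the right half of b is repainted red
a = Image.new("RGB", (64, 64), "black")
b = a.copy()
b.paste((255, 0, 0), (32, 0, 64, 64))
print(diff_fraction(a, b))  # 0.5
```

A result near zero on the real pair would mean the same thing the mostly-dark `diff.png` does: the two images are effectively identical apart from compression.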
It's practically all dark except for a few spots. It's the same image, just at a different size and compression. I can't find it in any stock image search, though. Surely it could not have memorized the whole image at that fidelity. Maybe I just didn't search well enough.

The question still stands, "are the benefits worth the cost to society", but it bears remembering we do a lot of things for fun which aren't "necessary for society".
Maybe image generators can be a loophole for consent legally, but it seems even grosser morally.
Diagrams and maps. So much text-based communication begs for a diagram or a map.
From the system card someone linked elsewhere in the discussion
Seeing is not believing anymore, and I don't think SynthID or anything like it can restore that trust in images.
OPENAI_API_KEY="$(llm keys get openai)" \
uv run 'https://raw.githubusercontent.com/simonw/tools/refs/heads/main/python/openai_image.py' \
-m gpt-image-2 \
"Do a where's Waldo style image but it's where is the raccoon holding a ham radio" \
--quality high --size 3840x2160
https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a... - I found the raccoon!

I think that image cost 40 cents.
(I don't think it's right).
I see an opportunity for a new AI test!
1. Generate 100s or 1000s of low-fidelity candidates, find something that matches your vision, iterate.
2. Hand that generated image off to a human and say, "This is what I'm thinking of, now how do we make it real?"
Important: do not skip the last step.
For example, take a picture of your garden. Ask chatgpt to give you ideas how to improve it and a step by visual guide.
Anything that can be expressed visually is effectively target for this technology - this covers pretty much everything.
It's a difficult test for genai to pass. As I mentioned in a different thread, it requires a holistic understanding (in that there can only be one Waldo Highlander style), while also holding up to scrutiny when you examine any individual, ordinary figure.
Just for testing, I just tried this https://i.ytimg.com/vi/_KJdP4FLGTo/sddefault.jpg ("Redesign this image in a brutalist graphic design style"). Gemini refuses (api as well as UI), OpenAI does it
So being able to express oneself clearly in a structured way may not be such an edge.
I guess commissioning high quality diagrams from a designer is expensive and I guess it's much cheaper now to essentially commission something but idk, "democratization" still feels weird for just undercutting people on price.
I am at the point where I would prefer a poorly human drawn diagram with terrible handwriting over AI slop.
I will say, it can be emotionally resonant though - but it's a borrowed property from the perception of human communication and effort that made the art the models were trained on.
"Found the raccoon holding a ham radio in waldo2.png (3840×2160).
- Raccoon center: roughly (460, 1680)
- Ham radio (walkie-talkie) center: roughly (505, 1650) — antenna tip around (510, 1585)
- Bounding box (raccoon + radio): approx x: 370–540, y: 1550–1780
It's in the lower-left area of the image, just right of the red-and-white striped souvenir umbrella, wearing a green vest. "
Which is correct!

> please add a giant red arrow to a red circle around the raccoon holding a ham radio or add a cross through the entire image if one does not exist
and got this. I'm not sure I know what a ham radio looks like though.
https://i.ritzastatic.com/static/ffef1a8e639bc85b71b692c3ba1...
Credits: https://github.com/magiccreator-ai/awesome-gpt-image-2-promp...
I guess it's just a completely personal feeling.
Looks like ChatGPT Images 2 is now good at this too!
A 3 * 3 cube made out of small cubes, with a small 2 * 2 cube removed from it - https://chatgpt.com/share/69e85df6-5840-83e8-b0e9-3701e92332...
Create a dot grid containing a rectangle covering 4 dots horizontally and 3 dots vertically - https://chatgpt.com/share/69e85e4b-252c-83e8-b25f-416984cf30...
One where Nano Banana fails but gpt-image-2 worked: create a grid from 1 to 100 and in that grid put a snake, with its head at 75 and tail at 31 - https://chatgpt.com/share/69e85e8b-2a1c-83e8-a857-d4226ba976...
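Checking those endpoints is simple once you fix a numbering convention; assuming a 10×10 grid numbered row-major with 1 at the top-left (the model is of course free to number differently, which is part of what makes the prompt tricky):

```python
def cell_pos(n, cols=10):
    """0-indexed (row, col) of cell n in a 1-based, row-major grid."""
    return (n - 1) // cols, (n - 1) % cols

print(cell_pos(75))  # head: (7, 4)
print(cell_pos(31))  # tail: (3, 0)
```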
Generating a 3840x2160 image with gpt-image-2 consumes 13,342 tokens, which works out to about $0.40 per image.
This model is more than twice as expensive as Gemini.
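As a sanity check, those two figures imply a per-token rate (pure arithmetic on the numbers in the comment above, not a published price):

```python
tokens_per_image = 13_342
dollars_per_image = 0.40
implied_rate = dollars_per_image / tokens_per_image * 1_000_000
print(f"~${implied_rate:.2f} per million image output tokens")  # ~$29.98
```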
Got pretty wild w/the Iranian propaganda that reportedly _resonated with Americans_ (didn't verify that claim)
Slopaganda - https://www.newyorker.com/culture/infinite-scroll/the-team-b...
Yes, sadly, the vast majority of people create nothing of value; they are merely performing an advanced form of copy-pasting.
That certainly includes me. Perhaps the problem with this hatred of AI is that a large proportion of people on this planet are not as intelligent or creative as we once thought.
Their work will be almost entirely automated.
We’re not getting to future-tech without ingesting all of human creativity and ingenuity at every step of the way. Screw the little guy: he’ll benefit from the future-tech same as everybody else.
It is a little ambiguous (what exactly is a "3x3 cube") but I tried a bunch of variations and I simply could not get any Gemini models to produce the right output.
This thing is like 5x better than Flash at fine-grained detail.
The people you say are getting "shafted" always got shafted. Their works are the inspiration for all artists and people who lay their eyes on them - maybe they got paid when they made the work, maybe they managed to sell it, but probably not. And still, other artists (and machines) will use, remember, and be inspired by it, sometimes to the point of verbatim copy (which is extremely common for human artists as well, with verbatim copying and replication being an actual sought-after skill).
(Those about to shout "LICENSING", that's a very new invention and we're terrible at it. What are you going to do, cut out the part of your brain that formed new connections while touching GPL code?)
The person (singular) who is actually getting "shafted" at each use is the artist you didn't hire to do the job of making your new work, because it is their skill that got replaced. A skill built from a lifetime of studying other art and practicing themselves, replaced with a skill built from a machine studying other art and, by virtue of some closed loops, likely also "practicing" itself.
Still, shafting at large, but the obsession with training data is misplaced in that it entirely ignores how society and art worked beforehand.
At the same time, for most of the things you're likely using the tool for, there would probably never have been an artist in the first place. For example, if you're just making your PowerPoint prettier, or if your commission is ridiculous, as it often is, and yet only willing to offer a single-digit dollar sum per work, which no artist should take (RIP the poor souls who take such work anyway).
What makes the dataset valuable isn't that the image 0012992 in it is precious and irreplaceable. It's that the index goes to seven digits. Pre-training is very much a matter of scale - and scraping is merely the easiest way to get data at scale.
People who complain about "artists not getting paid" must have in their imagination some kind of counterfactual where artists are being paid thousands for their contributions. That's not how it works. A counterfactual world where artists were paid for AI training is one where an average artist is 5 cents richer, an average image generation AI performs 5% worse, and the bulk of extra data spending is captured by platforms selling stock photos and companies destructively digitizing physical media.
Potentially the one difference is that developers invented this and screwed themselves, whereas artists had nothing to do with AI.
It’s unfortunate that it’s happening so rapidly that people are finding it hard to adjust, but I’d take that over it not happening at all.
Do you mean copyleft? Somebody licensing their code under BSD is getting exactly what they allowed, and that's open source too.
I don't see an alternative that isn't really bad.
https://chatgpt.com/share/69e88b5c-8628-83eb-8851-f587ef2c95...
You can create larger images by creating separate parts you recombine. But they may not perfectly match their borders.
It is a Landau thing, not a trading thing. The idea of an LLM is to work on the unknown.
For example, SDXL was trained on 1MP images, which is why if you try to generate images much larger than 1024×1024 without using techniques like high-res fixes or image-to-image on specific regions, you quickly end up with Cthulhu nightmare fuel.
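The usual workaround is to generate near the training resolution and upscale afterwards. A hypothetical helper (not SDXL's actual bucketing code) that picks a ~1MP generation size for a target aspect ratio, snapped to the multiples of 64 that latent diffusion models expect:

```python
import math

def near_1mp_size(target_w, target_h, budget=1024 * 1024, multiple=64):
    """Pick a generation size near `budget` pixels with the target aspect ratio."""
    aspect = target_w / target_h
    h = math.sqrt(budget / aspect)
    w = h * aspect
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(w), snap(h)

print(near_1mp_size(3840, 2160))  # (1344, 768): generate 16:9 at ~1MP, then upscale
```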
it is only going to get cheaper
It seems like they're trying to follow local law. What a nightmare to have to manage all jurisdictions around such a product. Surprised it didn't kill image generation entirely.
I don't care about getting a millionth of a cent as an artist (which btw is a number *you* just pulled out of your imagination). I care about them paying a fair share instead of pocketing it, so the money stays in circulation instead of creating a new class of technofeudal lords.
Therein lies the problem. AI firms just bulldozed ahead and "just did it" with no consideration for the ethics or legality. (Nor for that matter, how they're going to get this data in the future now that they're pushing artists into unemployment and filling the internet with slop.)
There is no "imagined counterfactual", people just want AI firms to follow basic ethics and apply consent. Something tech in general is woefully inadequate at.
The counterfactual isn't offered by artists, but AI companies. "If we had to ask consent then we couldn't have made this". Okay, so? The world isn't worse off without OpenAI's image generator. Who cares, there's no economic value to these slop images, they're merely replacing stock assets & quickly thrown together MS paint placeholders.
Given how much of a shitshow this technology has always been (I refuse to mince words: this tech had its "big break" with "deepfakes", and Elon Musk has escalated that even further; it's always been sexual harassment), the actual net value to society is almost certainly negative.
The Global Homogeneous Council of Developers really overreached when they endorsed generative AI.
Your comparison is incorrect.
Customers can usually figure out when a product is shitty software, but shitty art, well, that's a bit harder for people to judge.
Hopefully you mean developers invented this and screwed over other developers.
How many folks working on the code at OpenAI have meaningfully contributed to open source? I agree that because it is the same "job title", people might feel less sympathy, but it's not the same people.
As far as I can see as of now, there is no "realistic" way out. It's a problem of human nature... People are corrupt, people with authority are more corrupt, and people with money and authority even more so. Come intelligent and cheaply mass-producible robots, and we'll have a new, fourth-level spin-up too that will be worse than the first three combined.
Creators/Writers of Dune paid money to watch Apocalypse Now.
The point isn't about money. It's that copies were made, without license and without permission, and without any legal right to do so, of art, and then used to train a system which generates similar art. The first step, the copy, is illegal without a license, and even for most public images online, licenses and copyright notices (which must be preserved) are attached.
It will be true no matter how many bribes those who have never created anything pay to Marsha Blackburn (who miraculously reversed her AI skepticism).
I wonder how many threats of being primaried have been issued by the uncreative technocrat thieves.
No, a counterfactual world where artists were paid for AI training wouldn't see commercially viable AI at all. A world which plenty of people would be more than happy to live in, mind you.
AI relies on mass piracy worth googols of dollars if you count the way the million-dollar iPod was counted, but because AI surprised the copyright industry, it's now too late to enforce copyright like that.
- Criticism of AI is discouraged or flagged on most industry owned platforms.
- The loudest pro-AI software engineers work for companies that financially benefit from AI.
- Many are silent because they fear reprisals.
- Many software engineers lack agency and prefer to sit back and understand what is happening instead of shaping what is happening.
- Many software engineers are politically naive and easily exploited.
Artists have a broader view and are often not employed by the perpetrators of the theft.
Art can't be generated. We can only generate artefacts mimicking art styles. So far we have no AI generated images that are considered actual Art, because Art's purpose is to express the artist's intent. And when there is no artist, there is no intent.
I have to stop now, but I guess you can see where I'm going with this.
> 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
It's a license, not a free giveaway. You have to follow the terms of the license. Same for MIT, by the way; you have to retain the copyright notice.
I'm sure a country like the US, which is filled with lawyers, can come up with a couple laws, and find some goons to enforce it, that cannot possibly be that hard when other countries can figure it out too.
I find the technical discussion more interesting and could do without some of the moral grandstanding in the comments.
"Fair use" counters "without license and without permission" hard. The argument that training AI on scraped data is "fair use" and the resulting model outputs are "transformative works" has held up in courts. Anthropic got dinged for downloading pirated books, but not for throwing the ones they didn't pirate down the training pipeline.
Some countries, like Japan, have amended their copyright laws to make AI training categorically legal. Others are in "fair use clauses" grey areas with courts deciding case by case based on precedent and interpretation. So trying to latch onto copyright law is, as it always was, the wrong move. Copyright never favored the small guy. Stupid to expect that it suddenly will.
Their teachers teach them from a very early age how to hold a crayon and how to draw.
Maybe some miraculous humans would reinvent drawing all on their own, growing up by themselves in the jungle; most people will not.
Source: I have kids.
It would still be "commercially viable", mind. I'm not sure how much it would stall AI development in practice, but all the inputs to making AIs only get cheaper over time. So I struggle to imagine not having something like DALL-E 1 by 2030.
If we extend the counterfactual and allow for licensed media, we compress the timelines and raise the bar. The "best" image generation AIs of 2026 are now made by the likes of Adobe and locked behind some kind of $500-a-month-per-seat Creative Cloud Pro Future subscription. Because Adobe is rich enough to afford big bulk licensing deals, while the likes of academia and smaller startups have to subsist on old public domain data, permissively licensed scraps, and small, carefully selected batches of licensed data, under licensing deals that might block them from sharing the resulting weights.
In the "counterfactual: licensed media" world, the local AI generation powerhouse of Stable Diffusion ecosystem probably doesn't exist at all. Big companies selling AI do. Their offerings cost a lot more and perform considerably worse than the actual AIs we have today. So you can't just go to a random website and get an image edited for a shitpost for free. But the high end commercial suites exist, they're used by the media and the marketing companies, and they are still way cheaper than hiring artists. The big copyright companies get their pound of flesh, but don't confuse that for the artists getting a win.
What causes comments to disappear? Is that what flagging does?
This has not been generally true IME. It follows the same pattern as code quite often.
When you pay an artist for their work, many times you also acquire the copyright to it. For example, if you hire someone to design a company logo, or art for your website, etc., the paying company owns it, not the artist.
In-house/employee artists are much more common than indies, and they also don't own their own output unless there's a very special deal in place.
Code can be art the same way writing can be. There's a big difference between artistic code and business code, the same way there's a big difference between poetry and a comment chain on hacker news.
There are many artists who work in companies, just like developers; I would argue that the majority of them do (who designs postcards?).
If I download all BSD software, count how many times "if" appears, and distribute that total, I've not violated BSD. AI generated code is different than that but not totally different.
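To make the analogy concrete, the aggregate-statistics extreme looks roughly like this (hypothetical example sources):

```python
import re

def count_keyword(sources, keyword="if"):
    """Count standalone occurrences of `keyword` across source strings."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b")
    return sum(len(pattern.findall(src)) for src in sources)

sources = ["if x:\n    do_thing()", "while True:\n    if done:\n        break"]
print(count_keyword(sources))  # 2 - a statistic, clearly not a copy of the code
```

Distributing that total plainly isn't redistribution of the code; the open question is how far along the spectrum from this toward verbatim reproduction a trained model sits.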
Ignore nuance and the adults will ignore you.
Just look at living conditions, infant mortality, life expectancy or education.
You could be anywhere on the planet relative to me and I can talk to you for free, instantaneously at any time. I have the world's information in my pocket, accessible anywhere at any time. I could go on!
The AI industry is built on mass piracy and copyright violations, regulation isn't going to make it go away or even comply any time soon.
We have laws banning technology that can be used to produce generative images of someone that look like them with their clothes off. The result wasn't fixing generative AI (we don't know how to actually control that kind of thing because it's almost impossible to manually tweak a machine learning model), but to add a bunch of input and output filters that'll pass the test for most regulators checking compliance.
Another possibility is that, once AI exceeds human performance in all economically useful activities, including high-level planning, governance, law enforcement, and military actions, it discovers that the benefits of keeping humans around aren't worth the costs and risks.
Art is not just about beauty, it is about expressing the mind (feelings, experience etc) of the author. AI will never do that (except if it learns to express its own experiences, which would be art, but not something competing with human art; it would be like if we had contact with alien art).
If companies control the government, then that's not a government, that's a group of companies.
From a common FOSS contributor license...
> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions...
https://opensource.org/license/mit
... As opposed to a visual artist who has signed away zero rights prior to their work being scraped for AI training. FOSS contributors can quibble about conditions, but they have agreed to bulk sharing, whereas visual artists have not.
All of this would not be possible if laws were adhered to. This is very much a "the end justifies the means" situation. The same could be argued about e.g. the Netherlands and genocide/slavery.
The Netherlands is great; if you've ever been, it's pretty and nice and fun, and it culturally enriches western Europe. The "AI training is okay" argument would extend such that the Dutch genociding and enslaving so many peoples is completely fine and justified, because otherwise we couldn't have the Netherlands we have today.
- "Artists have always been exploited" (patently false since at least 1950, it was a symbiosis with the industry).
- "Humans have always done $X".
- "You are a Luddite."
- "This is inevitable."
Stealing from FOSS is awful, because it completely violates the social contract under which that code was shared.
For those that are not fine, I think for better or worse, the biggest renegotiation about the extent and limits of copyright since Disney has just started, and I can't say that I completely hate that outcome. (I do find it quite telling that this is what it took, though.)