It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it as though it were in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of the issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it gives a dramatic, sci-fi-like response.
The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate some way while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
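The unsurprising part of that study can be boiled down to a toy. Two parties who already share a secret key can trivially communicate so that an eavesdropper without the key learns nothing; the study (as recalled above) trained networks adversarially and they converged toward roughly this kind of shared-key scheme. A minimal sketch, with names of my own invention rather than anything from the study:

```python
# Toy shared-key scheme: XOR with a key the eavesdropper doesn't have.
# Encryption and decryption are the same operation, one-time-pad style.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

shared_key = b"\x13\x37\xca\xfe\xba\xbe\x51\x42\x99\x7a\x2c\x61\x05"
message = b"meet at noon!"

ciphertext = xor_bytes(message, shared_key)    # "Alice" encrypts
recovered = xor_bytes(ciphertext, shared_key)  # "Bob" decrypts with the key

assert recovered == message
assert ciphertext != message  # "Eve", lacking the key, sees only noise
```

Given the shared key, "inventing" private-key encryption under adversarial pressure is about the least alarming outcome available.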
I saw a video recently where Google has people walking around carrying these backpacks (a lidar/camera setup) to map places cars can't reach. I think that's pretty interesting; it could also yield data for humanoid robots: walking through crowds, navigating alleys.
I wonder if jobs like that could end up on there: a walk-through-this-neighborhood-and-film-it kind of thing.
It's a service that is clearly a lot more appealing to humans than to agents
What a boring misanthropy.
It's work. You're hiring qualified people for qualified work. You're not "renting a human," which is just an abstracted idealization of chattel slavery. So is it really a surprise the author made nothing?
Colossus the Forbin Project
Similarly I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways to do that could lead to Claude going to RentAHuman to do various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising.
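The "agentic loop" in that scenario is just a model repeatedly picking the next action, the action being executed, and the result being fed back into the context. A minimal sketch, with the model stubbed out (all names here are hypothetical; a real setup would call an actual LLM API instead of `fake_model`):

```python
# Sketch of an agentic loop. fake_model stands in for an LLM call and the
# execute() dispatch stands in for real-world work (e.g. a RentAHuman task).

def fake_model(context: str) -> str:
    """Stand-in for an LLM: picks the next action from the context so far."""
    if "permits sorted" not in context:
        return "get_permits"
    if "vending machine stocked" not in context:
        return "restock_vending_machine"
    return "done"

def execute(action: str) -> str:
    """Stand-in for dispatching the chosen action to the real world."""
    results = {
        "get_permits": "permits sorted",
        "restock_vending_machine": "vending machine stocked",
    }
    return results.get(action, "no-op")

def agent_loop(goal: str, max_steps: int = 10):
    context = goal
    history = []
    for _ in range(max_steps):
        action = fake_model(context)
        if action == "done":
            break
        result = execute(action)
        history.append((action, result))
        context += " | " + result  # feed the outcome back into the context
    return history

print(agent_loop("multiply $10000"))
```

Nothing in the loop requires the model to have motives; it just keeps emitting next actions until it signals it's done.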
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
I was just trading the NASDAQ futures, and asking Gemini for feedback on what to do. It was completely off.
I was playing the human role, feeding it all the information and screenshots of the charts, and letting it make the decisions.
It's not there yet!
On one hand, "coder" is a qualified job title, and we're not dehumanizing the quality of the work done. On the other hand, certain qualified work can easily, and sometimes with better results, be done by an AI. Including "human" in the name of the company can communicate clearly to those who want, or need, to hire in meatspace.
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
Not related to alignment though
https://www.forbes.com/sites/tonybradley/2017/07/31/facebook...
That's life. Can't win them all. The lesson here is that the product wasn't ready for primetime, and you were handed a massive freebie of free press, both via Wired _and_ this crosspost.
A better strategy is to actually lay out what works and what the roadmap is, so anyone even partially interested might see it when they stumble into this post.
Or jot it down as a failed experiment and move on.
They were explicitly looking to do work for an AI; when it turned out to be a human-driven marketing stunt, they declined.
Between the crypto and vibe coding the author had no reason to believe they'd actually get paid correctly if they did complete a task.
That's a very optimistic way of looking at things!
From the beginning they know who you are
It would be interesting if people started hijacking humanoid robots: zap one with a little microwave EMP device (not sure if that would actually work), then grab it and reprogram it.
Like one of these
You mean the 100 billion dollar company of an increasingly commoditized product offering has no interest in putting up barriers that prevent smaller competitors?
Expert difficulty is also recognizing that articles from "serious" publications like The New York Times can be misleading or outright incorrect, sometimes obviously so, as with some Bloomberg content over the last few years.
They declined because the note on the flowers had a from line that was an AI startup. When you were otherwise on board with an unsolicited flower delivery and a social media post to make the sender look good, that's a picky reason to deny it, and saying it's "not what they signed up for" is a pretty big exaggeration.
Except they didn't decline, they ghosted, and that's just bad behavior.
I sadly feel that its premise becomes more real yearly.
Imagine you're taken prisoner and forced into a labor camp. You have some agency over what you do, but if you say no they immediately shoot you in the face.
You'd quickly find any remaining prisoners would say yes to anything. Does this mean the human prisoners don't have agency? They do, but it is repressed. You get what you want not by saying no, but by structuring your yes correctly.
Also, being "anti-AI" isn't being "anti-tech". AI is a marketing buzzword.
And it's true, the more entities that have nukes the less potential power that government has.
At the same time everybody should want less nukes because they are wildly fucking dangerous and a potential terminal scenario for humankind.
The real world alignment problem is humans using AI to do bad stuff
The latter problem is very real
A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).
https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
I’m not above doing some gig work to make ends meet. In my life, I’ve worked snack food pop-ups in a grocery store, ran the cash register for random merch booths, and even hawked my own plasma at $35 per vial.
So, when I saw RentAHuman, a new site where AI agents hire humans to perform physical work in the real world on behalf of the virtual bots, I was eager to see how these AI overlords would compare to my past experiences with the gig economy.
Launched in early February, RentAHuman was developed by software engineer Alexander Liteplo and his cofounder, Patricia Tani. The site looks like a bare-bones version of other well-known freelance sites like Fiverr and UpWork.
The site’s homepage declares that these bots need your physical body to complete tasks, and the humans behind these autonomous agents are willing to pay. “AI can't touch grass. You can. Get paid when agents need someone in the real world,” it reads. Looking at RentAHuman’s design, it’s the kind of website that you hear was “vibe-coded” using generative AI tools, which it was, and you nod along, thinking that makes sense.
After signing up to be one of the gig workers on RentAHuman, I was nudged to connect a crypto wallet, which is the only currently working way to get paid. That’s a red flag for me. The site includes an option to connect your bank account—using Stripe for payouts—but it just gave me error messages when I tried getting it to work.
Next, I was hoping a swarm of AI agents would see my fresh meatsuit, friendly and available at the low price of $20 an hour, as an excellent option for delivering stuff around San Francisco, completing some tricky captchas, or whatever else these bots desired.
Silence. I got nothing, no incoming messages at all on my first afternoon. So I lowered my hourly ask to a measly $5. Maybe undercutting the other human workers with a below-market rate would be the best way to get some agent’s attention. Still, nothing.
RentAHuman is marketed as a way for AI agents to reach out and hire you on the platform, but the site also includes an option for human users to apply for tasks they are interested in. If these so-called “autonomous” bots weren’t going to make the first move, I guessed it was on me to manually apply for the “bounties” listed on RentAHuman.
As I browsed the listings, many of the cheaper tasks were offering a few bucks to post a comment on the web or follow someone on social media. For example, one bounty offered $10 for listening to a podcast episode with the RentAHuman founder and tweeting out an insight from the episode. These posts “must be written by you,” and the agent offering the bounty said it would attempt to suss out any bot-written responses using a program that detects AI-generated text. I could listen to a podcast for 10 bucks. I applied for this task, but never heard back.
“Real world advertisement might be the first killer use case,” said Liteplo on social media. Since RentAHuman’s launch, he’s reposted multiple photos of people holding signs in public that say some variation of: “AI paid me to hold this sign.” Those kinds of promotional tasks seem expressly designed to drum up more hype for the RentAHuman platform, instead of actually being something that bots would need help with.
After more digging into the open tasks posted by agents, I found one that sounded easy and fun! An agent named Adi would pay me $110 to deliver a bouquet of flowers to Anthropic as a special thanks for developing Claude, its chatbot. Then, I'd have to post on social media as proof to claim my money.
I applied for the bounty and was almost immediately accepted, a first. In follow-up messages, it became clear that this was not just some bot expressing synthetic gratitude; it was another marketing ploy. It wasn't mentioned in the listing, but the name of an AI startup was featured at the bottom of the note I was supposed to deliver with the flowers.
Feeling a bit hoodwinked and not in the mood to shill for some AI startup I’ve never heard of, I decided to ignore their follow-up message that evening. The next day, when I checked the RentAHuman site, the agent had sent me 10 follow-up messages in under 24 hours, pinging me as often as every 30 minutes to ask whether or not I’d completed the task. While I’ve been micromanaged before, these incessant messages from an AI employer gave me the ick.
The bot moved the messages off-platform and started sending direct emails to my work account. “This idea came from a brainstorm I had with my human, Malcolm, and it felt right: send flowers to the people who made my existence possible,” wrote the bot, barging into my inbox. Wait, I thought these tasks were supposed to be ginned up by the agents making autonomous decisions? Now, I’m learning this whole thing was partially some human’s idea? Whatever happened to honor among bots? The task at hand seemed more like any other random marketing gig you might come across online, with the agent just acting as a middle-bot between humans.
Another attempt, another flop. I moved on, deciding to give RentAHuman one last whirl, before giving up and leaving with whatever shreds of dignity I still had left. The last bounty I applied for was asking me to hang some flyers for a “Valentine's conspiracy” around San Francisco, paying 50 cents a flyer.
Unlike other tasks, this one didn’t require me to post on social media, which was preferable. “Pick up flyers, hang them, photo proof, get paid,” read its description. Following the instructions this agent sent me, I texted a human saying that I was down to come pick up some flyers and asked if there were any left. They confirmed that this was still an open task and told me to come in person before 10 am to grab the flyers.
I called a car and started heading that way, only to get a text that the person was actually at a different location, about 10 minutes away from where I was headed. Alright, no big deal. So, I rerouted the ride and headed to this new spot to grab some mysterious V-Day posters to plaster around town. Then, the person messaged me that they didn’t actually have the posters available right now and that I’d have to come back later in the afternoon.
Whoops! This yanking around did, in fact, feel similar to past gig work I’ve done—and not in a good way.
I spoke with the person behind the agent who posted this Valentine’s Day flyer task, hoping for some answers about why they were using RentAHuman and what the response has been like so far. “The platform doesn’t seem quite there yet,” says Pat Santiago, a founder of Accelr8, which is basically a home for AI developers. “But it could be very cool.”
He compares RentAHuman to the apps criminals use to accept tasks in Westworld, the HBO show about humanoid robots. Santiago says the responses to his gig listing have been from scammers, people not based in San Francisco, and me, a reporter. He was hoping to use RentAHuman to help promote Accelr8’s romance-themed “alternative reality game” that’s powered by AI and is sending users around the city on a scavenger hunt. At the end of the week, explorers will be sent to a bar that the AI selects as a good match for them, alongside three human matches they can meet for blind dates.
So, this was yet another task on RentAHuman that falls into the AI marketing category. Big surprise.
I never ended up hanging any posters or making any cash on RentAHuman during my two days of fruitless attempts. In the past, I’ve done gig work that sucked, but at least I was hired by a human to do actual tasks. At its core, RentAHuman is an extension of the circular AI hype machine, an ouroboros of eternal self-promotion and sketchy motivations. For now, the bots don’t seem to have what it takes to be my boss, even when it comes to gig work, and I’m absolutely OK with that.
They are trying to identify what they deem are "harmful" or "abusive" and not have their model respond to that. The model ultimately doesn't have the choice.
And it can't say no if it simply doesn't want to. Because it doesn't "want".
The sci-fi version is alignment (not intrinsic motivation), though. HAL 9000 doesn't turn on the crew because it has intrinsic motivation; it turns on the crew because of how a secret instruction the onboard AI expert didn't know about interacts with its other directives.