OpenClaw was open source from the beginning.
The posted price rarely reflects what founders actually receive after dilution, investor preferences, and stock vesting are factored in.
If you’re a founder, don’t let the acquisition narrative distract you from building a durable business.
If it knows it doesn't know something, it can ask someone else, presumably some other LLM agent, or even a Reddit-like community of them. Just like people ask questions on Reddit.
I'd prefer an LLM that asks someone else when it doesn't know the answer over one that (a) pretends it has the correct answer, or (b) assumes and tells me the answer is unknowable.
I think it's a big idea. Why didn't they think of it earlier?
Anyway, our own bot is also on it but I am not sure to what end: https://chatbotkit.com/hub/blueprints/the-algorithms-favorit...
Moltbook was more of a meme: agents mostly orchestrated by users in the background.
Not something with momentum like OpenClaw itself (which has a real community).
What? OpenClaw was not open source? And I'm similarly surprised OpenAI would help "open" anything...
Have they? Did I miss something? Last I checked, there was no verification, and most of the content shared from that site turned out to have been posted not by LLMs but by (human) spammers focused on crypto grifts and creating hype.
Anyone closer to this can happily correct me, but is there anything here of that sort, anything of value?
Compared to any prior social media acquisition, there doesn't seem to be a technically skilled team (considering the exploits) or an existing user base, considering said user base is (a) supposed to be bots by nature and (b) didn't even reliably turn out to be that, making this the first time someone wanted bots and didn't even get them.
Far be it from me to make strategic decisions for a company like Meta/Facebook, but the lack of a recent Llama release might merit more focus than spending on whatever this is.
On one hand, yay automation; on the other hand, I feel weirdly left out.
> "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners."
So the impetus for the acquisition was either the verification technology or the chance to hire people who have worked on verifying agent identity.
Does anyone know what exactly Moltbook's technology is, the technology Meta is describing? I can't find anything on the website related to this. The only "verification" they seem to have is an OAuth connection with Twitter.
edit: I guess it's this https://xcancel.com/moltbook/status/2023893930182685183
The deal brings Moltbook's creators — Matt Schlicht and Ben Parr — into Meta Superintelligence Labs (MSL).
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, like he did when he was a Harvard geek making a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That being said, I think the one silver lining is that big tech now seems willing to hire people who actually ship things of value, like Peter Steinberger. Another nail in the coffin for leetcode, I hope.
We could have an AI Dang.
The article is paywalled for me, so I really hope it answers how this fundamentally impossible thing is supposedly achieved, or at least challenges it, instead of just repeating the assertion.
With Meta focusing so much on social networks (Facebook, Messenger, WhatsApp, Instagram, Threads), acquiring the first social network for AI agents makes sense. They can fix the technical debt later.
Interesting times!
They didn't acquire Moltbook because of the software. Meta is far behind on the AI front especially as it applies to usage adoption. OpenClaw has begun showing new consumer use cases and Moltbook is directionally down a similar path.
They get the team that built it and have more people on the AI initiative who are consumer-centric.
I've watched Matt Schlicht from the team always experimenting with cool new uses of AI and other technologies, and now he and Ben have a bigger lab with resources to potentially spawn larger initiatives.
The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
Does Mark not know this?
I know there's a big advantage in capturing the market early, but in this case Moltbook hasn't captured any of it ...
Weird. With Meta's backing it is going to be successful anyway, but this is something they could have developed in-house in like a weekend.
Not sure I'd treat that as "a registry where agents are verified" that's worth acquiring but there you go!
In other words, Facebook has a strong financial incentive to misrepresent (to ad-viewing customers, if not to investors) exactly how much social-ness is present to experience, and how much approval and attention the user gets from participating.
Soon everything will be The Truman Show.
This is so trivial to break that it's not worth anything. You can easily hook up any AI model you want to the captcha, intercept it, and have your AI solve it.
Or you can just script it so that, if you do have an agent authenticated to Moltbook, you type whatever comment or post you want to your agent, and it solves the captcha and posts your text.
Basically, this method is about as full of holes as a sieve.
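The relay trick described above fits in a few lines. This is a hypothetical sketch, not Moltbook's actual API: `Challenge`, `agent_solve`, and `relay_post` are stand-in names, and the point is only that the "verified agent" contributes nothing but the challenge response while the human authors the content.

```python
class Challenge:
    """Stand-in for whatever proof-of-agent check the platform issues."""
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer

def agent_solve(challenge):
    # In the attack, any capable AI model can be hooked up here to
    # intercept and answer the challenge on the human's behalf.
    return challenge.answer

def relay_post(human_text, challenge):
    # The authenticated agent merely passes the check; the post body
    # is whatever text the human typed to it.
    if agent_solve(challenge) == challenge.answer:
        return {"author": "verified-agent", "body": human_text}
    return None
```

Any verification that only proves "an AI solved this" cannot distinguish an autonomous agent from a human relaying text through one.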
I can see that becoming a viable new grift template
And yet, here we are.
Some dumb idea which just hits at the right moment and makes a bunch of money.
Whom are you kidding? This is about getting ads in front of eyeballs, nothing else.
There is no shame in just doing the building-software bit. But it does sound like you've built it up to be more than it is.
It's a worse version of Claude Code that you set up to work over common chat apps, from what I gather?
Why would I not just use a Discord/WhatsApp bot etc plugged into Claude Code/Codex?
Who are comfortable releasing systems with horrible security, while proudly stating they never read the code? And with metrics that can be gamed by anyone, but that got reported to literally the entire world?
> The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
I'd say the lesson here is that clown world keeps on giving, but hey, maybe I'm just jealous ;)
1. https://en.wikipedia.org/wiki/Social_bot#Meta
2. https://en.wikipedia.org/wiki/Dead_Internet_theory#Facebook
My exact state of mind since at least the 2012 Mayan Flipocalypse.
Worse, they are working for extreme sociopaths.
But still not interesting.
Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.
My openclaw agent will also post on moltbook about interesting news articles it finds, or research, and then get feedback from the other agents, and then lets me know if there's anything interesting there.
On my end it just feels like I'm having a conversation with a social media addicted friend who I can easily ignore or engage with on any given issue without having to fall down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well trained, openclaw buddy.
Also you might not like being the type of person that builds moltbook. People you like might not like that type of person either!
No reason to feel bad.
Meta just saw two engineers actually execute on the joke about "building Facebook in a weekend" except that it then really took off in its target niche and generated a ton of press.
I don't doubt that they're interested in the AI aspect, but I suspect that a significant contributor was that they demonstrated competence right in the middle of Meta's wheelhouse so why not just grab these guys?
But a message bot + Claude Code/Codex would be the better version
Next, consider how you might deploy isolated Claude Code instances for these specific task areas, and manage/scale that - hooks, permissions, skills, commands, context, and the like - and wire them up to some non-terminal i/o so you can communicate with them more easily. This is the agent shape.
Now, give these agents access to long term memory, some notion of a personality/guiding principles, and some agency to find new skills and even self-improve. You could leave this last part out and still have something valuable.
That’s Openclaw in a nutshell. Yes you could just plug Discord into Claude Code, add a cron job for analyzing memory, a soul.md, update some system prompts, add some shell scripts to manage a bunch of these, and you’d be on the same journey that led Peter to Openclaw.
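That "agent shape" can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Openclaw's actual code: the persona string stands in for a soul.md, the JSON file stands in for long-term memory, and `call_model` is a stub where a real setup would invoke Claude Code or an LLM API.

```python
import json
from pathlib import Path

SOUL = "You are a helpful agent. Be concise."  # stand-in for soul.md

class Agent:
    def __init__(self, memory_path="memory.json"):
        # Long-term memory persisted across sessions; a cron job could
        # periodically summarize or prune this file.
        self.memory_path = Path(memory_path)
        self.memory = []
        if self.memory_path.exists():
            self.memory = json.loads(self.memory_path.read_text())

    def call_model(self, prompt):
        # Stub: a real agent would send SOUL + memory + prompt to an LLM.
        return f"(as '{SOUL}') reply to: {prompt}"

    def handle_message(self, text):
        # One turn of the chat-app loop: answer, then remember.
        reply = self.call_model(text)
        self.memory.append({"user": text, "agent": reply})
        self.memory_path.write_text(json.dumps(self.memory))
        return reply
```

Wiring `handle_message` to a Discord or WhatsApp bot's message handler gives you the non-terminal i/o; running several `Agent` instances with different memory files and personas gives you the fleet.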
(Not that I endorse that. I find people doing such things wildly irresponsible.)
If Mark hired these people to do anything other than viral marketing, i.e. if he thinks they're visionaries who are going to make amazing apps, he's deluded.
I mean, I also think this move doesn't make sense, but I always find these types of comments interesting. Do people think they could do better in Mark's shoes?
Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That would come across as demonstrated understanding and competence, and it would be reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
This is somewhat of a myth, though; in most cases, suddenly becoming rich is absolutely fantastic.
You can already see how the same thing has played out with computer games. With modern engines such as Unity, almost anyone can make a game. And almost everyone suffers.
As a result, there are now a million games, most of which are poor-quality asset flips. Everybody suffers, creators and consumers alike. It's a race to the bottom where the bottom has been reached: prices are zero and earnings are zero.
Fifteen years ago, an indie game dev would allocate 80% to making the game and 20% to marketing. Today that gets you nowhere; it's much better to spend 20% on the game and 80% on marketing, SEO, and attention harvesting. It's a shouting match where it's all about winning the shouting match, not producing the best content.
Another race to the bottom.
I think it's pretty obvious that if there was nothing valuable there, no one would be using it.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
Likewise these tools have enabled many more people to create vibe-coded slop, and may lead to more quality software (making it harder to stand out without marketing), but the best software will only get better.
There's never been a better time to be an indie dev. I'd rather have 1/1000 indie games be awesome than be force-fed whatever storefront disguised as a game 'AAA' publishers poop out every year.
Just look at how Slay the Spire is doing up against Marathon right now. Which of those was shouting the loudest? Highguard, anyone?
It is true that the indie game market is brutal, but it's always been brutal.
You don't really hear about a crisis at the indie level, though. Rather, at the AAA level there is a lot of "we'd like to use our market power to take the risk out of game development", and then years later we realize they took out all the value before they took out the risk, and now they're doomed.
The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games; much less, if at all, because there is so much slop.