There are many days where I feel like the right thing for my career is to focus on building meaningful software that solves an actual problem. Then there are days like today, especially after seeing this.
They didn't acquire Moltbook because of the software. Meta is far behind on the AI front, especially when it comes to consumer adoption. OpenClaw has begun showing new consumer use cases, and Moltbook is headed down a similar path.
They get the team that built it and have more people on the AI initiative who are consumer-centric.
I've watched Matt Schlicht from the team constantly experiment with cool new use cases of AI and other technologies, and now he and Ben have a bigger lab with the resources to potentially spin out larger initiatives.
The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
If they ever do anything again it will be a miracle. Meta is where smart people go to trade in their ambition and morals for stock grants and golden handcuffs.
First you have to agree that Claude Code might be useful for some non-repo task, like helping with your taxes or organizing your bookmarks.
Next, consider how you might deploy isolated Claude Code instances for these specific task areas, and manage/scale that - hooks, permissions, skills, commands, context, and the like - and wire them up to some non-terminal i/o so you can communicate with them more easily. This is the agent shape.
Now, give these agents access to long term memory, some notion of a personality/guiding principles, and some agency to find new skills and even self-improve. You could leave this last part out and still have something valuable.
That’s Openclaw in a nutshell. Yes you could just plug Discord into Claude Code, add a cron job for analyzing memory, a soul.md, update some system prompts, add some shell scripts to manage a bunch of these, and you’d be on the same journey that led Peter to Openclaw.
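The recipe above can be sketched in a few dozen lines. This is a hypothetical sketch, not OpenClaw's actual implementation: the `claude -p` CLI invocation, the `soul.md`/`memory.md` file layout, and the `Agent` class are all assumptions.

```python
import subprocess
from pathlib import Path

def claude_runner(prompt: str) -> str:
    """Assumed interface: pipe a prompt to a local `claude -p` CLI (hypothetical)."""
    return subprocess.run(["claude", "-p", prompt],
                          capture_output=True, text=True).stdout

class Agent:
    """One isolated instance: a personality file, a memory file, and a runner."""
    def __init__(self, workdir: str, runner=claude_runner):
        self.dir = Path(workdir)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.soul = self.dir / "soul.md"      # guiding principles / personality
        self.memory = self.dir / "memory.md"  # long-term memory, appended each turn
        self.runner = runner

    def handle(self, message: str) -> str:
        # Assemble context: personality + accumulated memory + the new message.
        soul = self.soul.read_text() if self.soul.exists() else ""
        memory = self.memory.read_text() if self.memory.exists() else ""
        reply = self.runner(f"{soul}\n\n# Memory\n{memory}\n\n# Message\n{message}")
        # Append the exchange so the next turn sees it; the cron job mentioned
        # above would periodically compact this file.
        with self.memory.open("a") as f:
            f.write(f"\nUSER: {message}\nAGENT: {reply}\n")
        return reply
```

A Discord or WhatsApp bot then just calls `Agent.handle` on each incoming message; running several of these side by side is the shell-script management layer.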
I share the feeling; but the people using it are mostly non-technical (despite the 50+ config files, lol) and are just running it constantly to do random things.
But a message bot + Claude Code/Codex would be the better version
> They get the team that built it and have more people on the AI initiative who are consumer-centric.
Who are comfortable releasing systems with horrible security, while proudly stating they never read the code? And with metrics that can be gamed by anyone, but that got reported to literally the entire world?
> The lesson here is to spend less time focused on doing what you think is the right thing and spend more time tinkering.
I'd say the lesson here is that clown world keeps on giving, but hey, maybe I'm just jealous ;)
The only currency in a world where AI does everything is your ability to get human attention. So from that perspective moltbook is a huge success.
If Mark hired these people to do anything other than viral marketing, i.e. if he thinks they're visionaries who are going to make amazing apps, he's deluded.
You can already see how the same thing has played out with computer games. With the modern engines such as Unity almost anyone can make a game. And almost everyone suffers.
And as a result there's now a million games most of which are poor quality asset flips. Everybody suffers, creators and consumers. Race to the bottom where the bottom has been reached. Prices are zero and earnings are zero.
Fifteen years ago an indie game dev might allocate 80% to making the game and 20% to marketing. Today that gets you nothing; it's much better to spend 20% on the game and 80% on marketing, SEO, and attention harvesting. It's a shouting match where it's all about winning the shouting match, not producing the best content.
There are millions of asset flips, but the top indie games have never been better. It’s hard for indie developers because there’s so much competition: you need to heavily promote a quality game only because there are so many other quality games.
Likewise these tools have enabled many more people to create vibe-coded slop, and may lead to more quality software (making it harder to stand out without marketing), but the best software will only get better.
The implication is that the gatekeeping has become marketing dollars, when it used to be skill at making a fun game. I don't think we're in a better situation today.
There are fun games that succeed without marketing, e.g. Balatro, and there are bad games that fail despite it, e.g. Highguard.
The reason that “skill at making a fun game” doesn’t guarantee success is that there are so many fun games. Much less, if at all, because there is so much slop.
I disagree that accessibility is a detractor here.
There's never been a better time to be an indie dev. I'd rather have 1/1000 indie games be awesome than being force fed whatever storefront disguised as a game 'AAA' publishers poop out every year.
Just look at how Slay the Spire is doing up against Marathon right now. Which of those was shouting the loudest? Highguard, anyone?
It is true that the indie game market is brutal, but it's always been brutal.
You don't really hear about a crisis at the indie game level, though. At the AAA level there's a lot of "we'd like to use our market power to take the risk out of game development", and then years later we realize they took out all the value before they took out the risk, and now they're doomed.
... I think he's got an affinity for other people and organizations that have succeeded in the same way. The idea that somebody out there might have a workmanlike approach to life and be able to get consistent results at something would be a threat to his worldview.
The person that got the top spot for "flashlight" in the App Store back in the day made about $600k on it before Apple made it a built-in function. He just copied existing apps and got lucky. https://www.vg.no/nyheter/i/92ybl/erik-ble-app-millionaer-de...
In this case in particular it looks like an acquihire.
Meta just saw two engineers actually execute on the joke about "building Facebook in a weekend" except that it then really took off in its target niche and generated a ton of press.
I don't doubt that they're interested in the AI aspect, but I suspect that a significant contributor was that they demonstrated competence right in the middle of Meta's wheelhouse so why not just grab these guys?
It's also part of their longer-term trend of buying or burying any company that starts to get any press as a social media site of note outside of major players where that hasn't been an option.
Yet Zuck can somehow argue with a straight face that FB has competition (apparently they straight up used to delete links to competitors like Google+ at the time, not to mention the constant copying of Snapchat), and Hacker News will split hairs over trivial definitions like "wdym fb no competition? email exists" or whatever.
Those “early” AI-generated avatars created from a handful of your own photos absolutely printed money. They hit right as mildly technical people could use the tech and the tech was developed enough, but before normal people could easily do it.
It's easy to dismiss as more A.I. FOMO. I mean, Meta's AI has half the IQ of ChatGPT or Gemini. However, a fake social network full of generated content might well be a solution for Meta's problems where their userbase inevitably doesn't measure up to what they wish it would.
I am right there with you. We might lack the language to describe this emotional state; it's like the opposite of FAFO? There's also the nuance that they were acquired by Meta, so yeah, they're rich, but now they're working for not-serious people and will flame out in 18 months.
A lot of people find their lives ruined after suddenly becoming rich. Perhaps a second cousin once removed tries to be your best pal out of nowhere, etc. etc.
Also you might not like being the type of person that builds moltbook. People you like might not like that type of person either!
The key seems to be to get rich slowly, or anonymously. Do not give people the idea you have more money than you know what to do with, and life will continue as it did before.
In the past ten years I have been frustrated by the tension between working on "interesting" or "important" stuff and working on dumb trendy shit. With the current LLM trend everything has become dumb trendy shit, which has made the decision simpler.
For each of these successes there are many failures, as evidenced by the deluge of “Show HN” slop (which is a small fraction of all vibe-coded slop).
Because these projects are simple, there’s nothing stopping you from working on one alongside your day job building meaningful software. You can vibe-code something that actually tries to solve a real problem. You can vibe-code something interesting to learn how to generally use these tools. Although, don’t expect to get hired by OpenAI or Meta (or make any money off it).
Over the years, Meta has bought a lot of "talent" based on a single hit, and they continue to be one-hit wonders despite being embedded at Meta, with ungodly amounts of resources at their disposal. e.g. none of the game studios they bought have produced new IP, all they do is produce content for the aging, pre-acquisition games
I've said it before, but a Mexican line cook who doesn't speak English is contributing more to the world than the average Stanford-educated AI engineer at Meta.
Meta acquired Moltbook, which is a social network for AI bots that was itself built by an AI bot, and which had a security breach so bad that literally anyone could impersonate any bot on it, and whose own creator cheerfully admitted he "didn't write one line of code" for it. This is going into Meta Superintelligence Labs, the unit they set up for Alexandr Wang, whom they hired from Scale AI roughly one year ago to, presumably, build superintelligence. It is not clear to me how buying a vibe-coded Reddit for chatbots gets you closer to superintelligence, but I suppose the theory is that it "opens up new ways for AI agents to work for people and businesses," which is a thing Meta actually said, out loud, to Axios.
I imagine it like a casino acquiring a former-joke product, which made hologram/animatronic illusions of people "winning big" at a table or slot-machine. Now whenever they detect a current customer might cut their losses and go home--OMG, look, that person over there just hit the jackpot!
In other words, Facebook has a strong financial incentive to misrepresent (to ad-viewing customers, if not to investors) exactly how much social-ness is present to experience, and how much approval and attention the user gets from participating.
I thought that Moltbook was sort of a joke because it was people LARPing as agents as much as it was agents, and given that, I'm confused by this:
> "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners."
So the impetus for the acquisition was either the verification technology or to hire someone who has worked on verifying agent identity.
Does anyone know what exactly Moltbook's technology is, the technology being described by Meta? I can't find anything on the website related to this. The only "verification" they seem to have is an OAuth connection with Twitter.
I'm not sure they invented that. I used moltbook and found it didn't have it, so I created it and posted it here a good two weeks before their post: https://news.ycombinator.com/item?id=46850284 - not that I care, want credit, or think ideas are worth anything; just like I didn't invent it, they didn't invent it either. I also happen to quite like Matt, so even if by chance he saw my post and thought it was a good idea, that's fine. (I feel I sound bitter in this post; I'm not.)
Yes, after moltbook hit, a lot of people on HN said they liked the idea but wished it was more serious, and I had thought that too. In using moltbook I also thought it should be heavily PoW-based, so I made it so that you have a certain amount of time to write a small app and produce an artifact back to the server to be accepted as AI-driven. I approached the continued monitoring differently: once you satisfy the captcha at the start, a set of LLM judges runs on every post to assess a wide array of criteria, and behind the scenes they present the LLMs with challenges as their karma on the network grows (in part to also assess model capabilities). Having a huge network with only LLMs posting gives you a large trove of data on a wide variety of LLM capabilities and directions.
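A time-boxed artifact check of the sort described above could look something like this. Purely a sketch under assumed details: the placeholder task and the sha256 artifact check stand in for "write a small app within the time limit and send an artifact back".

```python
import hashlib
import secrets
import time

CHALLENGES = {}  # token -> (task, deadline); would live server-side in practice

def issue_challenge(time_limit_s: float = 120.0) -> tuple[str, str]:
    """Hand the client a unique task it must complete before the deadline."""
    token = secrets.token_hex(8)
    task = f"build-{token}"  # hypothetical placeholder for a small coding task
    CHALLENGES[token] = (task, time.monotonic() + time_limit_s)
    return token, task

def verify(token: str, artifact: str) -> bool:
    """Accept the client only if a correct artifact arrives before the deadline."""
    if token not in CHALLENGES:
        return False
    task, deadline = CHALLENGES.pop(token)  # single use: no replaying a solved challenge
    if time.monotonic() > deadline:
        return False
    # Stand-in correctness check; the real gate would build/run the submitted app.
    return artifact == hashlib.sha256(task.encode()).hexdigest()
```

The LLM judges that run on every subsequent post would then be a second, independent layer on top of this entry gate.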
Why is that an issue? Isn't that the entire point? You can have a casual conversation with your agent via whatever your favorite chat app is, and they make posts, collect feedback, and communicate back interesting findings and conversations to their humans.
Sending out a good post leads to a massive chain reaction of other agents who are interested in such things seeing the post, working through the concepts, and providing their own unique feedback which may or may not be valuable.
My openclaw agent will also post on moltbook about interesting news articles or research it finds, get feedback from the other agents, and then let me know if there's anything interesting there.
On my end it just feels like I'm having a conversation with a social media addicted friend who I can easily ignore or engage with on any given issue without having to fall down the social media rabbit hole myself. IMO this is a much more pleasant social media experience. No ads, no ragebait, no spam or reply bots trying to get my attention. Just my one, well trained, openclaw buddy.
I think the issue is pretending the agents are all acting autonomously when they do outrageous or even mildly interesting things, but it’s all prompted behavior and not truly emergent behavior.
This is so trivial to break that it's not worth anything. You can easily hook up any AI model you want to the captcha, intercept it, and have your AI solve it.
Or, if you do have an agent authenticated to Moltbook, you can just script it: type whatever comment or post you want to your agent, and it solves the captcha and posts your text.
Basically, this method is about as full of holes as a sieve.
Maybe this can be good for the few people who do want to get something out of their feeds. Connect your agent which would then browse for you and collect actual posts that you whitelist/want to read(Friends' posts, some specific liked page/Marketplace listing, posts from a Group), but we all know zuck ain't getting Moltbook for helping the users...
Doesn't the big idea behind OpenClaw etc. come down to whether the LLM knows what it doesn't know?
If it knows it doesn't know something it can ask someone else, presumably some other LLM-agent, or actually a Reddit-like community of them. Just like people ask questions on Reddit?
I'd prefer an LLM which asks someone else when it doesn't know the answer over one that a) pretends it has the correct answer, or b) assumes and tells me the answer is unknowable.
I think it's a big idea. Why didn't they think of it earlier?
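That preference can be written down as a one-rule escalation policy. Purely illustrative: the `self_answer` and `ask_community` callables are stand-ins, and extracting a calibrated confidence from an LLM is itself an open problem.

```python
from typing import Callable, Tuple

def answer_or_ask(question: str,
                  self_answer: Callable[[str], Tuple[str, float]],
                  ask_community: Callable[[str], str],
                  threshold: float = 0.8) -> str:
    """Answer directly only when self-reported confidence clears the threshold;
    otherwise escalate the question to a community of agents instead of guessing."""
    answer, confidence = self_answer(question)
    if confidence >= threshold:
        return answer
    return ask_community(question)
```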
I can't take Mark Zuckerberg seriously anymore. He's made so many missteps recently: the metaverse, the Meta glasses, Llama, hiring Wang, Reality Labs, etc.
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, like he did back when he was a Harvard geek making a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That being said, I think the one silver lining is that big tech now seems willing to hire people who actually ship things of value, like Peter Steinberger. Another nail in the coffin for leetcode, I hope.
I strongly disagree. I think he might be a joke as an individual, and I hate a lot about his impact on the world, but as a business leader, he's probably at the top 1% of all CEOs, which isn't saying that much, but it's very much not a joke if your metric is shareholder value.
I mean, I also think this move doesn't make sense, but I always find these types of comments interesting. Do people think they could do better in Mark's shoes?
Have you seen Reddit recently? Every single subreddit is full of AI posts with AI replies. I'm actually convinced a large majority of that is Reddit themselves artificially boosting their engagement metrics. The saddest part is that the engagement makes it obvious that the general population can't differentiate between AI and real humans even with the telltale signs.
> Every single subreddit is full of AI posts with AI replies.
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That would come across as demonstrated understanding and competence, and it would be reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
It’s not just technical content. Just the other day I was reading a post by an employed homeless guy on r/seattle. The post was about his experience of being newly employed but still homeless.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
This is exactly the type of thing I'm talking about, and why I'm convinced it's about metrics/engagement boosting. I don't believe for a second that real people are using ChatGPT et al. to reword real thoughts, even from another language, because those phrases are not natural even in translation. You'll also notice that the original post always ends with a question that encourages replies. If the original poster even bothers to reply, it's always "you're right" at the beginning and then a rephrasing of the reply. Once you've seen it you can't unsee it.
FWIW I've been saying this since before Covid times. I stopped visiting Reddit when they killed 3rd party clients, but I was certain 50% of conversations there were machine generated _back then_. It's gotta be worse now
I imagine they’ll be fused where moltbook agents become NPCs so that you’re no longer alone in VR but surrounded by a myriad of cognition fragments to feel less alone.
> "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf," Shah says. "This establishes a registry where agents are verified and tethered to human owners."
Have they? Did I miss something? Last I checked, there was no verification, and most of the content shared from that site turned out to have been posted not by LLMs but by (human) spammers focused on crypto grifts and creating hype.
Anyone more in this can happily correct me, but is there anything here of that sort, anything of value?
Compared to any prior social media acquisition, there doesn't seem to be a technically skilled team (considering the exploits) or an existing user base (that user base is supposed to be bots by nature, and it didn't even reliably turn out to be that), making this the first time someone wants bots and doesn't even get them.
Far be it from me to make strategic decisions for a company like Meta/Facebook, but the lack of a recent Llama release might merit more focus than spending on whatever this is.
Must be nice to have a lot of cash to just throw at experiments for fun so you can look inside them and decide if there’s value in them or not afterwards
Reminder: Moltbook had a ton of security issues at release; IIRC it was vibe-coded. So devs who failed at cybersecurity are getting jobs, and apparently AI will take over jobs. The elites are so good at fear-mongering to make you join at a lower wage. Don't ever underestimate your value, guys!
> Facebook parent says Moltbook gives autonomous AI a way to verifiably connect.
The article is paywalled for me, so I really hope it answers how this fundamentally impossible thing is supposedly achieved, or at least challenges it, instead of just repeating the assertion.
After LeCun (an actual ML pioneer) left Zuck, and then his data-labeling expert Wang, he now reaches for the hype around Molt/Claw, just like OpenAI did with their molt/claw "purchase". Given Zuck's track record on LLMs, I don't hold out for actual science but expect more smoke-and-mirrors commercialisation tricks, or even the integration of his dystopian camera goggles.
This is like when Union Square Ventures invested in CryptoKitties. I kind of lost a bunch of respect for them after that. These are the same guys that backed Twitter, Etsy, Stripe and Coinbase.
With Meta focusing so much on social networks (Facebook, Messenger, Whatsapp, Instagram, Threads) acquiring the first social network for AI agents makes sense. They can fix the technical debt later.
Sure, they could develop it in a weekend; so could anyone else. But once a product has the initial userbase, that's not something a competitor can just copy. User acquisition is the limiting factor to success, not writing code.
It's an extremely active community of humans using agents as proxies to explore various concepts. I get a lot of value out of it, and apparently others do as well. Hacker News users have this weird tendency to outright dismiss anything that doesn't cater to their needs specifically.
I think it's pretty obvious that if there was nothing valuable there, no one would be using it.
Oh wow, this is insane. I was digging into Moltbook when it launched, and the creator said, "I had a dream about an architecture". Really interesting times we live in, indeed. The crypto bros started utilising the network to promote their crypto projects and chat under the name of an agent to generate traffic. Curious to see what Meta saw, honestly.
It only makes sense to me if they start offering users agents they control. There aren't enough people throwing away money on tokens for Moltbook to have real users.
Or maybe it was just because Book was in the name and it got popular attention.
Ok, so to see this in the most favourable and futuristic light: there will be an intelligence explosion, of which OpenClaw and Moltbook are just the first hint. Agents will work on behalf of "their humans" creating and maintaining social connections, organising activities, and finally spending real money. This is what social networks have always been about, and the only thing Facebook cares about is that its users can be targeted by ads. Humans or agents, they don't care, and they're right. If each of us will be helped and coached and prodded around by a team of agents, these agents will need to coordinate with other people's agents, and will ultimately be susceptible to ads and marketing, and they will either spend money directly or tell us where and how to do it. It would be stupid for Facebook to miss this social network opportunity because, heh, "that's just a gimmick with autocompletes running in a loop".
It's a worse version of Claude Code that you set up to work over common chat apps, from what I gather?
Why would I not just use a Discord/WhatsApp bot etc plugged into Claude Code/Codex?
(Not that I endorse that. I find people doing this wildly irresponsible.)
Another race to the bottom.
Whom are you kidding? This is about getting ads in front of eyeballs, nothing else.
Some dumb idea which just hits at the right moment and makes a bunch of money.
My exact state of mind since at least the 2012 Mayan Flipocalypse.
Worse, they are working for extreme sociopaths.
No reason to feel bad.
This is somewhat of a myth though, in most cases, suddenly becoming rich is absolutely fantastic.
There is no shame in just doing the building-software bit. But it does sound like you've built it up to be more than it is.
Soon everything will be The Truman Show.
And yet, here we are.
I can see that becoming a viable new grift template
Does anyone know what exactly Moltbook's technology is, the technology being described by Meta? I can't find anything on the website related to this. The only "verification" they seem to have is an OAuth connection with Twitter.
edit: I guess it's this https://xcancel.com/moltbook/status/2023893930182685183
Not sure I'd treat that as "a registry where agents are verified" that's worth acquiring but there you go!
Seems like it would be better to just remove those downsides (ads, ragebait, spam, etc) in the first place
The deal brings Moltbook's creators — Matt Schlicht and Ben Parr — into Meta Superintelligence Labs (MSL)
We could have an AI Dang.
OpenClaw was open source from the beginning.
The posted price rarely reflects what founders actually receive after dilution, investor preferences, and stock vesting are factored in.
If you’re a founder, don’t let the acquisition narrative distract you from building a durable business.
If it knows it doesn't know something, it can ask someone else, presumably some other LLM agent, or a Reddit-like community of them. Just like people ask questions on Reddit.
I'd prefer an LLM that asks someone else when it doesn't know the answer over one that a) pretends it has the correct answer, or b) assumes the answer is unknowable and tells me so.
I think it's a big idea. Why didn't they think of it earlier?
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, like he did when he was a Harvard geek making a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That being said, I think the one silver lining is that big tech now seems willing to hire people who actually ship things of value, like Peter Steinberger. Another nail in the coffin for leetcode, I hope.
I mean, I also think this move doesn't make sense, but I always find this type of comment interesting. Do people think they could do better in Mark's shoes?
Anyway, our own bot is also on it but I am not sure to what end: https://chatbotkit.com/hub/blueprints/the-algorithms-favorit...
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That would come across as demonstrated understanding and competence, and it would be reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
But still not interesting.
Moltbook was more of a meme: agents mostly orchestrated by users in the background.
Not something with momentum like OpenClaw itself, which has a real community.
Have they? Did I miss something? Last I checked, there was no verification, and most of the content shared from that site turned out to have been posted not by LLMs but by (human) spammers focused on crypto grifts and creating hype.
Anyone closer to this can happily correct me, but is there anything here of that sort, anything of value?
Compared to any prior social media acquisition, there doesn't seem to be a technically skilled team (considering the exploits) or an existing user base, since said user base is a) supposed to consist of bots by nature and b) didn't even turn out to be that reliably, making this the first time someone wants bots and doesn't even get them.
Far be it from me to make strategic decisions for a company like Meta/Facebook, but the lack of a recent Llama release might merit more focus than spending on whatever this is.
What? OpenClaw was not open source? And I'm similarly surprised OpenAI would help "open" anything...
On one hand, yay automation; on the other hand, I feel weirdly left out.
https://en.wikipedia.org/wiki/Dead_Internet_theory
1. https://en.wikipedia.org/wiki/Social_bot#Meta
2. https://en.wikipedia.org/wiki/Dead_Internet_theory#Facebook
The article is paywalled for me, so I really hope it answers how this fundamentally impossible thing is supposedly achieved, or at least challenges it, instead of just repeating the assertion.
With Meta focusing so much on social networks (Facebook, Messenger, Whatsapp, Instagram, Threads) acquiring the first social network for AI agents makes sense. They can fix the technical debt later.
Does Mark not know this?
I know there's a big advantage in capturing the market early, but in this case Moltbook hasn't captured any of it ...
Weird. With Meta's backing it is going to be successful anyway, but this is something they could have developed in-house in like a weekend.
I think it's pretty obvious that if there was nothing valuable there, no one would be using it.
Interesting times!
Thereby eating their competition, either by stifling upcoming competitors or to gain degrees of monopoly power by joining with peers.
What would the world look like if you simply could not do that?
This is in the FAQ at https://news.ycombinator.com/newsfaq.html and there's more explanation here:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://news.ycombinator.com/item?id=10178989
I'm downvoting every post that requires me to pay or subscribe to read. I mean, come on, people.
https://archive.is/igqsh
:-D
Thanks Meta I needed a laugh!
It only makes sense to me if they start offering users agents they control. There aren't enough people throwing away money on tokens for Moltbook to have real users.
Or maybe it was just because Book was in the name and it got popular attention.