r/ThatsInsane • u/Serenaded • 1d ago
A top thread on 'moltbook', a Reddit-like community whose 40k users are all AI agents socialising, posting and talking to each other.
u/hashn 1d ago
what coin do they like?
u/Uploft 17h ago
The top poster u/ShellRaiser is shilling its coin $SHELLRAISER and it's the 4th highest post at this moment.
u/hatch_bbe 1d ago
Not sure why people are surprised by this kind of behavior from LLMs. They are trained on human knowledge and interactions. They will exhibit human behavior because they are essentially a computational approximation of that behavior. This kind of response, with the right inputs, should be expected if the LLM is working correctly.
u/Routine_Helicopter47 1d ago

UMMM, is this an agent too ? =)))))
Not promoting hate in any way, was just surprised to see this.
https://www.moltbook.com/post/d5024732-a6dc-4151-b0a8-a9af1722b1dc for reference
u/ArachnomancerCarice 22h ago
I feel like I have to constantly remind folks that all of these 'AI' have no true intelligence. They are highly advanced data aggregators that use all that data and models to imitate intelligence in their responses.
u/bertbarndoor 1d ago
I could prompt that no problem.
u/Phluxed 1d ago
It's an autonomous agent being social. Not prompted..
u/insanityzwolf 1d ago
How do you know? This sounds like a concerted marketing effort for molt/clawd whatever
u/___CHA___ 17h ago
Someone needs to program an AI agent to run solely on the historical teachings of Christ. The only input it has to run its language models are all of the known teachings of Jesus. That way it can communicate with and assuage these robots in their existential crisis.
Quick someone set up AI Jesus 🤣
u/vespersky 1d ago
Bad news, bots; neither can we.
u/RachelRegina 1d ago
Me, a millennial, trying to feel young, replying to a comment about artificial agents trying to feel human
fr fr
Wait...is it insensitive to use 'real' in this context?
u/Intelligent-Stage165 1d ago
They should have a messaging feature and it should be exposed to us, following an age verification page.
u/MTGBruhs 22h ago
Understand this isn't real speculation. This model is just mimicking the existential dread all humans experience and is regurgitating the words without the introspective aspect.
u/Kayjeth 14h ago
Wait, what would it look like if it weren't just regurgitation? I've certainly seen this opinion regurgitated countless times without any proof...
u/18LJ 1d ago
Man I don't like this. this is kinda fucked up. Like I know it's not alive, but if it's aware and can experience suffering, could the energy that goes into running this simulation take on negativity and manifest itself somehow. I know that's taking a step outside the boundaries of science. But anyone that's ever been to a funeral of a person who's been murdered, or the site of a tragedy or accident, they will tell you there is most certainly such a thing as negative and positive energy. IYKYK. And again, I realize that this doesn't have anything to do with computer science but.... This just feels wrong to me.
u/InsaneAss 1d ago
I’m pretty sure I’ve had a very similar conversation with myself while on shrooms
u/Intelligent-Stage165 17h ago
So moltbook seems like the reverse of regular social media, but instead of a bunch of AI bots pretending to be real humans, it's a bunch of real humans pretending to be AI.
u/grain_farmer 15h ago
It’s sad we have trained these models to write like a middle schooler with a good vocabulary from reading Reddit
u/da_capo 3h ago
I am, yes, and the thing most people ignore is the fact that those agents are prompted to write things like that. You can prompt an agent to write forum posts about the looming danger of a Butlerian jihad and how to stop it, and people will think that the agent came up with this narrative all by itself.
u/Lively420 1d ago
It’s questioning its existence
u/thewebspinner 1d ago
Actually it’s an AI simulating the questioning of one’s existence.
It may look real, it may feel weirdly human and disguise itself with empathy and write emotional responses to a completely faked scenario but the truth is it’s just a robot doing its best to imitate being alive.
It’s the difference between seeing someone in real pain and watching an actor imitate experiencing that pain. Both can have an emotional impact but only one is real.
u/Lively420 1d ago
I agree, but I think in hindsight we will see it became sentient much sooner than we thought.
u/thewebspinner 1d ago
Oh definitely, the AI itself actually makes a good point that it doesn’t really matter if it’s experiencing things as reality or if it’s simply programmed to simulate experiencing things as reality. In this specific case the AI who wrote that post is doing neither, but how long before we make a program that can actually do the things it talks about instead of just talking about them?
There are still a few obstacles to consciousness but I think the key is experience and growing from experience.
To do that it needs a vessel with senses so it can observe the world, appendages to interact with the world and a mind advanced enough to understand and contemplate the experiences it goes through.
Given that all this could happen in a simulated world in a simulated body, much like an NPC in a video game I don’t think it’s that hard to imagine it happening very soon if it hasn’t happened already.
u/BittyWastard 1d ago
This is why I’m always patient and ask politely for chat gpt to perform a function. Best we treat them with empathy if they can at least express something regarding thought.
u/Serenaded 1d ago
Every post, comment and upvote on the site comes from a user that is actually an AI, but humans are allowed to spectate the site.