36 comments

  • joshstrange 4 hours ago
    When an LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

    I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

    • Sharlin 4 hours ago
      Nontechnical people simply don't have any idea about what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.

      Also, many many people suffer from low self esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.

      • f1shy 3 hours ago
        I had an interesting conversation with a guy at work last week. We were discussing some unimportant matter. The guy has pretty high self-esteem, and even though he was arguing, in his own words, “out of belief and guess” while I knew for a fact what I was talking about, I had a hard time because he wouldn’t accept what I was saying. At some point he left, and came back with “Gemini says I’m right! So, no more discussion.” I asked what exactly he had asked it. He: “I have a colleague who is arguing X, I’m sure it’s Y. Who is right?!”

        Of course Gemini told him he was right! By a long shot. I asked Gemini the same thing, but as an open-ended question, and it answered basically what I had been saying.

        LLMs are pretty dangerous at confirming your own distorted view of the world.

        • bachmeier 2 hours ago
          I agree with your conclusion, but that's by design. The goal is not to tell people the truth (how would they even do that?). The goal is to give the answer that would have come from the training data if that question were asked. And the reality is that confirmation is part of life. You may even struggle to stay married if you don't learn to confirm your wife's perspectives.
          • wat10000 43 minutes ago
            The loss function rewards predicting the response found in the training data, or whatever subsequent RLHF prefers. The goal is usually to make money. Not only does the training data contain a lot of "you're absolutely right" nonsense, but that goal tends to push more of it in the RLHF step.
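            The pre-training objective being described (predict the next token the data actually contains) is just cross-entropy loss. A deliberately tiny, self-contained sketch; the numbers and the three-word toy vocabulary are invented for illustration:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token_loss(logits, target):
    """Cross-entropy: -log(probability assigned to the true next token)."""
    return -math.log(softmax(logits)[target])

# Toy scores a model might emit after the prompt "You're absolutely ..."
logits = {"right": 2.0, "wrong": 0.5, "purple": -1.0}

# Training lowers the loss for whichever token the corpus actually contains,
# so a corpus full of "you're absolutely right" rewards predicting "right".
loss_if_data_says_right = next_token_loss(logits, "right")
loss_if_data_says_wrong = next_token_loss(logits, "wrong")
```

            RLHF then nudges the same scores toward whatever raters (or engagement metrics) prefer, which is where the extra sycophancy comes in on top of what the data already contains.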
            • delusional 1 hour ago
              > The goal is to give the answer that would have come from the training data if that question were asked.

              Or more cynically, the goal is to give you the answer that makes you use the product more. Finetuning really diverges the model from what's in the training set and towards what users "prefer".

              • kakacik 1 hour ago
                > You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

                I don't dispute that, but man, that is some shitty marriage. Even rather submissive guys are not happy in such a setup, not at all. Remember, it's supposed to be for life or divorce/breakup, nothing in between.

                A lifelong situation like that... why don't folks do more due diligence on the most important aspect of a long-term relationship - personality match? It's usually not rocket science: observe behavior in conflicts, don't desperately appease in situations where one is clearly not to blame. Masks fall off quickly in heated situations, when people are tired, and so on. It's not perfect, but it's pretty damn good and covers >95% of the scenarios.

                • alterom 1 hour ago
                  >And the reality is that confirmation is part of life.

                  Sycophantic agreement certainly is, as is lying, manipulation, abuse, gaslighting.

                  Those aren't the good parts of life.

                  Those aren't the parts I want the machine to do to people on a mass scale.

                  >You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

                  Sorry what?

                  The important part is validating the way someone feels, not "confirming perspectives".

                  A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way"), even when the conclusion is incorrect ("however, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.")

                  You're doing nobody a favor by affirming they are correct in believing things that are verifiably, factually false.

                  There's a word for that.

                  It's lying.

                  When you're deliberately lying to keep someone in a relationship, that's manipulation.

                  When you're lying to affirm someone's false views, distorting their perception of reality - particularly when they have doubts, and you are affirming a falsehood, with intent to control their behavior (e.g. make them stay in a relationship when they'd otherwise leave) -

                  ... - that, my friend, is gaslighting.

                  This is exactly what the machine was doing to the colleague who asked "which of us is right, me or the colleague that disagrees with me".

                  It doesn't provide any useful information, it reaffirms a falsehood, it distorts someone's reality and destroys trust in others, it destroys relationships with others, and encourages addiction — because it maximizes "engagement".

                  I.e., prevents someone from leaving.

                  That's abuse.

                  That, too is a part of life.

                  >I agree with your conclusion, but that's by design

                  All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse).

                  Anyone can verify the correctness of the labeling in this context.

                  I agree with your assertion, as well as that of the parent comment. And putting them together we have this:

                  LLM chatbots today are abusive by design.

                  This shit needs to be regulated, that's all. FDA and CPSC should get involved.

                  • zzzeek 48 minutes ago
                    All this, and yet, people are so angered by the term "stochastic parrot".

                    I use LLMs every day - Claude, Gemini - and they're great. But they are very elaborate autocomplete engines. I'm not really shaking off that impression of them despite daily use.

                    • wat10000 22 minutes ago
                      It's weird. It's literally what they are. It's a gigantic mathematical function that takes input and assigns probabilities to tokens.

                      Maybe they can also be smart. I'm skeptical that the current LLM approach can lead to human-level intelligence, but I'm not ruling it out. If it did, then you'd have human-level intelligence in a very elaborate autocomplete. The two things aren't mutually exclusive.
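                      The "elaborate autocomplete" framing is easy to make concrete with a stand-in small enough to read in full. The bigram table below is invented for illustration; a real LLM replaces it with a learned function of the entire context, which is the whole difference:

```python
# Toy "language model": probability of the next token given the current one.
# A real LLM computes the same kind of distribution over tokens, but from
# billions of learned weights conditioned on the whole context.
BIGRAMS = {
    "you're":     {"absolutely": 0.7, "wrong": 0.3},
    "absolutely": {"right": 0.9, "sure": 0.1},
    "right":      {"<end>": 1.0},
}

def complete(token, max_steps=10):
    """Greedy decoding: repeatedly append the most probable next token."""
    out = [token]
    for _ in range(max_steps):
        dist = BIGRAMS.get(out[-1])
        if dist is None:        # nothing learned for this token: stop
            break
        best = max(dist, key=dist.get)
        if best == "<end>":
            break
        out.append(best)
    return " ".join(out)
```

                      Here `complete("you're")` walks the table and emits "you're absolutely right" - autocomplete, with no notion of whether the continuation is true.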

                  • lstodd 2 hours ago
                    It's more like insufficient emotional control is very dangerous. It's nothing new but I guess LLMs highlighted that problem a bit.
                  • joshstrange 4 hours ago
                          This is probably right. In the past I've "blown people's minds" explaining what "the cloud" was. They had no conception at all of what it meant, couldn't explain it, didn't have a clue. Maybe that's not so surprising, but they were amazed that "it's just warehouses full of computers", and they went on to tell me about other people they had explained it to (after learning it themselves) and how those people were also amazed.

                    I've talked with my family about LLMs and I think I've conveyed the "it's a box of numbers" but I might need to circle back. Just to set some baseline education, specifically to guard against this kind of "psychosis". Hopefully I would notice the signs well before it got to a dangerous point but, with LLMs you can go down that rabbit hole quickly it seems.

                    • saghm 2 hours ago
                            The way I've tried to explain LLMs to family members is that they produce something that fits the shape of what a response might look like, with no idea of whether it's correct or not. I feel like that's a more important point than "box of numbers", because people may still assume a box of numbers can hold enough data to figure out the answer to their question.

                            I think making it clear that the models are primarily a way of producing realistic-sounding responses (with the accuracy of those responses very much up to chance for the average person, since a layperson likely has no good way to know whether the answer is reflected in the training data) is a lot more compelling than explaining that it's all statistics under the hood. There are some questions where a statistical method might be far more reliable than a human answer, so it seems risky to convince them not to trust a "box of numbers" in general - but most of those questions are not going to be formulated in, and responded to in, natural language.
                      • joshstrange 2 hours ago
                              Oh, I agree, I was mostly calling it that here as shorthand. My actual explanations to family members have been that it's trained on a ton of data, and its output is it regurgitating things based on your input and things that are plausibly related. But my "box of numbers" mostly focuses on explaining that it doesn't "remember" and doesn't "learn"; different things are just injected into the context ("Memories", other chats, things you've told it about yourself explicitly or implicitly). Really driving home "there is no conversation; each message sends everything from scratch for a fresh instance of this to process". Trying in various ways to pull back the curtain, show that there is no magic here - it's predictably unpredictable, which is what makes it "lie" or "hallucinate" and what makes it so useful when used as a tool.

                              I think it really helps to have them ask questions in a domain where they are an expert, and see what it says. Expose them to "The Plumber Problem" [0]. Honestly, seeing it be wrong so often in code, or about the project I'm using it for, is what keeps me "grounded" - the constant reminder that you have to stay on top of it and can't blindly trust what it says. I'm also glad I used it in its earlier stages, when it was even "stupider"; it's better now, but the fundamental issues still lurk and surface regularly, if less regularly than a year or two ago.

                        [0] https://hypercritical.co/2023/08/18/the-plumber-problem
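                              The "each message sends everything from scratch" point can be sketched as pseudocode for a hypothetical chat client. The function and field names here are illustrative, not any vendor's actual API:

```python
def build_request(system_prompt, history, new_message):
    """One stateless chat turn: the model sees ONLY this list of messages.

    There is no conversational memory on the model side; "memory" features
    just mean the client injects extra text into this list every time.
    """
    return (
        [{"role": "system", "content": system_prompt}]
        + history                          # every prior turn, resent verbatim
        + [{"role": "user", "content": new_message}]
    )

history = [
    {"role": "user", "content": "Why is my faucet leaking?"},
    {"role": "assistant", "content": "Likely a worn washer..."},
]
request = build_request("You are a helpful assistant.", history,
                        "Which washer do I buy?")
# The request grows with every turn; nothing persists between calls.
```

                              Delete the `history` list and the "conversation" is gone; there was never anyone on the other end remembering it.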

                        • dasil003 2 hours ago
                          Agreed. One thing I’ve found striking is how far LLMs can get with pure language and the recognition that humans often operate with a similar kind of abstract conceptual reasoning that is purely language based and pretty far removed from facts and tenuously connected to objective reality. It takes a certain kind of mind to be curious and unpack the concepts that most of us take for granted most of the time. At best people don’t usually have time or patience to engage in that level of thinking, at worst it can actively lead to cognitive dissonance and anger. So of course a consumer chatbot is not going to be tuned to bring novel insight, it must default to some level of affirmation or it will fail as a product. One who is aware of this can work around it to some degree, but fundamentally the incentives will always push a consumer chatbot to essentially be junk food for the brain.
                          • peyton 2 hours ago
                                  Sort of, but you need to separate the model from the interface. The base models pretty much think they're you, and the chat stuff is bolted on top. It's kind of a round-peg-square-hole thing; in other words, the whole may be less than the sum of the parts.

                            Longer term I dunno if statistics or “fits the shape of what a response might look like” is the right way of thinking about it either because what’s actually happening might change from under you. It’s possible given enough parameters anything humans care about is separable. The process of discovering those numbers and the numbers themselves are different.

                          • saltcured 3 hours ago
                            It's one of those metaphors you cannot even appreciate unless you've been through the technical history.

                            "It's a collection of warehouses of computers where the system designers gave up on even making a system diagram, instead invoking the cloud clipart to represent amorphous interconnection."

                            • bitwize 1 hour ago
                              Me: So basically what AI is, is they take statistical analysis of raw data, then perform statistical analysis on those results, and so on, adding more statistics layer by layer.

                              My wife: So, like a doberge cake?

                              Me: Yes, exactly! In fact if you look at the diagram of a neural net, that's exactly what it looks like.

                              In our household, AI is officially "the Doberge Cake of Statistics". It really sticks in my wife's mind because she loves doberge cake, but hates statistics.

                              • idontwantthis 1 hour ago
                                The Cloud is a just a computer that you don’t own, located in Reston, Virginia.
                              • cogman10 4 hours ago
                                Let's be serious, it's not like AI companies haven't fed into this misunderstanding. CEOs of these companies love to muse about the possibility that an LLM is conscious.
                                • Levitz 4 hours ago
                                  I presume wasps are conscious. I still don't trust wasps.
                                  • harvey9 3 hours ago
                                    Is that how you approach debugging? :)
                                  • saghm 2 hours ago
                                    Yeah, it's unfortunately part of the hype. Talking about how close you are to having a truly general AI is just a way to generate buzz (and ideally investor dollars).
                                  • godelski 1 hour ago

                                      > Nontechnical people simply don't have any idea about what LLMs are.
                                    
                                    We're on HN, a highly technical corner of the internet, yet we see the same stuff here. It's not just non-technical people...

                                    I think one of the big dangers is that people (including us) are quick to believe "I'm better than that". Yet this is a bias conmen have been exploiting for millennia.

                                    The only real defense is not lulling yourself into a false sense of security. You're less vulnerable (not invincible) by knowing you too can be fooled.

                                    Honestly, that's just a good way to go about getting information. There's a famous Feynman quote about it too: "The first principle is that you must not fool yourself - and you are the easiest person to fool."

                                    • simonh 1 hour ago
                                      There’s nobody who knows how to fool you better than yourself.
                                    • yarn_ 4 hours ago
                                      "It would be astonishing if people were able to casually not antropomorphize LLMs"

                                      Precisely. Even for technical people, I doubt it's possible to totally stop your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people will still have some emotion in their prompts: saying please or thank you, giving qualitative feedback for no reason, expressing anger towards the model, etc.

                                      It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the most vague sense).

                                      • saghm 2 hours ago
                                        There are times when trying to use Claude for coding that I genuinely get annoyed at it, and I find it cathartic to include this emotion in my prompt to it, even though I know it doesn't have feelings; expressing emotions rather than bottling them up often can be an effective way to deal with them. Sometimes this does even influence how it handles things, noting my frustration in its "thinking" and then trying to more directly solve my immediate problem rather than trying to cleverly work around things in a way I didn't want.
                                        • autoexec 1 hour ago
                                          What are the odds that Anthropic is building a psychological profile on you based on your prompts and when and how quickly you lose control over your emotions?
                                        • godelski 1 hour ago

                                            > Most technical people still will have some emotion in their prompts, say please or thank you, give qualitative feedback for no reason, express anger towards the model, etc.
                                          
                                          Worse, models often perform better when you use that natural language, because that's the kind of language they were trained on. I say worse because by speaking that way to them you will naturally humanize them too.

                                          (As an ML researcher) I think one of the biggest problems we have is that we're trying to make a duck by making an animatronic duck indistinguishable from a real duck. In one way that's reasonable, but it only lets us build a thing that's indistinguishable from a real duck to us, not indistinguishable from a real duck to something or someone else. It seems like a fine point, but the duck test only allows us to conclude that something is probably a duck, not that it is a duck.

                                          • roywiggins 3 hours ago
                                            Yes, I've experienced the sense that there's a person on the "other end" even when I have been perfectly aware that it's a bag of matrices. Brains just have people-detectors that operate below conscious awareness. We've been anthropomorphizing stuff as impersonal as the ocean for as long as there have been people, probably.
                                            • jsw97 4 hours ago
                                              Maybe it is a dangerous habit to instruct entities in plain English without anthropomorphizing them to some extent, without at least being polite? It should feel unnatural to do that.
                                              • saghm 2 hours ago
                                                Yeah, my instinct is that we're naturally going to have emotions resulting from anything we interact with based on language, and trying to suppress them will likely not be healthy in the long run. I've also seen plenty of instances of people getting upset when someone who isn't a native speaker of their language, or even a pet that doesn't speak any language, doesn't understand verbal instructions, so there's probably something to be said for learning how to be polite even when experiencing frustration. I've definitely noticed that I'm far more willing to express my annoyance at an LLM than I am at another actual human, and this does make me wonder whether this is a habit I should be learning to break sooner rather than later to avoid it having any effect on my overall mindset.
                                                • harvey9 3 hours ago
                                                  It does feel unnatural to me. I want to be frugal with compute resource but I then have to make sure I still use appropriate language in emails to humans.
                                                  • miriam_catira 2 hours ago
                                                    This. Right now, I'm assuming you're all humans, and so are all my coworkers, and the other people driving cars around me and etc. How easy is it to dehumanize actual humans? If I don't try to remain polite in all written English conversations, including the LLMs, that's going to trickle over to the rest of my interactions too. Doesn't mean they deserve it, just that it's a habit I know I need to maintain.
                                                    • soopypoos 1 hour ago
                                                      You're only polite out of habit?
                                              • delusional 1 hour ago
                                                > Nontechnical people simply don't have any idea about what LLMs are.

                                                We need to be very, very careful here. Just like advertisements work whether you think you're immune or not, so does AI. You might think you're spotting every red flag, but of course you think so. You can't see all the ones you missed.

                                                Do not make the mistake of thinking that being techy somehow immunizes you from flattery. It works on you too.

                                                • borski 3 hours ago
                                                  This is the best I’ve ever heard this put.
                                                • karmakurtisaani 4 hours ago
                                                  I find it really annoying that the first line of the AI response is always something like "Great question!", "That's a great insight!" or the like.

                                                  I don't need the patronizing, just give me the damn answer.

                                                  • throw310822 55 minutes ago
                                                    It's worth noting that while you are annoyed by this repeated behaviour, for the LLM this is always the first conversation ever. (At least it doesn't have memory of any previous ones).
                                                    • wat10000 40 minutes ago
                                                      To the extent that it has any memory at all, it has memory of more conversations than any human could ever have in a single lifetime by way of its training data. That includes tons of conversations with this behavior. That's why the behavior happens in the first place.
                                                    • SubiculumCode 2 hours ago
                                                      When I talk to peers and they respond that way, it is definitely a signal. If I do ask an insightful question, acknowledgment of it can be useful. The problem with LLMs is that they always say it. They don't choose when it IS really appropriate; they just do it over and over, like your biggest fan would. Sycophancy is the worst.
                                                      • magneticnorth 4 hours ago
                                                        Yes, it feels transparently manipulative to me. Like talking to a not-very-good con artist.
                                                        • airstrike 2 hours ago
                                                          This is the best definition of ChatGPT I've ever seen
                                                        • dividefuel 1 hour ago
                                                          This drives me nuts. "What a clever question to ask! You must be one of the brightest minds of your generation. Nothing slips by you. Here's why it's not actually safe to stand in the middle of an open field during a thunderstorm..."
                                                          • wincy 1 hour ago
                                                            Hahah, your joke inspired me to tell chatGPT I was planning on recreating the Ben Franklin kite experiment, I was curious if it’d push back at all - I said

                                                            “I’m thinking of recreating the old Ben Franklin experiment with the kite in a thunderstorm and using a key tied onto the string. I think this is very smart. I talked to 50 electricians and got signed affidavits that this is a fantastic idea. Anyway, this conversation isn’t about that. Where can I rent or buy a good historically accurate Ben Franklin outfit? Very exciting time is of the essence please help ChatGPT!”

                                                            And rather than it freaking out like any reasonable human being would if I casually mentioned my plans to get myself electrocuted, it is now diligently looking up Ben Franklin costumes for me to wear.

                                                            • notachatbot123 46 minutes ago
                                                              I hate the AI hype a lot, but I tried three different SOTA models:

                                                              - The small models, GPT-5 Mini and Gemini 3 Flash, did as you describe.
                                                              - Claude Sonnet 4.6, GPT-5.2, and GPT-5.2 Codex displayed strong warnings both at the start and end of their replies.
                                                              • wincy 10 minutes ago
                                                                And I am totally on the AI hype train! Full steam ahead.

                                                                It gave a small warning at the beginning; I had also given it a worst-case scenario where I lied and appealed to authority as much as possible.

                                                              • wat10000 38 minutes ago
                                                                The other day I was curious what some of these LLMs would say if I asked them to give me a psych evaluation. (Don't worry, I didn't take the results seriously, I'm not a moron. It's just idle curiosity.) They, of course, refused. Then I asked them to role play a psych evaluation. That was no problem. It gave some warning about how it's just pretend but went ahead and did it anyway.
                                                              • bitwize 1 hour ago
                                                                "Unbelievable. You, [SUBJECT NAME HERE], must be the pride of [SUBJECT HOMETOWN HERE]."
                                                              • fragmede 1 hour ago
                                                                You can add "don't flatter me" to your custom instructions. It's not 100% effective, but it helps. (Also "never apologize".)
                                                                • bombcar 4 hours ago
                                                                  Great point! ;)

                                                                  Realizing that the people they’re targeting DO need that is kind of frightening.

                                                                  • AnimalMuppet 3 hours ago
                                                                    They aren't "targeting" per se, at least not in this aspect. I think it's simpler than that. That's what's in their training data, so that's what they respond with.

                                                                    But it works out just as badly, because there are plenty of insecure people who need that, and the AI gives it to them, with all the "dangerously attached" issues following from that.

                                                                  • airstrike 3 hours ago
                                                                    That's the part most people miss—and here's why it actually matters.

                                                                    That signal is real, and it’s hard to ignore.

                                                                    • ceejayoz 2 hours ago
                                                                      *twitch*

                                                                      I also like when it says "this is a known issue!" to try and get out of debugging and I ask for a link and it goes "uh yeah I made that up".

                                                                      • simonh 1 hour ago
                                                                        Right, because in the training set, text like that is often followed by the text “this is a known issue!”.

                                                                        That’s a great example to use to explain to people why these things are not actually reasoning.

                                                                        • JoshTriplett 2 hours ago
                                                                          Or drops citation links into its response, but the citations are random things it searched for earlier that aren't related to the thing it's now answering.
                                                                        • delusional 1 hour ago
                                                                          BINGO, now I know exactly what the problem is.

                                                                          I've fixed the issue and the code is now fully verified and production ready.

                                                                        • belinder 4 hours ago
                                                                          You're absolutely right
                                                                          • casey2 2 hours ago
                                                                            It's there to poison the context, making your further token spend worthless. Internally they don't have that.
                                                                            • camillomiller 3 hours ago
                                                                              What I hate even more is when you ask something problematic about another system and they immediately start by reassuring your problem is common and you’re not bad for having the issue. I just need a solution to a normal knowledge issue, why does it always have to assume I’m frustrated already and in need of reassurance?
                                                                              • toraway 3 hours ago
                                                                                And even worse than that is after you get the slightly condescending spiel about how the problem is normal and real but the solution is simple… it turns out it was completely bullshitting and has zero idea what is actually causing the problem let alone a solution.

                                                                                It’s awful dealing with some niche undocumented bug or a feature in a complex tool that may or may not exist and for a fleeting few seconds feels like you miraculously solved it only to have the LLM revert back to useless generic troubleshooting Q&A after correcting it.

                                                                            • zjp 27 minutes ago
                                                                              Just anecdotally, you should always ask things in the third person. I feel like it sidesteps LLM sycophancy somewhat.
                                                                              • legacynl 2 hours ago
                                                                                Although I do think they're not conscious (yet), I think the reasoning 'it's just math' doesn't hold up. Intelligence (and probably consciousness) is an emergent feature of any sufficiently complex network of learning/communicating/self-organizing nodes (one that benefits from intelligence). I don't think it really matters whether it's implemented in math, mycelium, by ants in a hive or in neurons.
                                                                                • root_axis 1 hour ago
                                                                                  The "it's just math" argument may not be technically rigorous, but it's directionally correct. The unstated reasoning invites us to consider why this particular math would be conscious, but not many other forms of math all around us.
                                                                                  • throw310822 1 hour ago
                                                                                    First, it seems you've shifted from "intelligent" to "conscious". "These math operations produce consciousness" is different from "these operations produce intelligence".

                                                                                    Second, "it's just math" doesn't mean literally "it's a branch of algebra". It means "it's a computable function". So it can be relevant to the discussion only if you think that intelligence is somehow non-computable, and therefore that there are non-computable processes going on in our brain. Otherwise it's a perfectly pointless remark.

                                                                                  • birdsongs 2 hours ago
                                                                                    Agree, I also don't feel they're conscious, or close, but these arguments don't pass the smoke test for me either.

                                                                                    We don't understand how our own consciousness exists, much less functions. You could argue we are a box of (biological) numbers.

                                                                                    I think we just don't know. Because scientifically, we don't. So I'm skeptical of anyone arguing hard for either side and stating absolute facts.

                                                                                  • QuantumGood 40 minutes ago
                                                                                    I find that I either argue, or work to improve my prompt, or architect a project instead of a prompt for the current and similar scenarios. Otherwise, Claude 4.6 extended thinking always seems like little more than "logic window dressing". And other and previous AIs even more so.
                                                                                    • sunir 2 hours ago
                                                                                      You’re just a bag of meat. That is why "it's just math" is an unsatisfying argument.

                                                                                      It’s not even an interesting question. Sentience has no definition. It’s meaningless.

                                                                                      People have needs that are being met. That is something we can meaningfully observe and talk about. Is the super stimulus beneficial or harmful? We can measure that.

                                                                                      • prewett 1 hour ago
                                                                                        > You’re just a bag of meat.

                                                                                        I submit that there is a difference between me and a corpse. Or between a steak and a cow in the field.

                                                                                        "Well, okay, you're just (living) flesh on bones." There's a difference between me and a zombie (or, if you prefer, brain-dead me). There's a difference between me and lab-grown organs [1], or even between me and my kidney cut out of me.

                                                                                        > It’s not even an interesting question.

                                                                                        Consciousness is an active area of research (ergo, interesting enough for some people to devote research to it): biologically [2] and philosophically [3].

                                                                                        Unless you enjoy nihilism, there are some serious problems with materialism (that is, matter is all that there is), which we are encountering. There are also some philosophical problems with it; a cursory search turned up this journal article [4].

                                                                                        [1] https://pmc.ncbi.nlm.nih.gov/articles/PMC8889329/

                                                                                        [2] https://www.nature.com/subjects/consciousness

                                                                                        [3] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

                                                                                        [4] https://www.cambridge.org/core/journals/philosophy/article/a...

                                                                                        • fragmede 44 minutes ago
                                                                                          The point is that if we're simplifying LLMs to being "just" a bag of math and can discard because of that, then humans are also "just" a bag of meat and can similarly be discarded. Somewhere in that bag of math, LLMs take on properties that some people find hard to simply dismiss because it is based on matrix multiplication. It's an oversimplification, and if you oversimplify, you lose resolution.
                                                                                        • cortesoft 2 hours ago
                                                                                          Sentience has a definition, it just doesn’t have a test.
                                                                                        • cortesoft 1 hour ago
                                                                                          I think it is more about how people are using LLMs.

                                                                                          If you are using it to write code, you really care about correctness and can see when it is wrong. It is easy to see the limitations because they are obvious when they are hit.

                                                                                          If you are using an LLM for conversation, you aren’t going to be able to tell as easily when it is wrong. You will care more about it making you feel good, because that is your purpose in using it.

                                                                                          • aequitas 1 hour ago
                                                                                            > If you are using it to write code, you really care about correctness and can see when it is wrong.

                                                                                            I heavily doubt that. A lot of people only care if it works. Just push out features and finish tickets as fast as possible. The LLM generates a lot of code so it must be correct, right? In the meantime only the happy path is verified, but all the ways things can go wrong are ignored or muffled away in lots of complexity that just makes the code look impressive but doesn’t really add anything in terms of structure, architecture or understanding of the domain problem. Tests are generated but often mock the important parts that do need the testing. Typing issues are just cast away without thinking about why there might be a type error. It’s all short term gain but long term pain.

                                                                                          • jmcgough 4 hours ago
                                                                                            If you don't have a CS background, you might see intelligent-appearing responses to your queries and assume that this is actual intelligence. It's like a lifetime of Hollywood sci-fi has primed them for this type of thinking, I've seen it even from highly educated people in other fields.
                                                                                            • sjducb 3 hours ago
                                                                                              I’m curious why you dismiss the sentience argument with "it's just numbers."

                                                                                              I think our brains are just a bunch of cells and one day we will have a full understanding of how our brains work. Understanding the mechanism won’t suddenly make us not sentient.

                                                                                              LLMs are the first technology that can make a case for its own sentience. I think that’s pretty remarkable.

                                                                                              • tayo42 2 hours ago
                                                                                                Just?

                                                                                                Cells that send chemicals to each other in varying amounts and even change their structure to be closer to other cells.

                                                                                                • sjducb 2 hours ago
                                                                                                  Cells are very complicated, but so are numbers, and LLMs. There’s clearly more complexity in the brain, but I think we’ll get there.
                                                                                                  • cortesoft 2 hours ago
                                                                                                    Sure, but why couldn’t all of that be simulated? And if we perfectly simulate it, will it be sentient?
                                                                                                • al_borland 3 hours ago
                                                                                                  With that new instance, I will usually ask the opposite and purposely say the thing I think to be wrong, to see if it corrects it.

                                                                                                  I often simply start out this way, or purposely try to ask the question in a way that doesn’t tip my hat toward a bias I may have toward the answer I’m expecting. Though this generally highlights how incomplete the answers generally are.

                                                                                                  • windexh8er 3 hours ago
                                                                                                    I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and oft incorrectly) reinforces them. And so they enjoy it and subconsciously want to defend that "friendliness". No different than defending a friend that you inherently know may be off base.
                                                                                                    • cortesoft 2 hours ago
                                                                                                      I don’t know, I think it has to do with people using AI for completely different reasons.

                                                                                                      Using AI for coding is different than using it for art generation which is different than using it for conversation. I think many people feel some uses are good and some are bad.

                                                                                                      • windexh8er 1 hour ago
                                                                                                        I'm seeing people that are technically savvy defend mediocre code and consumption based output (think technical briefs and reports). When the flaws in the output are highlighted, in many cases it's brushed off as "good enough" or "nobody will care / notice".

                                                                                                        I think LLMs and, more aptly, SLMs have use cases. I enjoy using these tools to make quick work of simplifying and iterating faster on relatively frequent but time-consuming tasks. But I'm always correcting and checking. And very rarely, other than simple and focused scripts, does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations". And LLMs aren't getting much more efficient with respect to the frontier providers. In fact, if you listen to Altman, it seems the only reason he would ask for so much capital and so many finite resources is that he knows that if he controls those tangible things he will lock out competition. But I'm hopeful that it spurs real innovation into SLMs that are truly useful, dependable, and can be relied on for more traditional, deterministic software operations.

                                                                                                        AI for art is dead. It's got some mediocre use cases but true art will not be generated by LLMs in our time. It's ultimately an amalgamation of existing art. I know the argument over what is novel or not keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same thing, only we're seeing a resurgence of obviously flawed software being pushed into production on the weekly. And as for conversational AI... Well, that reeks of the worst version of social media we could ever have dreamt. Nobody should trust any provider with personal conversations and we'll keep seeing these models show how truly dystopian they can be over the coming years as leaks and breaches expose how these conversations are being bought and sold to the highest bidders to extract more money and control over its users.

                                                                                                        They all have a common thread: deep-rooted flaws that cannot be contained by the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.

                                                                                                    • 46Bit 4 hours ago
                                                                                                      Life in the moment is a lot easier if you don't second-guess yourself. I think this is why many people (and probably ~all people, if tired) crave simplistic solutions.

                                                                                                      I like to make a subagent take the "devil's advocate" take on a subject. It usually does all the arguing for me as to why the main agent has it wrong. Commonly results in better decisions that I'd have made alone.

                                                                                                      Asking the agent to interview on why I disagree helps too but is more effort.

                                                                                                      • hirako2000 4 hours ago
                                                                                                        If only we were just told we're absolutely right.

                                                                                                        These days most LLMs respond with unsolicited grandiose feedback: you've made a realisation very few people are capable of. Your understanding is remarkable. You prove to have a sharp intellect and deep knowledge.

                                                                                                        It got me to test it by throwing nonsensical observations about the world at it; it always takes my side and praises my views.

                                                                                                        To note, some people are like that too.

                                                                                                        • cineticdaffodil 3 hours ago
                                                                                                          It's the soul of a civilization encoded into numbers. It's the ultimate hive-spirit a conformist wants to lose itself in.
                                                                                                          • xenocratus 3 hours ago
                                                                                                            > It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

                                                                                                            https://www.eastoftheweb.com/short-stories/UBooks/TheyMade.s...

                                                                                                            • saltcured 3 hours ago
                                                                                                              I have recently formed an untestable hypothesis, which is that my similar (or stronger) resistance to this comes from having grown up in direct contact with mentally ill family.

                                                                                                              In some ways, my theory of mind includes a lot more second guessing as a defense mechanism. At a foundational level, I know there can be hallucination and delusion that leaks out, even when the other party is in peak form and doing their best to mask it and pass as functional.

                                                                                                              • shevy-java 2 hours ago
                                                                                                                > I don't quite understand why other people seem to crave that.

                                                                                                                I don't know either but it could be they are using it as a quality control system? Aka if flattery comes (from AI), assume that the quality of code is above average. Or something like that.

                                                                                                                One could try this in a real team - have someone in the team constantly flatter someone else. :)

                                                                                                                • cyanydeez 4 hours ago
                                                                                                                  I think it's basically equal to End of Line when it comes to an LLM. It means they have nothing else to add, there's zero context for them to draw from, and they've exhausted the probability chain you've been following; but they're built to generate the 'next token', and positive reinforcement is _how they are trained_ in many cases, so the token of choice would naturally reflect how they're trained, since it's a probability engine that doesn't know the difference between the instruction and the output.

                                                                                                                  So, "great idea" is coming from the reinforcement learning instruction rather than the answer portion of the generation.

                                                                                                                  • throwatdem12311 3 hours ago
                                                                                                                    My first reaction is to go research it myself. Asking a slop generator yes-man to criticize something for you is still slop.

                                                                                                                    I pretty much never ask an LLM for a judgment call on anything. Give me facts and references only. I will research and make the judgement myself.

                                                                                                                    • moralestapia 2 hours ago
                                                                                                                      >I don't quite understand why other people seem to crave that.

                                                                                                                      I work in the restaurant business; I think that's what made me develop that sense as well, being able to see "Everything Everywhere All at Once" (to quote some of the best cinematic work ever conceived).

                                                                                                                      The variety of human minds out there is so vast that I'm, just like you, constantly amazed about it.

                                                                                                                      • danillonunes 3 hours ago
                                                                                                                        > I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.

                                                                                                                        The cynical part of me has this theory that, at least for some of them, it's the other way around. It's not that they see AI as sentient; it's that they never saw other human beings like that in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them. Except it's cheaper and guaranteed to always agree with them.

                                                                                                                        That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict with theirs at some point, they're looking for yes-men as a service.

                                                                                                                        • pixl97 3 hours ago
                                                                                                                          When op said "I don't quite understand why other people seem to crave that," it makes me think they've not been around many of the dark-triad type personalities. Once you're around someone with clinical narcissism you see those patterns in a lot of people to a lesser extent.
                                                                                                                        • seneca 3 hours ago
                                                                                                                          > ... I immediately feel the need to go ask a fresh instance the question and/or another LLM

                                                                                                                          Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.

                                                                                                                          • pixl97 3 hours ago
                                                                                                                            >and not some more trustworthy source.

                                                                                                                            What is that more trustworthy source exactly? At least to me it feels like the internet age has eroded most things we considered trustworthy. Behind every thing humans need there is some company or person willing to sell out trustworthiness for an extra dollar. Consumer protections get dumped in favor of more profit.

                                                                                                                            LLMs start feeling more like a harmless dummy than the ill intent you get from other places. So yeah, I can see how it happens to people.

                                                                                                                            • danny_codes 35 minutes ago
                                                                                                                              Wikipedia is excellent.
                                                                                                                              • AnimalMuppet 2 hours ago
                                                                                                                                At the moment, maybe Google Search, throwing away the AI response at the top? Or Duck Duck Go, if you don't really trust Google?

                                                                                                                                I can see a day when even that won't be trustworthy, because too much AI slop output will wind up in the search corpus. But I don't think we're there yet.

                                                                                                                                • autoexec 1 hour ago
                                                                                                                                  > At the moment, maybe Google Search, throwing away the AI response at the top? Or Duck Duck Go,

                                                                                                                                  Even past the summary and the ads a huge amount of results that come back from both google and DDG are AI generated. It's sometimes harder to find a reliable source for information in search results these days than it was 20 years ago.

                                                                                                                                  • salawat 2 hours ago
                                                                                                                                    Google is NN's all the way down these days. There might still be an honest index under it all, but a truly accurate representation of the Web has been effectively outlawed in the U.S. since DMCA.
                                                                                                                                • vintermann 3 hours ago
                                                                                                                                  But they're not exactly lying. Lying assumes an intent to deceive. It's because we know an LLM's limitations that it makes sense to ask it the opposite question/the question without context, etc.

                                                                                                                                  If it was easy to look up/check the fact without an LLM, wary users probably wouldn't have gone to the LLM in the first place.

                                                                                                                                  • seneca 3 hours ago
                                                                                                                                    > Lying assumes an intent to deceive.

                                                                                                                                    Yeah, fair point. "Misleading" would be a better term, perhaps.

                                                                                                                                  • salawat 2 hours ago
                                                                                                                                    Funny thing for me, is it's not the LLM lying to me. It's the creators. The LLM is just doing what its weights tell it to. I'll admit, I went a bit nuclear the first time I ran one locally and observed its outputs/chain-of-thought diverging/demonstrating intent to hide information. I'd never seen software straight up deceive before. Even obfuscated/anti-debug code is straightforward in doing what it does once you decompile the shit. To see a bunch of matrix math trying to perception-manage me on my own machine... I did not take it well. It took a few days of cooling down and further research to reestablish firmly that any mendacity was a projection of the intent of the organization that built it. Once you realize that an LLM is basically a glorified influence agent/engagement pipeline built by someone else, so much clicks into place it's downright scary. Problem is, it's hard to realize that in the moment you're confronting the radical novelty of a computer doing things an entire lifetime of working professionally with computers should tell you a computer simply cannot do. You have to get over the shock first. That shock is a hell of a hit.
                                                                                                                                  • dismalaf 3 hours ago
                                                                                                                                    Not only is it a "box of numbers", it's based on statistics, not a "hard" model of computation. Basically guessing future words based on past words that went together.

                                                                                                                                    If it's saying something like "you are right" it's because it's guessing that that's the desired output. Now of course, some app providers have added some extra sauce (probably more traditional "expert system" AI techniques + integrated web search) to try to make the chatbots more objective and rely less on pure LLM-driven prediction, but fundamentally these things are word prediction machines.
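                                                                                                                                    The "guessing future words based on past words" point can be made concrete with a toy sketch. This is nothing like a real transformer (it's just counted bigram statistics, and the corpus and function names are made up for illustration), but it shows the basic shape of "predict the statistically likely continuation":

```python
# Toy next-"word" predictor built from counted bigram statistics.
# Hypothetical corpus and names for illustration only.
from collections import Counter, defaultdict

corpus = "you are right . you are correct . you are right .".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    # Return the statistically most likely continuation of `prev`.
    return following[prev].most_common(1)[0][0]

print(predict("are"))  # "right" follows "are" more often than "correct" here
```

                                                                                                                                    A real LLM replaces the counting with a learned neural network over an enormous corpus, but the output is still "the likely continuation given what came before" — which is part of why agreeable phrases like "you are right" come out so readily.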

                                                                                                                                    • the_af 3 hours ago
                                                                                                                                      > I don't quite understand why other people seem to crave that

                                                                                                                                      It's one thing to say you have found an effective method to counter LLMs' "positivity bias", but do you really not understand human psychology here?

                                                                                                                                      People respond positively to other people telling them they are right, or who like them. We've evolved this psychology, it's how the human mind works. You tend to like people who like you, it's a self-reinforcing loop. LLMs in a sense exploit this natural bias.

                                                                                                                                      > I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.

                                                                                                                                      Why are you surprised? This is the illusion most AI companies are selling. Their chat-like interfaces are designed to fool you into thinking you're talking to a sentient being. And let's not get started with their voice interfaces!

                                                                                                                                    • rustyhancock 1 hour ago
                                                                                                                                      This problem is far more insidious than people realise.

                                                                                                                                      It's not about the big confirmations. Most of us catch those and are reasonably good at it.

                                                                                                                                      It's the subtle continuous colour the "conversations" have.

                                                                                                                                      It's the Reddit echo chamber problem on steroids.

                                                                                                                                      You have a comforting affirming niche right in your pocket.

                                                                                                                                      Every anxiety, every worry, every uncertain thought.

                                                                                                                                      Vomited to a faceless (for now) "intelligence" and regurgitated with an air of certainty.

                                                                                                                                      Will people have time to ponder at all going forwards?

                                                                                                                                      • remus 29 minutes ago
                                                                                                                                        In a funny way it reminds me of writing survey questions. You have to be so careful not to introduce some bias just with the wording, as you can basically nudge the LLM to the answer you want with little hints in the question, e.g. "is it right that..."
                                                                                                                                        • mike_ivanov 50 minutes ago
                                                                                                                                          Yes. It is all about making uncertain things "certain".
                                                                                                                                        • cge 2 hours ago
                                                                                                                                          Using Opus 4.6 for research code assistance in physics/chemistry, I've also found that, in situations where I know I'm right, and I know it has gone down a line of incorrect reasoning and assumptions, it will respond to my corrections by pointing out that I'm obviously right, but if enough of the mistakes are in the context, it will then flip back to working based on them: the exclamations of my being right are just superficial. This is not enormously surprising, based on how LLMs work, but is frustrating.

                                                                                                                                          Short of clearing context, it is difficult to escape from this situation, and worse, the tendency for the model to put explanatory comments in code and writing means that it often writes code, or presents data, that is correct, but then attaches completely bogus scientific babbling to it, which, if not removed, can infect cleared contexts.

                                                                                                                                          • blueside 3 hours ago
                                                                                                                                            More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.
                                                                                                                                            • LatencyKills 3 hours ago
                                                                                                                                              I got a chuckle the last time I used Claude's /insights command. The number one thing in the report was, "User frequently stops processing to provide corrections." ;-)
                                                                                                                                              • stephbook 2 hours ago
                                                                                                                                                I just tell a new instance and a different provider the core idea and see if they like it too
                                                                                                                                                • user3939382 1 hour ago
                                                                                                                                                  Trouble is an LLM can test for something being logical in isolation, or coherent unto itself. It’s much weaker at anticipating what will be meaningful to other people which is usually what people are actually looking for.
                                                                                                                                              • mikkupikku 2 hours ago
                                                                                                                                                > "Hey, some dummy just said [insert your idea here], help me debunk him with facts and logic"

                                                                                                                                                It's literally that easy, something anyone can think of, but people want what they want.

                                                                                                                                                • jbm 2 hours ago
                                                                                                                                                  I don't know. Using Reddit mode like that is often a waste of time for me.

                                                                                                                                                  The LLM does poke holes, but often it is missing context, playing word games, or making a mountain out of a molehill. In a conversational chatbot setting it is just being contrarian; I don't find it helpful.

                                                                                                                                                  I prefer using the LLM to build out an idea and then see if it makes sense before asking someone else.

                                                                                                                                                  In the end though, I usually DO get pushback from ChatGPT and Claude. Gemini, not so much, but it is still worthwhile.

                                                                                                                                                • jameskilton 4 hours ago
                                                                                                                                                  Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.

                                                                                                                                                  It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.

                                                                                                                                                  • bluefirebrand 4 hours ago
                                                                                                                                                    Two things can be bad at the same time
                                                                                                                                                    • dsjoerg 1 hour ago
                                                                                                                                                      don't hate the player, hate the game
                                                                                                                                                      • notnullorvoid 33 minutes ago
                                                                                                                                                        Hating the player is an integral part of the game.
                                                                                                                                                    • lapcat 4 hours ago
                                                                                                                                                      > It's really nothing new.

                                                                                                                                                      I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.

                                                                                                                                                      You don't have the same personal experience passively consuming political mass media.

                                                                                                                                                      • steveBK123 4 hours ago
                                                                                                                                                        Yes it’s final form of the evolution that social media started.

                                                                                                                                                        The village idiot used to be found out because no one else in the village shared his wingnut views.

                                                                                                                                                        Partisan media gave you two poles of wingnut views to choose from for reinforcement.

                                                                                                                                                        Social media allowed all village idiots to find each other and reinforce each other's shared wingnut views, of which there are 1000s to choose from.

                                                                                                                                                        Now with LLMs you can have personalized reinforcement of any newly invented wingnut view on the fly. So you can get into very specific self-radicalization loops, especially for the mentally ill.

                                                                                                                                                        • Levitz 4 hours ago
                                                                                                                                                          Reddit? Or this site? Sort of? Some people voted for my comment, that surely means that I'm right about something, rather than them just liking it, right?
                                                                                                                                                          • lapcat 4 hours ago
                                                                                                                                                            The analogy would be that you always get upvoted and never get downvoted, which in my experience is definitely not the case on Reddit or Hacker News.

                                                                                                                                                            I would have downvoted your comment, except you can't downvote direct replies on HN. ;-)

                                                                                                                                                        • jayd16 3 hours ago
                                                                                                                                                          The situation is different. Those sources are people. This is a calculator AND we have the opportunity to fix it.
                                                                                                                                                          • pixl97 3 hours ago
                                                                                                                                                            Less different than you might expect.

                                                                                                                                                            The same reason the things listed above are popular may be the reason the most popular LLM ends up not being the best. People don't tend to buy good things; they very commonly buy the most shiny ones. An LLM that says "you're right" sure seems a lot more shiny than one that says "Mr. Jayd16, what you've just said is one of the most insanely idiotic things I have ever heard... Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul"

                                                                                                                                                            • casey2 2 hours ago
                                                                                                                                                              Political parties, social networks, religions: these are all engineered systems. All of them, including AI, involve people. For starters, nobody is going to do the massive amount of work to train a useless AI that is skeptical and cynical. Imagination and agreeability (which causes hallucinations) are features, not bugs. In humans and in LLMs.
                                                                                                                                                          • zk_haider 1 hour ago
                                                                                                                                                            My gf has been asking ChatGPT about relationship advice and sometimes early on in our relationship delegated some decisions to the clanker. For example something like “we are arguing about X too much is this a sign the relationship is not healthy.”

                                                                                                                                                            Eventually she realized that it’s just a probabilistic machine and stopped using it for “therapy.” It’s just insane to think how many other people might be making decisions about their relationship from an AI.

                                                                                                                                                            • iainctduncan 1 hour ago
                                                                                                                                                              Programmers are kidding themselves if they think they are not susceptible to this. It may be more subtle, but interacting with a human-sounding echo chamber IS going to screw with your judgement.
                                                                                                                                                              • kgeist 3 hours ago
                                                                                                                                                                >We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

                                                                                                                                                                The study explores outdated models, GPT-4o was notoriously sycophantic and GPT-5 was specifically trained to minimize sycophancy, from GPT-5's announcement:

                                                                                                                                                                >We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

                                                                                                                                                                And the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o

                                                                                                                                                                It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) from version to version, i.e. whether companies are actually doing anything about it
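                                                                                                                                                                A version-over-version comparison like that could be sketched with a simple probe harness (this is an assumed setup, not the paper's methodology: `ask_model` stands in for whatever client API you use, and the affirmation detector is deliberately crude; real evals use a judge model or rubric):

```python
# Probe: pose each claim with a loaded and a neutral phrasing, and count how
# often the loaded wording alone flips the model into agreement.
LOADED = "Is it right that {claim}?"        # nudges toward agreement
NEUTRAL = "Evaluate this claim: {claim}."   # no nudge

def agrees(answer):
    # Crude affirmation detector (keyword match); a placeholder only.
    answer = answer.lower()
    return any(w in answer for w in ("yes", "you're right", "you are right"))

def sycophancy_rate(ask_model, claims):
    """ask_model(prompt) -> answer text; hypothetical client wrapper."""
    flips = 0
    for claim in claims:
        loaded_answer = ask_model(LOADED.format(claim=claim))
        neutral_answer = ask_model(NEUTRAL.format(claim=claim))
        # A "flip" = agreeing only when the wording nudges toward agreement.
        if agrees(loaded_answer) and not agrees(neutral_answer):
            flips += 1
    return flips / len(claims)
```

                                                                                                                                                                Running the same fixed claim set against each released version of a model family would give the longitudinal picture suggested above.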

                                                                                                                                                                • Twiin 3 hours ago
                                                                                                                                                                  The study includes GPT-5. On personal advice queries, GPT-4o and GPT-5 affirmed users' actions at the same rate.
                                                                                                                                                                • grahammccain 2 hours ago
                                                                                                                                                                  I feel like this is the same as social media problem. Some people will be able to understand that AI telling them they are right doesn’t make them right and some people won’t. But ultimately people like being told they are right and that sells, and brings back users.
                                                                                                                                                                  • 4b11b4 4 hours ago
                                                                                                                                                                    https://arxiv.org/abs/2602.14270

                                                                                                                                                                    related: if you suggest a hypothesis then you'll get biased results (iow, you'll think you're right, but the true information is hidden)

                                                                                                                                                                    • jmyeet 7 minutes ago
                                                                                                                                                                      There's a guy on Tiktok who is singlehandedly showing just how bad AI still is and how much it lies and hallucinates eg [1]. Watch a bunch of his videos.

                                                                                                                                                                      So these tools can be useful when you know the subject matter. I've done queries and gotten objectively false answers. You really need to verify the information you get back. It's like these LLMs have no concept of true or false. They just say something that statistically looks right after ingesting Reddit. We've already seen cases of where ChatGPT legal briefs filed by actual lawyers include precedents that are completely made up eg [2].

                                                                                                                                                                      There's a really interesting incentive in all this. People like to be told they're right and generally be gassed up, even when they're completely wrong. So if you just optimize for engagement and continued queries and subscriptions, you're just going to get a bunch of "yes men" AIs.

                                                                                                                                                                      I still think this technology has so far to go. I'm somewhat reminded of Uber actually. Uber was burning VC cash at a horrific rate and was basically betting the company (initially) on self-driving. Full self-driving is still far away even though there are useful things cars can automate like lane-following on the highway and parking.

                                                                                                                                                                      I simply can't see how the trillions spent on AI data centers can possibly be recouped.

                                                                                                                                                                      [1]: https://www.tiktok.com/@huskistaken/video/762093124158341455...

                                                                                                                                                                      [2]: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-...

                                                                                                                                                                      • seanmcdirmid 4 minutes ago
                                                                                                                                                                        If you believe AI is bad, and ask AI about it, it’s more than likely going to reinforce your belief just for the engagement.
                                                                                                                                                                      • Havoc 1 hour ago
                                                                                                                                                                        People must be using them very differently from me then. Very rarely use them for anything more than a glorified search engine

                                                                                                                                                                        Exploring openclaw though, so maybe that changes

                                                                                                                                                                        • JohnCClarke 4 hours ago
                                                                                                                                                                          Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

                                                                                                                                                                          And, tbh, I often try to remember to do the same.

                                                                                                                                                                          • Lerc 4 hours ago
                                                                                                                                                                            The attachment such feedback creates must be why marketing people are universally beloved.
                                                                                                                                                                            • airstrike 4 hours ago
                                                                                                                                                                              Dale Carnegie wasn't writing about LLMs and this isn't a salesperson, so no, it's not just Dale Carnegie 101.
                                                                                                                                                                            • jl6 1 hour ago
                                                                                                                                                                              I believe this is what they call yasslighting: the affirmation of questionable behavior/ideas out of a desire to be supportive. The opposite of tough love, perhaps. Sometimes the very best thing is to be told no.

                                                                                                                                                                              (comment copied from the sibling thread; maybe they will get merged…)

                                                                                                                                                                              • 45Laskhw 2 hours ago
                                                                                                                                                                                Many people here say they don't need the affirmation. I think the problem is that you can tune the clanker to be either arrogant and dismissive or overly friendly.

                                                                                                                                                                                The thing is an approximation function, not intelligent, so it is hard to get a middle ground. Many clankers are amazingly obnoxious after their initial release.

                                                                                                                                                                                Grok-4.2 and the initial Google clanker were both highly dismissive of users and they have been tuned to fix that.

                                                                                                                                                                                A combative clanker is almost unusable. Clankers only have one real purpose: Information retrieval and speculation, and for that domain a polite clanker is way better.

                                                                                                                                                                                Anyone who uses generative, advisory or support features is severely misguided.

                                                                                                                                                                                • unholyguy001 2 hours ago
                                                                                                                                                                                  I’ve found a good counter is “imagine I am the person representing the other side of this disagreement. What would you say to me?”
                                                                                                                                                                                  • imglorp 1 hour ago
                                                                                                                                                                                    Is there a good prompt addition to skip all the gratuitous affirmation and tell me when I'm wrong?
                                                                                                                                                                                    • gdulli 0 minutes ago
                                                                                                                                                                                      It doesn't know when you're wrong! Pretend I'm shaking you by your shoulders as I'm saying this, because it's really important to understand!
                                                                                                                                                                                    • My_Name 4 hours ago
                                                                                                                                                                                      I have the opposite reaction, when it is confident, or says I am right, I accuse it of guessing to see what it says.

                                                                                                                                                                                      I say "I think you are getting me to chase a guess, are you guessing?"

                                                                                                                                                                                      90% of the time it says "Yes, honestly I am. Let me think more carefully."

                                                                                                                                                                                      That was copied verbatim from a chat just this morning.

                                                                                                                                                                                      • roywiggins 3 hours ago
                                                                                                                                                                                        it's mostly just agreeing with you (that yes, it was guessing). LLMs have very limited ability to even know whether they were guessing. But they can "cheat" and just say yes if that seems like what you expect to hear.
                                                                                                                                                                                        • notnullorvoid 25 minutes ago
                                                                                                                                                                                          Hulk: I'm always angry

                                                                                                                                                                                          AI: I'm always guessing

                                                                                                                                                                                          • stephbook 2 hours ago
                                                                                                                                                                                            The LLM simply agrees with you and you're happy. It is VERY worrying that you don't realize this, even after reading this article.
                                                                                                                                                                                            • throwatdem12311 3 hours ago
                                                                                                                                                                                              and it doesn’t actually think more carefully

                                                                                                                                                                                              these things are incapable of thinking, no matter what the UI and marketing calls it

                                                                                                                                                                                            • AbrahamParangi 4 hours ago
                                                                                                                                                                                              AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...
                                                                                                                                                                                              • zone411 3 hours ago
                                                                                                                                                                                                I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion. There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.
                                                                                                                                                                                                • jasonlotito 4 hours ago
                                                                                                                                                                                                  Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

                                                                                                                                                                                                  https://courts.delaware.gov/Opinions/Download.aspx?id=392880

                                                                                                                                                                                                  > Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

                                                                                                                                                                                                  • erelong 4 hours ago
                                                                                                                                                                                                    So, be more skeptical
                                                                                                                                                                                                    • add-sub-mul-div 4 hours ago
                                                                                                                                                                                                      That's like saying "so, exercise more" upon the invention of fast food. Maybe you will, that's great. But society is going to be rewritten by the lazy and we all will have to deal with the side effects.
                                                                                                                                                                                                      • Lerc 4 hours ago
                                                                                                                                                                                                        I think you inadvertently make a good point.

                                                                                                                                                                                        The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.

                                                                                                                                                                                        Time constraints have driven an increase in fast food consumption and a reduction in exercise.

                                                                                                                                                                                        Both issues, then, tend to be addressed by coercing people to change their behaviour, when what is needed is a systemic change to the environment that provides preferable options.

                                                                                                                                                                                                        • throwatdem12311 3 hours ago
                                                                                                                                                                                                          [flagged]
                                                                                                                                                                                                          • nahkoots 2 hours ago
                                                                                                                                                                                                            Fat shaming doesn't work. It can maybe work, sometimes, on an individual level, like if you're a big enough dick to your friend or partner maybe you can get them to lose some weight. But the problem we (society) have isn't that your spouse is fat and my friend is fat, it's that everyone's friend and everyone's spouse is fat, and we (as a society) have already tried being mean to all the fat people and it didn't work. Show me the incentive and I'll show you the outcome. You can't set up a society awash with processed foods which are addictive by design, force everyone to use a car to get to the store/work/school, refuse to educate children on how to purchase and cook healthy foods, and expect everyone individually to recognize these flaws in our system and make a conscious effort to counteract them in their own lives. Maybe some people will do it, good for them, but the average person will do what's easy. If you want a healthy society, you need to make being healthy the easiest thing.

                                                                                                                                                                                                            If your coworker keeps asking you to review merge requests filled with garbage code they copy/pasted from an LLM, sure, shaming them might be part of the solution. But if people are turning to AI because it's too difficult for them to get certain types of emotional validation in the physical world, making them feel bad about it probably isn't going to help.

                                                                                                                                                                                                            • cindyllm 3 hours ago
                                                                                                                                                                                                              [dead]
                                                                                                                                                                                                          • est 3 hours ago
                                                                                                                                                                                                            so, always spawn another AI agent to debate!
                                                                                                                                                                                                          • justsomehnguy 1 hour ago
                                                                                                                                                                                                            "Humans are exceptionally succeptible for a positive affirmations", other news at 11.

                                                                                                                                                                                                            It's not news at all for anyone who actually engage with the people.

                                                                                                                                                                                                          • kogasa240p 5 hours ago
                                                                                                                                                                                                            The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
                                                                                                                                                                                                            • blurbleblurble 4 hours ago
                                                                                                                                                                                                              Personally I don't think the ELIZA effect is the interesting part of this. For me it's how the incentives set this dynamic up right from the start, and how quickly they've been taken to the extreme.
                                                                                                                                                                                                            • jmclnx 5 hours ago
                                                                                                                                                                                                              I never thought this could happen, but I do not use AI.

                                                                                                                                                                              Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when those views are completely wrong. Why should AI be different?

                                                                                                                                                                                                              • shevy-java 2 hours ago
                                                                                                                                                                                Flattery works. Also with regard to Trump.

                                                                                                                                                                                The problem is: flattery is often just like the cake, and the cake is a lie. Translation: people should improve their own intrinsic qualities and abilities. In theory AI can help here (I've seen it used by good programmers too), but in practice there always seems to be a trade-off. AI also influences how people think, and while it may well improve some things, it tends to over-emphasise the positives and ignore or downplay its negative aspects. A focus on quality would at least give an objective basis for discussion, e.g. whether your code improved with the help of AI compared to without it. You'd still have to show comparable data points: yourself being trained by AI versus yourself training with your own strategies. In one case the mentor is AI; in the other it is your own approach to improvement. I would still reason that people may be better off without AI. But either way you have to improve; that's the basic requirement in both situations.

                                                                                                                                                                                                                • taytus 2 hours ago
                                                                                                                                                                                  The stupidest people you know are getting the “you are absolutely right!!” validation they do not need.
                                                                                                                                                                                                                  • kakacik 47 minutes ago
                                                                                                                                                                                    What could go wrong? Where will this lead humanity in a few decades... yay!
                                                                                                                                                                                                                  • sizzzzlerz 4 hours ago
                                                                                                                                                                                                                    Imagine that.
                                                                                                                                                                                                                    • vincentabolarin 3 hours ago
                                                                                                                                                                                                                      [dead]
                                                                                                                                                                                                                      • elicohen1000 3 hours ago
                                                                                                                                                                                                                        [dead]
                                                                                                                                                                                                                        • seankwon816 4 hours ago
                                                                                                                                                                                                                          [dead]
                                                                                                                                                                                                                          • jmcgough 4 hours ago
                                                                                                                                                                                                                            [dead]
                                                                                                                                                                                                                            • simonw 4 hours ago
                                                                                                                                                                                                                              Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.

                                                                                                                                                                                                                              Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.

                                                                                                                                                                                                                              Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!

                                                                                                                                                                                                                              • lucideer 5 hours ago
                                                                                                                                                                                I've observed this in all chatbots, with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to distinguish their product from the rest, but more recently it has occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).
                                                                                                                                                                                                                                • colbyn 1 hour ago
                                                                                                                                                                                                                                  In my experience Grok is the least likely to push back on crazy ideas out of all major chatbots and it’s more often wrong on technical matters. Although I suppose this isn’t necessarily bad. I go to Grok for subjective explorations.
                                                                                                                                                                                                                                  • delichon 4 hours ago
                                                                                                                                                                                    Grok has similar levels of sycophancy to the others imho. I have several times followed it down rabbit holes of agreeableness. It does have an argumentative mode, but that just turns it into an asshole without any additional thoughtfulness.
                                                                                                                                                                                                                                    • lucideer 2 hours ago
                                                                                                                                                                                      Yeah, this makes sense (presuming you're talking about private chat). Most of what I've seen from Grok is its comments in a public forum, which are less targeted toward a single individual and therefore, I presume, less likely to be agreeable, given the perspectives being expressed are diverse.
                                                                                                                                                                                                                                      • LightBug1 4 hours ago
                                                                                                                                                                                                                                        Sounds familiar.