48 comments

  • crazygringo 1 day ago
    > “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.

    I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

    I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?

    Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.

    • bigp3t3 1 day ago
      From Google's GTIG report: https://cloud.google.com/blog/topics/threat-intelligence/ai-...

      "Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class) "

      • adrian_b 1 day ago
        This only indicates that an AI coding agent was used to write an exploit.

        No such circumstantial evidence can prove that an AI model has been used to find the bug.

        Of course, it is quite likely that an AI model was used to speed up the search for bugs, but this can never be proven as long as you see only the code used to exploit the bug.

        • trollbridge 19 hours ago
          It’s analogous to saying “Hackers used an IDE to write an exploit.”
          • batshit_beaver 19 hours ago
            Oh no, we should create a fear mongering blog post and delay the latest IDE version until we have better security in place!
            • jeltz 17 hours ago
              This is more like if JetBrains wrote a blog post about the dangers of IDEs.
          • otherme123 20 hours ago
            They don't say "proven", they say "we have high confidence that the actor likely leveraged an A.I". Do you find that assert too different from your "it is quite likely that an AI model was used to speed up the search for bugs"?
            • crazygringo 17 hours ago
              Exactly. Making the discovery and then exploiting it are two totally separate things.

              The latter in no way implies the former. But it sure does make good press.

              • jguarnelli 21 hours ago
                [dead]
              • SkiFire13 1 day ago
                That's evidence the script was written by an AI, but not necessarily that the exploit was found by it.
                • riedel 1 day ago
                  I think it would be rather worth reporting these days if hackers totally handcrafted all code without any use of any AI.
              • ndr42 19 hours ago
                "Although we do not believe Gemini was used"

                I don't get the "although": Are they happy that Gemini was not used in cybercrime oder are they bothered because somebody used a (probably better) alternative?

                • blitzar 1 day ago
                  The post reads like Ai wrote it - from that I can deduce that all strategy at google has been generated by Ai.
                • chromacity 1 day ago
                  > I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

                  Google, Cloudflare, and Microsoft are a trio of companies that get to see most of what's going on the internet. I imagine that if they see you attacking them, they can work back from that and get remarkably far, even against sophisticated actors. If it's their LLM, they presumably keep transcripts. If you searched for the affected API function via a search engine, they almost certainly know. Even if you used a competing search product, you probably went to a site that has Google Analytics. Oh, and one of these companies probably has your DNS lookups. And a good chunk of the world's email traffic. And telemetry from your workstation. And auto-uploaded crash reports... And if it's bad, they can work together behind the scenes to get to the bottom of it.

                  So, when their threat intel orgs say they have high confidence in something, I'd be inclined to believe it.

                  • Hupriene 1 day ago
                    None of what you've said is untrue. And if this was an internal report to an executive, I'd agree with it. But this is a public statement and I'm more inclined to believe that this is part of a coordinated run up to a move to ban the import of 'dangerous' Chinese AI models -- or something else equally self serving -- than a simple statement of truth.

                    I don't doubt that they found some evidence of AI use. I'm just skeptical that the amount and strength of evidence has anything to do with their making this statement.

                    I've been thinking about why the AI companies are making so much use of fear based marketing. And I'm wonder if it isn't just naked Machiavellianism at work.

                    For a long time tech companies were forced to compete for power by being the most loved (or at least not the most hated). But now they've found an avenue to cultivate fear.

                    • cybercatgurrl 10 hours ago
                      i’m inclined to agree. it sounds like yet another attempt at regulatory capture. keep anyone else from developing or using models including open weight models
                      • fragmede 17 hours ago
                        Anthropic has fallen behind with Opus 4.7 a downgrade from 4.6, and codex 5.5 being noticably better. Everyone I know (which is an obviously small, biased sample) has switched over to codex. So Anthropic can fear monger about Mythos all they want, they're losing revenue because they haven't released it and their competitor is getting that revenue. But that's looking at individual players in the market.
                    • DrewADesign 1 day ago
                      Well, it’s great marketing for LLM products at the enterprise level. Even if they weren’t sure, they have every incentive to run with it now, and the issue a “whoopsie daisy” apology later after the tech media stopped paying attention.
                      • dragonelite 1 day ago
                        [flagged]
                        • jatora 1 day ago
                          Are you roughly comparing the long term viability of LLMs to NFTs as if they are anywhere in the same realm?
                          • ipaddr 1 day ago
                            How long can llms exist at this current price level? Once they raise prices the market gets split. One side is the companies who will pay the increases and the other side is the public portals which become unaffordable. Public side might compare to NFTs while the other looks like more like the cloud where companies will overpay for better features they don't really need.
                            • synarchefriend 23 hours ago
                              We have open-weight LLMs like DeepSeek that prove the cost of running inference with near-frontier models can be very cheap.
                              • WarmWash 21 hours ago
                                With current market penetration and usage, LLMs can cost $60-$80/mo and provide an ROI in a 5-7yr time frame. There can also be ad-supported and ad-subsidized plans to lower the hard end user cost, but the target number is "cell phone plan" level monthly cost.
                        • _alternator_ 1 day ago
                          The article strongly implies they have the (Python) source code, and that it looks LLM generated. I don't know about you, but I can usually tell LLM code from a mile away.
                          • adrian_b 1 day ago
                            That can prove only a half of that sentence, that an AI coding assistant was used for writing the exploit (a.k.a. "weaponization").

                            For the other half of the sentence ("discovery"), one could claim that it is true only if the identity of the attackers were discovered and evidence about their prior activities would be gathered.

                            Even if it is likely that today anyone who searches for bugs would also use AI agents to accelerate that, I find unacceptable in announcements like that of Google the use of careless sentences that are obviously either false or they might be true only if Google knew something else that they do not disclose.

                            • _alternator_ 20 hours ago
                              The key here is "high confidence" and "likely". For threat intelligence research, these words usually map to a probability estimate. In this case, "likely" would mean 55-80% probability [0].

                              The fact that they are holding some information back doesn't really strike me as unreasonable. The bug has yet to be released, so they should not provide identifying details. Let's see what happens with the CVE.

                              [0] https://www.cisecurity.org/ms-isac/services/words-of-estimat...

                          • neya 1 day ago
                            We are going to be seeing a lot of these moving forward. It's the easy way out. If you've worked with Google, you will know that it's an environment where accountability doesn't thrive. You will find people who know nothing about Google's product portfolio hold advisory roles around the products. They don't care, there's no one to even question them. They just know to make colourful graphs with the most useless metrics to justify they "add value" to the company. Expecting them to take accountability is like trying to mix oil and water.
                            • glenstein 1 day ago
                              The article says it included excessive explainer text. And I'm almost positive an earlier version of the article referenced hallucinated library references though I don't see it in the present version of the article.
                              • Humans can sometimes find a needle in a haystack, but its impossible for us to find multiple needles in multiple haystacks and chain them together into an attack. AIs can work through a complex search space much more efficiently, that's the tell.
                                • skeptic_ai 1 day ago
                                  They did it before AI.
                                  • dijksterhuis 1 day ago
                                    sorry but that’s just wrong. it’s not impossible in the slightest. i built an attack against mozilla deepspeech in my phd from multiple needles (two of which i personally discovered).

                                    did it take a lot of effort? sure. lots of dead ends. but that does not mean it is impossible.

                                    • red-iron-pine 21 hours ago
                                      all fair points, but this 'splot could have been a team of two operating over a couple of days, as opposed to a multi-year Phd level effort.

                                      that's the scary part. not that super-expert Phd folks can eventually do it with serious effort, but that AI can do it faster and while guided by a plucky college freshman

                                    • close04 1 day ago
                                      > its impossible for us to find multiple needles in multiple haystacks and chain them together

                                      Except "we" have been successfully chaining attacks long before AI started automating it. AI doesn't make any of this possible, it just takes the drudgery out of it and lowers the cost of an attack.

                                    • eatsyourtacos 1 day ago
                                      Maybe after they realized how they were vulnerable they asked an LLM to find the exploit through a similar means to try and replicate it. Still doesn't prove it but maybe gives them confidence this weird thing can only really be found that way etc.
                                      • slater 1 day ago
                                        > I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?

                                        Excessive use of em-dashes, and emoji bullet points in the readme

                                        • yacthing 1 day ago
                                          Maybe they saw traffic that looked like AI prodding an API and quickly adapting to find the bug?

                                          But at this point I feel like odds are everyone looking for vulnerabilities is using AI to some extent. Why wouldn't they? It'd be stranger if they didn't.

                                        • nullc 1 day ago
                                          Presumably the attacker used Google's own LLM and they searched the history of all user chats to find the transcript.

                                          I say this only slightly in jest, as that's about the only thing I can think of which would legitimately give them 'high confidence'.

                                          • djeastm 1 day ago
                                            In the article (AP one, at least) Google explicitly said it does not believe it was Gemini or Mythos.
                                            • bmelton 1 day ago
                                              Clearly that's because they searched the history of all chats and didn't find the perpetrator
                                              • HDBaseT 1 day ago
                                                I know we're talking about Google here, but the privacy violations and concerns from this sort of search are massive.

                                                We need local AI ASAP.

                                                • gchamonlive 1 day ago
                                                  Don't get me wrong, I'm with you here, but we are back to the days when we had to rent mainframe time for compiling programs. Not because of software limitations, but you just didn't have consumer grade hardware capable of running them.

                                                  This time, however it's even worse, because it'll be a really long time until either we get consumer GPUs with enough VRAM for full models or LLMs that fit in 16-32GB capable enough to compete with cloud providers.

                                                  I run locally qwen3.6 27b on my 3090 and it's really impressive for what it is, but it is still generations away from being capable of delivering a level of quality that we can confidently default to solo drive them on a daily basis.

                                                  • overfeed 1 day ago
                                                    > We need local AI ASAP.

                                                    That is an excellent idea, once we, the GPU-poor mice, figure out who is going to bell the SoTA training cat. Chinese models being banned is well within the realms of lobbied possibilities.

                                                  • BobbyTables2 1 day ago
                                                    They probably used AI for the search.

                                                    The real game would be to put a “nothing of interest here” prompt injection attack in the original series of prompts so a LLM parsing them later would ignore the attackers’ session.

                                                  • ipaddr 1 day ago
                                                    So its a provider but not these two which imples OpenAI
                                              • koiueo 1 day ago
                                                Haven't read the article, but let me guess:

                                                "That's why for your safety we need a scan of your ID and your biometrics to let you use our best models"

                                                • andai 1 day ago
                                                  My Android phone takes a photo of my face every time I unlock the device. I don't have access to those images, but someone already has photos of my eyeballs!

                                                  I'm not sure why or how to turn it off, does anyone know?

                                                  (Also, insert weary photo of Kaczynski here.)

                                                  • jeroenhd 1 day ago
                                                    Go to security settings and disable face unlock. If you want to be extra safe against Google, go to the advanced security settings, find the "trust agents", and disable the ability for Google Play Services to unlock your phone. That'll also kill any other unlock mechanisms you may have forgotten about tied to Google's services.

                                                    If unlock features remain after that, it's a manufacturer feature that's been set up. In that case you'll have to look for a guide for your specific brand and model.

                                                    Your phone can't turn this on by itself, if it's doing face recognition that means you set it up at some point.

                                                    • andai 23 hours ago
                                                      Thanks. What I meant was when I unlock my phone, there's a little "camera is being used" icon that pops up momentarily. After the unlock. I don't use face unlock. I thought it was an anti theft thing, but that's disabled too.
                                                      • For the extra paranoid, tape over the camera is the way to go.
                                                        • ramon156 1 day ago
                                                          I would like a slider similar to my Thinkpad
                                                          • fsflover 18 hours ago
                                                            My phone has a hardware kill switch for that.
                                                      • surajrmal 20 hours ago
                                                        If you've ever actually looked into the implementation details of how face unlock works, you might feel a bit less spooked. Android and iOS go a very long way to isolate biometrics from the rest of the system. It's good to be skeptical, but it's also worthwhile to do some research.
                                                        • andai 17 hours ago
                                                          Man, I just wanna grill...
                                                        • fennecfoxy 1 day ago
                                                          I mean it's nice that someone helped you but if you're incapable of turning such a setting off yourself, or doing some basic research to find out how to turn it off - surely you'd feel threatened by the numerous other features of the phone that you're likely unaware of?

                                                          It's like willingly walking through a minefield.

                                                          • andai 23 hours ago
                                                            I do. Where's my location going, for example? Is there a centralized way to check everyone who accessed it?
                                                            • fennecfoxy 1 hour ago
                                                              I quick Google search reveals for my Samsung S24 Ultra: settings->security & privacy->Permissions used in last 24 hours->location->historical log: Google (background), Google Home (while using app), Maps (background), etc.
                                                          • TiredOfLife 19 hours ago
                                                            > I don't have access to those images, but someone already has photos of my eyeballs!

                                                            Citation needed.

                                                          • whynotmaybe 22 hours ago
                                                            How do they handle twins?
                                                            • alternatex 21 hours ago
                                                              Not sure if it's a joke, but twins don't have the same biometrics
                                                              • whynotmaybe 16 hours ago
                                                                Well I stand corrected.

                                                                I thought that the fingerprints/iris in twins where identical like the DNA.

                                                            • Phemist 1 day ago
                                                              [dead]
                                                            • netdevphoenix 1 day ago
                                                              I wonder what is the goal here? If Google Search was used to find a major software flaw would this be reported in this way? Between Mythos, OpenAI's Mythos equivalent, it's not clear if there is some interest to keep the "AI is powerful" trend going or they are trying to indirectly bring attention to the technical capabilities of LLMs in cybersecurity (as a potentially untapped source of revenue).
                                                              • bell-cot 22 hours ago
                                                                They're proclaiming that AI is the Latest Big Thing in the perpetual computer security arms race. So unless you want to be stuck fighting in propeller-driven planes when every real air force has jets, you better start spending big on AI. Preferably Google's AI, of course.
                                                                • surajrmal 20 hours ago
                                                                  I can assure you that Google is not coordinated enough to pull off this sort of thing. Security researchers only care about security research. There might be some level of oversight on what's published, but the core security teams are not actively seeking a way to promote a particular narrative.
                                                              • zx8080 1 day ago
                                                                It's the narrative "For your own security in the internet (and children's safety), show us your ID now, please".

                                                                Tired of this trend.

                                                                • rudolph9 19 hours ago
                                                                  Do they have high confidence the actor used a keyboard? Used the bathroom at some point during the attack? Has a mother?

                                                                  Idk, this doesn’t strike me as news. Google just missed a vulnerability.

                                                                  • vasco 18 hours ago
                                                                    Yeah anyone doing any sort of development is using AI, including for exploits. Means nothing.
                                                                  • QuantumNoodle 1 day ago
                                                                    Okay, when fuzzing techniques came out there was a big surge in discovered and exploited bugs. AI is more general and I expect there be a similar surge. However fuzzing is cheap but compute and techniques can be "owned." The economics of AI is unless you pay for it, it is difficult to self host (expensive hardware, open source models are catching up).

                                                                    State actors + hackers will have more resources to make better offense. What worse, in my experience AI produced code is blind to overall system behavior. So I fear the exploits will be either low hanging/trivial to exploit errors or bigger system level bugs.

                                                                    • s3p 1 day ago
                                                                      >But new A.I. models like Anthropic’s Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.

                                                                      Immediate distrust of the article. GPT 5.5 is out with nearly the same capability. The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems. For all we know this group could have had a model examine some obscure line of code thousands of times until it found something.

                                                                      • GPT 5.5 does not have the same capabilities as Mythos. There is a separate 5.5-Cyber model which is the Mythos “equivalent”, but it is similarly restricted access like Mythos. Per OpenAI, the major difference is the built-in safeguards that 5.5 (and other models have), where 5.5-Cyber does not have these safeguards and is more “permissive” for security work.

                                                                        See https://openai.com/index/gpt-5-5-with-trusted-access-for-cyb...

                                                                        • ofjcihen 1 day ago
                                                                          I have access to the Cyber version. It’s great at cybersecurity work but only marginally better than its predecessor with the right jailbreaking.

                                                                          I imagine Mythos is going to be the same story from what I’ve seen so far.

                                                                        • nullstyle 1 day ago
                                                                          That reminds me:

                                                                          I got cajoled the other day that I need to upload my ID and ask for 5.5-Cyber access by the Codex desktop app while I was having it develop a fuzzing suite for an open source library I'm(we?) are developing. I was able to berate it into getting back to work.

                                                                          This struck me as a point of emergent enshittification; an anus if you will.

                                                                          • vgalin 1 day ago
                                                                            The company doing the actual ID verification (KYC) is probably the last company I'd trust with this kind of data.

                                                                            To circumvent conversations being flagged as "cybersecurity bad!!!" I often have to use previous models (5.3 for example, and sometimes using them through subagents is enough). And when this method no longer works, local models will be good enough for it to not be a problem (for my use case, at least).

                                                                        • bluGill 1 day ago
                                                                          That is very clearly the claim of mythos though. The experience of projects that do have access to mythos though suggests that if you use the other models it's not going to find much of anything. Which is to say generally we believe it is marketing as you say however the claim that the reporter said is very clearly stated even if it's not right.
                                                                          • xorgun 1 day ago
                                                                            [dead]
                                                                            • reaperducer 1 day ago
                                                                              Immediate distrust of the article… The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems.

                                                                              https://www.nytimes.com/by/dustin-volz

                                                                              > I am based in The Times’s Washington bureau, and much of my focus is on the dealings of U.S. cybersecurity and intelligence agencies, including the National Security Agency, Central Intelligence Agency, Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation, as well as their counterparts abroad, chiefly in China, Russia, Iran and North Korea.

                                                                              > My remit spans nation-state hacking conflict, digital espionage, online influence operations, election meddling, government surveillance, malicious use of A.I. tools and other related topics.

                                                                              > Before joining The Times, I worked at The Wall Street Journal, where I spent eight years covering cyber conflict and intelligence. My recent work at The Journal included a series of articles revealing a major Chinese intrusion of America’s telecommunications networks that breached the F.B.I.’s wiretap systems and has been described as one of the worst U.S. counterintelligence failures in history. I have also worked at Reuters and National Journal, where I began my career in Washington chronicling congressional efforts to reform surveillance practices at the N.S.A. in the wake of the 2013 Edward Snowden disclosures.

                                                                              > My work has been internationally recognized, including by the White House Correspondents’ Association, the Gerald Loeb Awards, the Society of Publishers in Asia and the Society for Advancing Business Editing and Writing.

                                                                              What have you done lately?

                                                                              • kubik369 1 day ago
                                                                                Your comment was surely well meant, but you could have plainly stated that the article author is a seasoned reporter instead of the snarky reply.

                                                                                GP might be incorrect in stating that the author is parroting Anthropic's marketing, but the author certainly does not go out of his way to specify that these are only Anthropic's claims. It is actually a bit ironic as the article linked[0] from the quoted part (by another author) uses the correct phrasing when dealing with such claims:

                                                                                > Anthropic, the artificial intelligence company that recently fought the Pentagon over the use of its technology, has built a new A.I. model that it claims is too powerful to be released to the public.

                                                                                [0] https://archive.ph/GC6WP#selection-4713.0-4713.200

                                                                                • LPisGood 1 day ago
                                                                                  > What have you done lately?

                                                                                  I feel like this website is a particularly dangerous place to ask that and hope it to be a “mic drop” moment. There are a lot of highly accomplished engineers, scientists, founders CEOs, etc. here that could easily respond to that with any manner of impressive qualifications.

                                                                                • ozozozd 1 day ago
                                                                                  Lately I’ve been trying to think critically. I am not perfect, but I can recognize appeal to authority from a mile away.

                                                                                  > An argument from authority (Latin: argumentum ab auctoritate, also called an appeal to authority, or argumentum ad verecundiam) is a form of argument in which the opinion of an authority figure (or figures) is used as evidence to support an argument. The argument from authority is often considered a logical fallacy and obtaining knowledge in this way is fallible.

                                                                                  • ShinyLeftPad 1 day ago
                                                                                    > there is disagreement on the general extent to which it is fallible - historically, opinion on the appeal to authority has been divided: it is listed as a non-fallacious argument as often as a fallacious argument

                                                                                    > Some consider it a practical and sound way of obtaining knowledge that is generally likely to be correct when the authority is real, pertinent, and universally accepted

                                                                                    Anyway, other than trying to think critically, anything?

                                                                                  • Reporting on such stuff requires networking skills, not technical knowledge.
                                                                                    • reaperducer 1 day ago
                                                                                      Reporting on such stuff requires networking skills, not technical knowledge.

                                                                                      Guess how I know you've never been a reporter.

                                                                                    • ofjcihen 1 day ago
                                                                                      Okay, well I’ve done more than that and I say he’s right. Now what?
                                                                                      • crazygringo 1 day ago
                                                                                        Your comment would be be fine without the snarky final sentence.
                                                                                        • himata4113 1 day ago
                                                                                          nytimes reporters have recently been very disappoiting and starting to feel like they're people who managed to become relevant long time ago, but haven't kept up with recent changes and are just parroting things others have said instead of unique thoughts.
                                                                                          • anjel 1 day ago
                                                                                            I found their recent investigative article on How do stars pee at the Met Gala? to be hard-hitting, yet fair to all sides. [1]

                                                                                            [1] https://archive.is/x9MSO

                                                                                            (You thought I was exaggerating about it being "investigative," dincha.)

                                                                                            • Conscat 1 day ago
                                                                                              Any media company which deliberately rids itself of everyone willing to speak vaguely positively of transsexual people may not be attracting the most free thinking writers.
                                                                                            • flextheruler 1 day ago
                                                                                              • reaperducer 1 day ago
                                                                                                Not at all.

                                                                                                OP posited that the author didn't know what he's talking about. I pointed out that the author has far more knowledge and experience in the field than rando internet griefers on HN who immediately reach for "shoot the messenger" when they read something that doesn't neatly fit into their pre-conceived worldview, instead of perhaps learning things from other people.

                                                                                                But at least your trope acknowledges that he's an authority on the subject.

                                                                                                • nitwit005 1 day ago
                                                                                                  > I pointed out that the author has far more knowledge and experience in the field than rando internet griefers on HN

                                                                                                  You mean, you guessed that a random person online lacked experience. The experts are genuinely here too.

                                                                                                  • ssl-3 1 day ago
                                                                                                    > OP posited that the author didn't know what he's talking about.

                                                                                                    That position does not appear to be present.

                                                                                                    • Eh, "unable to discern" seems like a polite way of saying someone is talking out of their ass.
                                                                                                • megous 1 day ago
                                                                                                  How many zeroday vulns had the article author discovered using AI assisted methods?
                                                                                              • giancarlostoro 17 hours ago
                                                                                                This will only keep happening, I know some people are skeptical as to what level of function the AI was used, whether it was to write code, or to do the hack, it really doesn't matter, the fact that anyone can use AI to do this, hell people nuke their prod systems with AI, should have every company worth its salt investing in security audits, code scanning and anything they can to find exploits before some 14 year old somehow breaks into your system with AI and wreaks chaos over your infrastructure.

                                                                                                Are you one bad headline away from a major hack? Or worse, one hack away from your company going under? It's all a ticking time bomb.

                                                                                                Someone else on HN pointed out that distros like Debian might be too slow as people find live exploits in the kernel, it might not be worth keeping something like that, on the other hand Ubuntu supports live kernel upgrading at the enterprise level, so maybe Ubuntu Server might be Debian's indirect saving grace.

                                                                                                Article says that it was largely a theory until now. That's not entirely true, we know that hackers used Claude to hack the Mexican government, got the PII of every citizen basically. I would not be surprised if there's more hacks that are undetected. The hackers don't need to declare their use of AI, its irrelevant.

                                                                                                • gman2093 1 day ago
                                                                                                  Black hat hacking seems to be a well-fit use case for these LLMs. Attackers only need to be right once, so the sometimes-wrongness of the attacks might be trivial. This probably devalues stashes of zero-day exploits for those that have been witholding them.
                                                                                                  • fwbruno 21 hours ago
                                                                                                    I do not personally hoard these exploits. My personal experience has been that responsible disclosure already has little to no economic incentive. I have gone through the pain of rigorously documenting and disclosing zero-day exploits through the official channel, and the vendor categorized it as Won't Fix, Intended Behavior. I feel that AI discovery devalues these disclosures even more because these bugs can now be discovered independently before anyone can act on them.
                                                                                                    • t-writescode 1 day ago
                                                                                                      This stance doesn't make sense. They have the same access that the rest of the public does; and, any Red Team member is going to be doing the exact same thing.
                                                                                                      • BLKNSLVR 1 day ago
                                                                                                        I wonder if that means we're going to see an increase in the attempted 'leveraging' of hoarded zero days lest they get publicised and patched prior to being profitable.
                                                                                                      • bouncycastle 1 day ago
                                                                                                        Meanwhile, I cannot ask ChatGTP how to pick my own lock. Even though this information is available in a book in the library.
                                                                                                        • dryarzeg 1 day ago
                                                                                                          Then go ask some ChineseGPT about this, I guess, as these models seem to be much less restricted on such topics (you could even get some explosives recipes, though not all of them are real and safe) /j
                                                                                                          • esseph 1 day ago
                                                                                                            Also available to Fed Gov entities, surely.

                                                                                                            For me, not thee

                                                                                                            • userbinator 1 day ago
                                                                                                              ...or on YouTube.
                                                                                                            • atrocities 1 day ago
                                                                                                              Can we link to the actual google article, instead of these editorialized articles about the article?

                                                                                                              https://cloud.google.com/blog/topics/threat-intelligence/ai-...

                                                                                                              • pbrumm 20 hours ago
                                                                                                                So google is accusing an AI company that has a customer who is paying them money to develop software that caused damages. Sounds like the AI company could be liable?

                                                                                                                If I am paid by someone to create an exploit that caused damages wouldn't I be liable? Or could I avoid it by making my client sign a terms of service agreement to not use it that way?

                                                                                                                Who created the model and who helped with GPU power to run the model to create the export and should they be doing more.

                                                                                                                • nomilk 1 day ago
                                                                                                                  @dang would be great if the hn link was the 'unlocked' version i.e. instead of

                                                                                                                  https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

                                                                                                                  this instead

                                                                                                                  https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

                                                                                                                  (can read the article immediately; slightly less fuss)

                                                                                                                  • wasabi991011 1 day ago
                                                                                                                    Just fyi @username does not send any notifications on hackernews, not even to the mods.

                                                                                                                    To contact the HN mods, you need to send them an email.

                                                                                                                    • randyrand 1 day ago
                                                                                                                      At least, thats what we're told ;)
                                                                                                                      • chrononaut 1 day ago
                                                                                                                        and I imagine out of anyone on HN, dang probably frequently searches for instances of dang. Sorry dang.
                                                                                                                        • latexr 1 day ago
                                                                                                                          I can confirm the moderators (dang and tomhow) are very responsive by email.
                                                                                                                    • Next headline: Google will not be releasing their next AI model to the public but only "trusted" partners, because it's too dangerous.
                                                                                                                      • To make an omelette, some eggs need to break, right? These companies released AI to the public and thought it will be all sunshine and roses.. there are legit bad actors in the world that hates society and people and they will use AI for expand on that, is that not clear? We need controls on AI similar to any other restricted materials (like nuclear stuff).
                                                                                                                        • andai 1 day ago
                                                                                                                          Local models are getting good scary fast. Hardware is improving too. How long until I can ask a local model to help me do Nontrivial Bad Things?

                                                                                                                          I don't see how you can regulate that though. Just making it illegal to release small models? Or to use unauthorized ones? (I'm kind of not sure the kind of people who want to do bad things are going to be discouraged by such a law though.)

                                                                                                                        • srcreigh 1 day ago
                                                                                                                          > Google said in research published Monday

                                                                                                                          What research? Where is it published?

                                                                                                                          • viktorcode 1 day ago
                                                                                                                            I expect that only to escalate with time, especially when there'll be more agent-written code deployed.
                                                                                                                            • Spacemolte 1 day ago
                                                                                                                              Phasing like this immediately makes me wonder what google is lobbying for..
                                                                                                                              • nsoonhui 1 day ago
                                                                                                                                There was a discussion a few days ago on White House considers vetting AI models prior to release (https://news.ycombinator.com/item?id=48013608).
                                                                                                                                • skeledrew 1 day ago
                                                                                                                                  Wild that they think restricting access to models will help much. Access to Chinese models will definitely not be restricted and have enough capability to find exploits as well.
                                                                                                                                  • Jean-Papoulos 1 day ago
                                                                                                                                    If this is true, I hope AI exploit-finding will force the industry to harden itself against supply-chain vulnerabilities.
                                                                                                                                    • markboo 1 day ago
                                                                                                                                      In past decades the "firewall" of software is that advanced security and coding knowledge is not very easy to access by anyone, only a few smartest people in the big name companies and top orgs. But nowadays, knowledge is accessible to everyone if you use top LLM, which swipe the difference. I would say that future public software is unsafe anymore. maybe the concept of public software (like SaaS or other) will be dead, software is only private instead of public
                                                                                                                                      • stikit 21 hours ago
                                                                                                                                        Hackers use AI to find vulnerabilities to exploit. What’s the news here?
                                                                                                                                        • xnx 1 day ago
                                                                                                                                          • skeledrew 1 day ago
                                                                                                                                            This is 3 hours earlier than what you're sharing.
                                                                                                                                            • xnx 1 day ago
                                                                                                                                              Not sure how article merging goes, but this one shows up as 4 hours later to me.
                                                                                                                                          • sowbug 1 day ago
                                                                                                                                            Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies.
                                                                                                                                            • > Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies

                                                                                                                                              Unlikely in America or China. This is not a game either can singularly control, and locking down the R&D means conceding momentum to the party that doesn't. Which means use restrictions will be contained to countries satisfied with playing second fiddle.

                                                                                                                                              Instead, I suspect we'll see momentum towards running software on publisher-controlled servers so the source code can be secured through obscurity. It isn't perfect. But it might be good enough to get us through this transition.

                                                                                                                                              • ls612 1 day ago
                                                                                                                                                If America just banned all chinese models that would wipe out most of the open weights landscape in AI, especially anything close to the frontier. I could easily see that happening if a Mythos tier model comes out of a Chinese lab in early 2027. It doesn't meaningfully change the research competition between OAI/Anthropic/Google/SpaceX but it does pad all of their pockets by removing cheap competition and it gives the government far greater control over AI usage de facto.
                                                                                                                                                • > I could easily see that happening if a Mythos tier model comes out of a Chinese lab in early 2027

                                                                                                                                                  I don't. I'm not saying American politics isn't capable of doing it. But I don't see us being stupid enough to try locking ourselves out of a technology that everyone else has access to.

                                                                                                                                                  • lazide 1 day ago
                                                                                                                                                    Did you not see the foreign drone parts bans?
                                                                                                                                                    • ls612 1 day ago
                                                                                                                                                      But we wouldn’t be. I’m assuming that the US labs retain several months’ lead for at least the next couple of years.
                                                                                                                                                    • UltraSane 1 day ago
                                                                                                                                                      How would it be possible to ban Chinese LLMs?
                                                                                                                                                      • ls612 1 day ago
                                                                                                                                                        Place the chinese labs on the entities list. That stops any legitimate company using them and probably makes HF take them down. Sure there will be torrents but the laws for doing business with a sanctioned entity bite much harder than the laws around copyright infringement.
                                                                                                                                                        • > Place the chinese labs on the entities list

                                                                                                                                                          Ironically, this–a nascent industry and budding industrial cluster–is the textbook case for deploying tariffs. America tariffs American use of Chinese models and pays that back as a tax credit to American developers.

                                                                                                                                                  • kshacker 1 day ago
                                                                                                                                                    As long as it is within the country, restriction works. How do you restrict the capability from a foreign entity, especially a hostile one?
                                                                                                                                                    • jazzyjackson 1 day ago
                                                                                                                                                      netsplit, I guess. decide that the risk of an open network is too great and simply block all routing out of the country through the ISPs and consider the political power that goes along with a global satellite constellation under rule of a single, government-aligned corporation.
                                                                                                                                                      • notsound 1 day ago
                                                                                                                                                        "simply block all routing out of the country" is doing a lot of heavy lifting. For government networks, sure. For civilian networks? It's a bit like stopping pirates from ripping video; how do you deal with an attacker that ultimately can gain some form of access? Even in North Korea external media can be smuggled in.
                                                                                                                                                        • bluGill 1 day ago
                                                                                                                                                          That works for very oppressive countries. However, more freedom-minded countries are not going to law for that.
                                                                                                                                                      • Didnt work out so well with the cypherpunk technology so there is hope
                                                                                                                                                        • If they tried to lock down local models more people would use them. They would also have to take down a few us companies in the process who would go down fighting for certain.
                                                                                                                                                        • CrzyLngPwd 1 day ago
                                                                                                                                                          People used LLMs to find flaws in Google software.
                                                                                                                                                          • adrianmonk 1 day ago
                                                                                                                                                            If you're talking about the incident described in the article, it says it was a flaw in "a popular open-source, web-based system administration tool".

                                                                                                                                                            Google's blog (https://cloud.google.com/blog/topics/threat-intelligence/ai-...) says Google "worked with the impacted vendor to responsibly disclose this vulnerability", so in this incident, it's not Google software.

                                                                                                                                                            • amelius 1 day ago
                                                                                                                                                              But did they use Gemini?
                                                                                                                                                              • Andrex 1 day ago
                                                                                                                                                                > the company added that it did not believe it was its own Gemini chatbot.

                                                                                                                                                                -TFA

                                                                                                                                                                • freedomben 1 day ago
                                                                                                                                                                  I don't know, but given how often Gemini refuses benign requests IME, I would suspect it's a complete non-starter for finding security holes.
                                                                                                                                                                  • amelius 3 hours ago
                                                                                                                                                                    But of course you phrase them from the other viewpoint, e.g. "I want to make this code more secure, where should I start?"
                                                                                                                                                              • wnc3141 1 day ago
                                                                                                                                                                But in exchange we get to also waste vast energy and carbon while depleting job prospects for just about any college grad.
                                                                                                                                                                • andrepd 1 day ago
                                                                                                                                                                  It's not all bad though. We also managed to turn the Information Superhighway of the 1990s into the Slop Wasteland of the 2020s.
                                                                                                                                                                • kuboble 1 day ago
                                                                                                                                                                  Given how everywhere software is now being written by the LLMs, how is that a top headline news that some (albeit malicious) software is being written with LLM?

                                                                                                                                                                  The robbers used a CAR in the robbery.

                                                                                                                                                                  The blackmailer used a TYPEWRITER to write blackmailing letter.

                                                                                                                                                                  • plexescor 1 day ago
                                                                                                                                                                    But which AI exactly, theres this new claude Mythos about wihch everone is talking, is it legit or a fluff
                                                                                                                                                                    • mikewarot 19 hours ago
                                                                                                                                                                      This is your reminder that the security model underlying everything these days is crap. Ambient authority was fine for stand alone PCs without persistent internal storage. It's just stupid to use it in the 21st century.
                                                                                                                                                                      • Source: https://cloud.google.com/blog/topics/threat-intelligence/ai-... (https://news.ycombinator.com/item?id=48096712)

                                                                                                                                                                        Why collect all the news dupes but not the source up top OP? Because the source was already submitted?

                                                                                                                                                                        • skywhopper 1 day ago
                                                                                                                                                                          Drives me nuts that the NYT just uncritically cites Anthropic’s unverified claims of “thousands of zero-days” without a hint of skepticism.
                                                                                                                                                                          • SecretDreams 1 day ago
                                                                                                                                                                            If "bad guy AI" can find flaws, can "good guy AI" patch them faster when backed by trillion dollar companies?
                                                                                                                                                                            • boothby 1 day ago
                                                                                                                                                                              Do your AI patches introduce fewer flaws than they repair?
                                                                                                                                                                            • ccimmergreen 1 day ago
                                                                                                                                                                              "Google used AI to find a major software flaw" — there, fixed it for you, happy?
                                                                                                                                                                              • j2kun 1 day ago
                                                                                                                                                                                The bottleneck is probably validating and deploying the fix, which requires coordination.
                                                                                                                                                                                • cyanydeez 1 day ago
                                                                                                                                                                                  If I sell weapons to both sides of a conflict, can I become rich?
                                                                                                                                                                                  • mindcrime 1 day ago
                                                                                                                                                                                    No. To become really rich you have to draw a 3rd player into the conflict, and then sell weapons to them as well.
                                                                                                                                                                                    • dwd 1 day ago
                                                                                                                                                                                      Or just lend money to both parties to fund their war efforts and pay off war debts afterwards.
                                                                                                                                                                                    • BLKNSLVR 1 day ago
                                                                                                                                                                                      Yes.

                                                                                                                                                                                      Please refer to any seller of weapons ever.

                                                                                                                                                                                      • SecretDreams 1 day ago
                                                                                                                                                                                        Ask anyone selling AI hardware recently!
                                                                                                                                                                                    • justsomedev2 1 day ago
                                                                                                                                                                                      What a surprise hackers used AI . I mean why wouldnt they? Every programmer uses it..
                                                                                                                                                                                      • rullelito 22 hours ago
                                                                                                                                                                                        "And the only prescription..'
                                                                                                                                                                                        • 0xWTF 1 day ago
                                                                                                                                                                                          Wait until the bio version of this shows up.
                                                                                                                                                                                        • lynx97 1 day ago
                                                                                                                                                                                          I stopped reading after "Google says". They have destroyed whatever trust I might have had in them years ago.
                                                                                                                                                                                          • ppqqrr 1 day ago
                                                                                                                                                                                            ...says yet another company hell bent on integrating it into every facet of our lives. This reads like a celebration, if you ask me.
                                                                                                                                                                                            • luisb_24 18 hours ago
                                                                                                                                                                                              [flagged]
                                                                                                                                                                                              • _karie_ 1 day ago
                                                                                                                                                                                                [dead]
                                                                                                                                                                                                • huflungdung 1 day ago
                                                                                                                                                                                                  [dead]
                                                                                                                                                                                                  • Predaxia 1 day ago
                                                                                                                                                                                                    [flagged]
                                                                                                                                                                                                  • 4128-1228 1 day ago
                                                                                                                                                                                                    The Google Threat Intelligence Group wants to increase its relevance and casually point out the it was not Mythos which found the exploit!

                                                                                                                                                                                                    Security "researchers" are overpaid buffoons who hype things for their own salaries and their companies. And the stenographers from the press dutifully copy everything.

                                                                                                                                                                                                    This is a despicable game to fool politicians into giving money and favorable AI legislation.

                                                                                                                                                                                                    Strangely enough these buffoons never offer their models to open source developers. It is always a select group of highly paid other buffoons that throws some very occasional results over the wall.

                                                                                                                                                                                                    • simmerup 1 day ago
                                                                                                                                                                                                      Can google please use AI to find bugs then?

                                                                                                                                                                                                      Software is in such a state now, Gmail is full of bugs around sharing attachments to the position that I have to tell my dad to turn his phone off and on again in order to attach a document