35 comments

  • stingraycharles 18 hours ago
    I’m a bit confused by what you’re offering. Is it a voice assistant / AI as described on your GitHub? Or is it more general-purpose / an LLM?

    How does the RAG fit in? Voice-to-RAG seems a bit random as a feature.

    I don’t mean to come across as dismissive, I’m genuinely confused as to what you’re offering.

    • shubham2802 14 hours ago
      RunAnywhere builds software that makes AI models run fast locally on devices instead of sending requests to the cloud.

      Right now, our focus is Apple Silicon.

      Today there are two parts:

      MetalRT - our proprietary inference engine for Apple Silicon. It speeds up local LLM, speech-to-text, and text-to-speech workloads. We’re expanding model coverage over time, with more modalities and broader support coming next.

      RCLI - our open-source CLI that shows this in practice. You can talk to your Mac, query local docs, and trigger actions, all fully on-device.

      So the simplest way to think about us is: we’re building the runtime / infrastructure layer for on-device AI, and RCLI is one example of what that enables.

      Longer term, we want to bring the same approach to more chips and device types, not just Apple Silicon.

      For people asking whether the speedups are real, we’ve published our benchmark methodology and results here: LLM: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e... Speech: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t...

      • mirekrusin 12 hours ago
        From the LLM benchmarks, it looks like it's better to use the open-source uzu than RunAnywhere's proprietary inference engine.

        [0] https://github.com/trymirai/uzu

        • sanchitmonga22 9 hours ago
          uzu is a strong engine; it beat us on Llama-3.2-3B (222 vs 184 tok/s), and we reported that honestly in our benchmarks.

          But looking at the full picture across all four models tested:

          Qwen3-0.6B: MetalRT 658, uzu 627

          Qwen3-4B: MetalRT 186, uzu 165

          Llama-3.2-3B: uzu 222, MetalRT 184

          LFM2.5-1.2B: MetalRT 570, uzu 550

          MetalRT wins 3 of 4. The bigger difference is that MetalRT also handles STT and TTS natively; uzu is LLM-only. For a voice pipeline where you need all three modalities running on one engine with shared memory management, that matters.

          That said, uzu is great open-source software and worth checking out if you're looking for an OSS LLM-only engine on Apple Silicon.

        • concats 3 hours ago
          How does it compare for models of any meaningful size?

          These 0.6B-4B models are, frankly, just amusing curiosities, commonly regarded as too error-prone for any non-demo work.

          People are buying Apple Silicon today because the unified memory allows them to run larger models that are otherwise cost-prohibitive (usually requiring Nvidia server GPUs). It would be much more interesting to see benchmarks for things like Qwen3.5-122B-A10B, GLM-5, or any dense model in the 20B+ range. Thanks.

        • sanchitmonga22 9 hours ago
          Fair question, let me clarify.

          RunAnywhere is an inference company. We build the runtime layer for on-device AI.

          There are two pieces:

          MetalRT, a proprietary GPU inference engine for Apple Silicon. It runs LLMs, speech-to-text, and text-to-speech faster than anything else available (benchmarks: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...). This is our core product.

          RCLI, an open-source CLI (MIT) that demonstrates what MetalRT enables. It wires STT + LLM + TTS into a real voice pipeline with 43 macOS actions, local RAG, and a TUI. Think of it as the reference application built on top of the engine.

          On RAG specifically: voice + document Q&A is a natural pairing for on-device use cases. You have sensitive documents you don't want to upload to the cloud, so you ingest them locally and then ask questions by voice. The retrieval runs at ~4ms over 5K+ chunks, so it feels instant in the voice pipeline. It's not random; it's one of the strongest privacy arguments for running everything locally.
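          To make "hybrid" concrete: the keyword index and the vector index each score every chunk, and the two score lists get fused into one ranking. A toy sketch of that fusion step (purely illustrative; the function name and the alpha mixing weight are made up, not RCLI's actual code):

```python
def fuse_hybrid(keyword_scores, vector_scores, alpha=0.5):
    """Fuse keyword (e.g. BM25) and vector (cosine) scores per chunk.

    Inputs are dicts of chunk_id -> raw score. Each score list is
    min-max normalized so the two scales are comparable, then mixed
    by alpha. Returns chunk ids sorted best-first.
    """
    def norm(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {cid: (s - lo) / span for cid, s in scores.items()}

    kw, vec = norm(keyword_scores), norm(vector_scores)
    ids = set(kw) | set(vec)
    fused = {cid: alpha * kw.get(cid, 0.0) + (1 - alpha) * vec.get(cid, 0.0)
             for cid in ids}
    return sorted(ids, key=lambda cid: fused[cid], reverse=True)
```

          At 5K chunks the fusion itself is negligible; a ~4ms budget is dominated by the two index lookups.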

          The longer-term vision is bringing MetalRT to more chips and platforms, so any developer can get cloud-competitive inference on-device with minimal integration effort.

          • glitchc 17 hours ago
            From the TFA: Document Intelligence (RAG): Ingest docs, ask questions by voice — ~4ms hybrid retrieval.

            Seems pretty clear. You can supply documents to the model as input and then verbally ask questions about them.

            • drcongo 18 hours ago
              I came to the comments here to see if anyone had worked out what it is, so you're not alone.
            • vessenes 18 hours ago
              Just tried it. Really cool, and a fun tech demo with rcli. I filed a bug report; not everything loads properly when installed via Homebrew.

              Quick request: unsloth quants; they're usually better bit-for-bit. Or, more generally, a UI for Hugging Face model selection. I understand you won't be able to serve everything, but I want to mix and match!

              Also - grounding:

              "open safari" (safari opens, voice says: "I opened safari") "navigate to google.com in safari" (nothing happens, voice says: "I navigated to google.com")

              Anyway, really fun.

              • sanchitmonga22 9 hours ago
                Thanks for trying it and for filing the bug, we're looking into the homebrew install issue.

                On unsloth quants: agreed, they're consistently better bit-for-bit. Adding broader quantization format support (including unsloth's approach) is on the roadmap. Right now MetalRT works with MLX 4-bit files and GGUF Q4_K_M, we want to expand that.

                On the grounding issue ("navigate to google.com" not actually navigating): you're right, that's a gap. The "open_url" action exists but the LLM doesn't always route to it correctly, especially with compound commands. Small models (0.6B-1.2B) have limited tool-calling accuracy, upgrading to Qwen3.5 4B via rcli upgrade-llm helps significantly. We're also improving the action routing prompts.

                Appreciate the detailed feedback, this is exactly what we need.

                • blks 14 hours ago
                  > "open safari" (safari opens, voice says: "I opened safari") "navigate to google.com in safari" (nothing happens, voice says: "I navigated to google.com")

                  So you’re describing a core feature that's broken. The application fails at the easiest test.

                  • sanchitmonga22 9 hours ago
                    Fair criticism. The action executed on the LLM side but didn't translate to the correct macOS action: the model hallucinated success instead of routing to the open_url tool.

                    This is a known limitation with small LLMs (0.6B-1.2B) doing tool calling. They sometimes confuse "I know what you want" with "I did it." Upgrading to a larger model improves tool-calling accuracy significantly.

                    We're also working on verification: having the pipeline confirm the action actually succeeded before reporting back. That's a fair expectation and we should meet it.

                  • Tacite 16 hours ago
                    How did you try it? You said on github it doesn't work.
                    • wlesieutre 16 hours ago
                      They said it didn't work installed from homebrew, so I assume they went back and did the curl | bash install option
                      • Tacite 16 hours ago
                        This option didn't work either. I tried it. Also, the install script… installs Brew. So in the end, it's the same?
                        • emmelaich 9 hours ago
                          Oh dear.

                              if ! command -v brew &>/dev/null; then
                                  info "Installing Homebrew..."
                                  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
                                  eval "$(/opt/homebrew/bin/brew shellenv)"
                              fi
                        • RayVR 10 hours ago
                          That’s hilarious
                      • vessenes 15 hours ago
                        It loads after those errors. Tap space and talk to it.
                    • jonhohle 17 hours ago
                      If I send a Portfile patch, would you consider MacPorts distribution?
                      • halostatue 15 hours ago
                        You're welcome to add me as a co-maintainer on this if you submit it to macports/macports-ports:

                             {macports.halostatue.ca:austin @halostatue}
                        
                        I maintain https://github.com/macports/macports-ports/blob/master/sysut... amongst other things regularly.
                        • sanchitmonga22 9 hours ago
                          Absolutely, we'd welcome a Portfile contribution. Happy to review and merge. If halostatue wants to co-maintain, even better.

                          Feel free to open a PR or issue on the RCLI repo and we'll coordinate.

                          • AmanSwar 17 hours ago
                            yes please
                          • jonplackett 3 hours ago
                            Really thought this was called Meta IRT and assumed it was just Facebook spyware.
                            • mips_avatar 14 hours ago
                              Have you tried any really big models on a mac studio? I'm wondering what latency is like for big qwens if there's enough memory.
                              • asimovDev 2 hours ago
                                I am running 80b Qwen coder next 4bit quant MLX version on a 96GB M3 MacBook and it responds quickly, almost immediately. I can fit the model + 128k context comfortably into the memory
                                • sanchitmonga22 9 hours ago
                                  Not yet with MetalRT; right now we support models up to ~4B parameters (Qwen3 4B, Llama 3.2 3B, LFM2.5 1.2B). These are optimized for the voice pipeline use case where decode speed and latency matter more than model size.

                                  Expanding to larger models (7B, 14B, 32B) on machines with more unified memory is on the roadmap. The Mac Studio with 192GB would be an interesting target, a 32B model at 4-bit would fit comfortably and MetalRT's architectural advantages (fused kernels, minimal dispatch overhead) should scale well.

                                  What model / use case are you thinking about? That helps us prioritize.

                                  • mips_avatar 3 hours ago
                                    Well, it's just that I've noticed in the agents I've built that Qwen doesn't get reliable until around 27B, so unless you want to RL a small Qwen, I don't think I would get much useful help out of it.
                                • Reebz 7 hours ago
                                  Do you have plans to port your proprietary library MetalRT to mobile devices? These performance gains would be a boon for privacy-centric mobile applications.
                                  • sanchitmonga22 6 hours ago
                                    Yes, mobile is a primary focus and it is on the roadmap. The same Metal GPU pipeline that powers MetalRT on macOS maps directly to iOS (same Apple Silicon, same Metal API).
                                    • shubham2802 7 hours ago
                                      Yes.
                                    • rushingcreek 16 hours ago
                                      Very cool, congrats! I'm curious how you were able to achieve this given Apple's many undocumented APIs. Does it use private Neural Engine APIs or fully public Metal APIs?

                                      Either way, this is a tremendous achievement and it's extremely relevant in the OpenClaw world where I might not want to have sensitive information leave my computer.

                                      • sanchitmonga22 9 hours ago
                                        Fully public Metal APIs, no private frameworks, no Neural Engine, no undocumented entitlements.

                                        MetalRT is built on the public Metal API. The performance comes from how we use the GPU, not from accessing anything Apple doesn't document.

                                        We specifically chose to stay on public APIs so that MetalRT works on any Apple Silicon Mac without special entitlements or SIP workarounds. This also means it's App Store-compatible for future macOS/iOS distribution.

                                        The results speak for themselves: 1.1-1.19x faster than Apple's own MLX on identical model files, 4.6x faster on STT, 2.8x faster on TTS. Full methodology published here: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...

                                        Appreciate the kind words, the "OpenClaw world" framing is exactly why we built this.

                                      • brainless 2 hours ago
                                        I am interested in MetalRT. I am an indie builder, focused mostly on building products with LLM assistance that run locally. Like: https://github.com/brainless/dwata

                                        I would be interested to know whether MetalRT can be used by other products. Do you have plans to support open-source products?

                                        • mnafees 11 hours ago
                                          Seems like you are leaking an ElevenLabs API key in your web demo. The OpenAI completions endpoint also has the API key in the request header but that seems to already be revoked and is returning a 401.
                                          • shubham2802 10 hours ago
                                            I am pretty sure we don't have balance. It's a bait :)
                                            • neya 9 hours ago
                                              Sorry, but, this is not really a confidence inspiring response. Accepting the mistake and fixing the leak altogether would have been the better way to handle this. This is a developer forum, we all make mistakes. Framing it as bait just sounds like bad PR management.

                                              How can we trust your product if you can't fulfil basic security 101? Not being harsh but this kind of lax response for a serious mistake is not acceptable to me. Imagine I recommend you to my company and you end up leaking out our credentials and respond with something like this.

                                              I might be picky here about this, but long term trust starts with accountability.

                                              All the best on your product launch and cheers.

                                              • shubham2802 6 hours ago
                                                My earlier reply was too glib. Even though the key had no usable balance, it still should not have been exposed. We’re removing it now and fixing the demo flow so this doesn’t happen again. Thanks for calling it out. Cheers!
                                                • neya 5 hours ago
                                                  No worries, like I said, we all make mistakes. Live and learn. All the best.
                                                  • word_saladist 6 hours ago
                                                    This is pretty far off from being an intelligible sentence. I wonder if it’s a symptom of people getting used to LLMs being able to parse intent and meaning from fragmentary, disjointed text such as this.
                                                  • shubham2802 7 hours ago
                                                    I see, sure will fix it asap. Again, thanks for feedback.
                                                    • gigatexal 6 hours ago
                                                      Yeah wow. These responses to constructive feedback show an immature team full of hubris. This whole thing is DOA to me. Thank you HN for showing me this.
                                              • shekhar101 13 hours ago
                                                Tried this and really liking it so far. Question - is there a diarization support in the tui app or any of the models MetalRt supports? Any plans to add it if not already supported?
                                              • shubham2802 12 hours ago
                                                It also does some memory management: it remembers previous context, plus there's an auto-compact feature.

                                                Additionally, personality feature - try it out!! Super fun :)

                                                • brian-armstrong 9 hours ago
                                                  What kind of self-disrespecting dev is using MacOS in TYOOL 2026?
                                                  • JSR_FDED 6 hours ago
                                                    The ones who like using local LLMs

                                                    The ones who like top-notch hardware

                                                    The ones who build stuff and don’t make a religious issue out of everything

                                                    • ReaderOfRunes 7 hours ago
                                                      Unfortunately it's the only laptop some companies provide their developers
                                                    • Tacite 18 hours ago
                                                      Doesn't work. " zsh: segmentation fault rcli"
                                                      • esafak 17 hours ago
                                                        You could share your setup details, on GH if not here, to make it actionable.
                                                        • Tacite 16 hours ago
                                                          I did on GitHub. This looks vibecoded? EDIT: Dev is using Claude Code, as stated in their GitHub updates.
                                                          • sanchitmonga22 8 hours ago
                                                            We use AI tools in our workflow, same as a lot of teams at this point. The pipeline architecture, Metal integration, and engine design are ours. The code is MIT and open for anyone to read and judge the quality directly.
                                                      • tiku 17 hours ago
                                                        Personally I'm so disappointed about the state of local AI. Only old models run "decent", but decent is way too slow to be usable.
                                                        • sanchitmonga22 9 hours ago
                                                          This is exactly the problem we're trying to solve. The models themselves have gotten surprisingly capable at small sizes, Qwen3.5 4B with 262K context, LFM2 1.2B for fast tool calling, but the inference infrastructure hasn't kept up.

                                                          When people say "local AI is too slow," they usually mean the engine is too slow, not the model. A 4B model at 186 tok/s (MetalRT on M4 Max) feels genuinely responsive for interactive chat. The same model at 87 tok/s (llama.cpp) feels sluggish. Same weights, same quality, 2x the speed: that's a usability cliff.
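                                                          Back-of-envelope, here is what that speed difference means in wall-clock terms (assuming a 200-token reply, a made-up but typical length; just arithmetic on the decode rates above):

```python
reply_tokens = 200  # an assumed, typical short chat reply
for engine, tok_per_s in [("MetalRT", 186), ("llama.cpp", 87)]:
    wait = reply_tokens / tok_per_s
    print(f"{engine}: {wait:.1f}s to finish {reply_tokens} tokens")
# MetalRT: 1.1s, llama.cpp: 2.3s -- the second is a noticeable wait
```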

                                                          We think the gap between cloud and on-device inference is an infrastructure problem, not a model problem. That's what we're working on.

                                                        • woadwarrior01 10 hours ago
                                                          > Apple M3 or later required. MetalRT uses Metal 3.1 GPU features available on M3, M3 Pro, M3 Max, M4, and later chips. M1/M2 support is coming soon. On M1/M2, RCLI automatically falls back to the open-source llama.cpp engine.

                                                          So, no support for M5 Neural Accelerators, eh? (Requires Metal 4) ¯\_(ツ)_/¯

                                                          • sanchitmonga22 9 hours ago
                                                            Ha, not yet. Metal 4 is interesting and we're keeping an eye on it.

                                                            MetalRT currently targets Metal 3.1 GPU compute because that's where we get the most control over the decode pipeline. Neural Engine / ANE is powerful for fixed-shape inference (vision, classification) but autoregressive LLM decode, where you're generating one token at a time with dynamic KV cache, doesn't map as cleanly to ANE today.

                                                            That said, if Metal 4 opens up new capabilities that help with sequential token generation or gives better programmable access to the neural accelerator, we'll absolutely look at it. The M5 will be a fun chip to benchmark on.

                                                            • woadwarrior01 3 hours ago
                                                              > Neural Engine / ANE is powerful for fixed-shape inference (vision, classification) but autoregressive LLM decode, where you're generating one token at a time with dynamic KV cache, doesn't map as cleanly to ANE today.

                                                              What does the ANE have to do with this?

                                                              Neural Engine (ANE) and the M5 Neural Accelerator (NAX) are not the same thing. NAX can accelerate LLM prefill quite dramatically, although autoregressive decoding remains memory bandwidth bound.

                                                              I suspect the biggest blocker for Metal 4 adoption is the macOS Tahoe 26 requirement.

                                                          • alfanick 17 hours ago
                                                            I'm not looking for STT->AI->TTS, I'm looking for truly good voice-to-text experience on Linux (and others). Siri/iOS-Dictation is truly good when it comes to understanding the speech. Something at this level on Linux (and others) would be great. Yeah, always listening, maybe sending the data somewhere, but give me the UX: hidden latency, optimizing for first chars recognized, a good (virtual) input device.
                                                            • coder543 17 hours ago
                                                              > Siri/iOS-Dictation is truly good when it comes to understanding the speech.

                                                              What...? It is terrible, even compared to Whisper Tiny, which was released years ago under an Apache 2.0 license so Apple could have adopted it instantly and integrated it into their devices. The bigger Whisper models are far better, and Parakeet TDT V2 (English) / V3 (Multilingual) are quite impressive and very fast.

                                                              I have no idea what would make someone say that iOS dictation is good at understanding speech... it is so bad.

                                                              For a company that talks so much about accessibility, it is baffling to me that Apple continues to ship such poor quality speech to text with their devices.

                                                              • derefr 17 hours ago
                                                                Maybe they have exactly the accent iOS dictation was trained to recognize.
                                                                • fragmede 15 hours ago
                                                                  Terrible? It's fine. What's your accent that it's terrible? It even pulls last names from my address book and spells them right.
                                                                  • coder543 15 hours ago
                                                                    Terrible relative to everything else that exists today. I have a neutral American accent.

                                                                    Maybe you just don’t know what you’re missing? Google’s default speech to text is still bad compared to Whisper and Parakeet, but even Google’s is markedly better than Apple’s.

                                                                    I cannot think of a single speech to text system that I’ve run into in the past 5 years that is less accurate than the one Apple ships.

                                                                    Sure, Apple’s speech to text is incredible compared to what was on the flip phone I had 20 years ago. Terrible is relative. Much better options exist today, and they’re under very permissive licenses. Apple’s refusal to offer a better, more accessible experience to their users is frustrating when they wouldn’t even have to pay a licensing fee to ship something better. Whisper was released under a permissive license nearly 4 years ago.

                                                                    Apple also restricts third party keyboards to an absurdly tiny amount of memory, so it isn’t even possible to ship a third party keyboard that provides more accurate on-device speech to text without janky workarounds (requiring the user to open the keyboard's own app first each time).

                                                                    • catlifeonmars 7 hours ago
                                                                      > I have a neutral American accent

                                                                      This is tangential but is _any_ accent objectively neutral?

                                                                      • coder543 7 hours ago
                                                                        Neutral here means not strongly identifiable as any particular regional American accent. Some people have very strong regional accents, some don’t. It is still clearly an American accent, not British or anything else.
                                                                      • CamJN 14 hours ago
                                                                        As someone who tried every TTS in existence a few years ago for some product work, Apple’s is so consistently better that we wound up getting a bunch of Apple stuff just for the TTS.
                                                                        • coder543 14 hours ago
                                                                          “A few years ago” sounds like it could be before the modern era of STT, as defined by when Whisper was released.

                                                                          Your comment says TTS, which is different from what I’m discussing, though, so there might be some confusion.

                                                                  • sanchitmonga22 9 hours ago
                                                                    Understood, you want dictation, not a chatbot. That's a valid and different use case.

                                                                    RCLI is Apple Silicon only today because MetalRT is built on Metal. For Linux, the closest thing to what you're describing would be building a virtual input device on top of Whisper or Parakeet (which RCLI supports as STT backends). Parakeet TDT 0.6B has ~1.9% WER; that's very close to production dictation quality.

                                                                    The missing piece on Linux isn't the model, it's the integration: a daemon that captures mic audio, runs STT with hidden latency (streaming partial results), and injects text as keyboard input. sherpa-onnx (https://github.com/k2-fsa/sherpa-onnx) supports Linux and has streaming STT; it might be the best starting point for what you're after.
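                                                                    The usual trick behind "hidden latency" is to inject only the stable prefix of consecutive streaming partials, so typed text never has to be retracted. A toy sketch of that commit logic (hypothetical partial hypotheses as plain strings; not a real sherpa-onnx API):

```python
def commit_stable_prefix(partials):
    """Yield newly-stable text as streaming STT partials arrive.

    Each partial is the recognizer's current best hypothesis for the
    whole utterance so far. We only emit the prefix shared by
    consecutive hypotheses, so injected text never needs retracting;
    the final hypothesis is flushed at the end of the stream.
    """
    committed = ""
    prev = ""
    for hyp in partials:
        # length of the longest common prefix of the last two hypotheses
        stable = 0
        for a, b in zip(prev, hyp):
            if a != b:
                break
            stable += 1
        if stable > len(committed):
            yield hyp[len(committed):stable]
            committed = hyp[:stable]
        prev = hyp
    if prev and len(prev) > len(committed):
        yield prev[len(committed):]  # flush the final hypothesis
```

                                                                    A real daemon would feed this from the recognizer's partial-result callback and pipe the yielded text into a virtual keyboard device (e.g. uinput on Linux).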

                                                                    We're focused on Apple Silicon for now but broader platform support is on the roadmap.

                                                                    • swindmill 17 hours ago
                                                                      Have you tried https://handy.computer ?
                                                                      • dajonker 5 hours ago
                                                                        I use voxtype on my Linux machine with parakeet. Super fast and regularly even gets the tech lingo correct. You can configure prompts and keywords to help with that as well.
                                                                        • fragmede 15 hours ago
                                                                          > I'm not looking for STT->AI->TTS, I'm looking for truly good voice-to-text experience

                                                                          Umm, ah, wait no, uhh yes you are. Unless, hang on, you are possessed with greater umm speech capabilities than most, wait nevermind start over. Unless you never make a mistake while talking, you want AI to take out the "three, wait no four" and just leave the output with "four" from what you actually spoke. Depending on your use case.

                                                                          • nostrebored 9 hours ago
                                                                            It’s the TTS layer that is weird. I’m in the same boat — speech out is just a much worse modality than text when possible.
                                                                            • sanchitmonga22 8 hours ago
                                                                              Agreed for a lot of use cases. RCLI supports text-only mode (--no-speak flag, or just type in the TUI instead of using push-to-talk). TTS makes sense for hands-free / eyes-free scenarios, but we don't force it.
                                                                        • DetroitThrow 18 hours ago
                                                                          Wow, this is such a cool tool, and love the blog post. Latency is killer in the STT-LLM-TTS pipeline.

                                                                          Before I install, is there any telemetry enabled here or is this entirely local by default?

                                                                        • RationPhantoms 16 hours ago
                                                                          This doesn't work on any of the methods I've tried.
                                                                        • jaimex2 7 hours ago
                                                                          I don't have a Mac
                                                                          • computerex 17 hours ago
                                                                            Amazing, this is what I am trying to do with https://github.com/computerex/dlgo
                                                                            • sanchitmonga22 9 hours ago
                                                                              Cool, just checked out dlgo. Looks like you're targeting Go bindings for on-device inference? Different approach but same conviction that this should run locally. Happy to compare notes if you want to chat about Metal optimization or pipeline architecture.
                                                                            • tristor 18 hours ago
                                                                              > What would you build if on-device AI were genuinely as fast as cloud?

                                                                              I think this has to be the future for AI tools to really be truly useful. The things that are truly powerful are not general purpose models that have to run in the cloud, but specialized models that can run locally and on constrained hardware, so they can be embedded.

                                                                          I'd love to see this able to be added in-path as an audio passthrough device so you can add on-device native transcription into any application that does audio, such as in video conferencing applications.

                                                                              • sanchitmonga22 9 hours ago
                                                                                This is a great idea. A virtual audio device that sits in the path of any audio stream and provides live transcription, that would be huge for video conferencing, lectures, podcasts.

                                                                                MetalRT's STT numbers make this feasible: 70 seconds of audio transcribed in 101ms means you could process audio chunks in real-time with massive headroom. The latency would be imperceptible.

                                                                                We haven't built this yet but it's a compelling use case. CoreAudio supports virtual audio devices (aggregate devices) that could pipe audio through the pipeline. If anyone in this thread has experience building macOS audio HAL plugins and wants to collaborate, we're very open to contributions, RCLI is MIT.
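
                                                                                As a rough illustration of the headroom math above (70 s of audio in 101 ms), here's a back-of-envelope sketch. The 1-second chunk size is an assumption for illustration, not something RCLI exposes:

```python
# Streaming-headroom estimate for a virtual-audio-device transcriber,
# based only on the benchmark figures quoted above (70 s in 101 ms).

BENCH_AUDIO_S = 70.0   # seconds of audio in the published STT benchmark
BENCH_WALL_S = 0.101   # wall-clock seconds to transcribe it

rtf = BENCH_WALL_S / BENCH_AUDIO_S   # real-time factor (lower = faster)
CHUNK_S = 1.0                        # hypothetical streaming chunk length
per_chunk_s = CHUNK_S * rtf          # expected compute per 1 s audio chunk
headroom = CHUNK_S / per_chunk_s     # margin between audio rate and compute rate

print(f"real-time factor: {rtf:.5f}")                          # 0.00144
print(f"compute per 1 s chunk: {per_chunk_s * 1000:.2f} ms")   # 1.44 ms
print(f"headroom: ~{headroom:.0f}x real time")                 # ~693x
```

                                                                                At that real-time factor, even a conservative 10x slowdown from short-chunk overheads would still leave the pipeline far ahead of the incoming audio stream.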

                                                                              • jawns 16 hours ago
                                                                                Based on the demo video, the TTS sounds like it's 10 years out of date. I would not enjoy interacting with it.
                                                                                • sanchitmonga22 9 hours ago
                                                                                  The default TTS voice (Piper) is a lightweight model optimized for speed over quality. It's fast but yeah, it doesn't sound great.

                                                                                  If you install Kokoro TTS (rcli models > TTS section), the voice quality is dramatically better, it's a neural TTS model with 28 different voices. MetalRT synthesizes Kokoro at 178ms for short responses, so you don't pay a speed penalty for the upgrade.

                                                                                  We should probably make Kokoro the default or at least make the upgrade path more obvious in the first-run experience. Fair feedback.

                                                                                  • AmanSwar 15 hours ago
                                                                                    It's Kokoro TTS, not ours; we have a range of options.
                                                                                    • shubham2802 15 hours ago
                                                                                      Just need a few days, our catalog of models will be out soon!
                                                                                  • focusgroup0 17 hours ago
                                                                                    The fact that Apple didn't ship this in the years after the Siri acquisition is an indictment of its product leadership
                                                                                    • sanchitmonga22 9 hours ago
                                                                                      Apple has the silicon, the frameworks (MLX, CoreML), and the models. The gap is putting it all together into a fast, unified on-device pipeline. That's what we're focused on, and honestly, we think Apple will eventually ship something similar natively. Until then, we're trying to show what's possible today on their hardware.
                                                                                      • liuliu 16 hours ago
                                                                                        This is not different from mlx-lm other than it uses a closed-source inference engine.
                                                                                        • sanchitmonga22 9 hours ago
                                                                                          Respectfully, the benchmarks show it is different.

                                                                                          MetalRT and mlx-lm use the exact same model files, identical 4-bit MLX weights. That makes it a pure engine-to-engine comparison:

                                                                                          LLM decode: MetalRT is 1.10-1.19x faster across all models tested

                                                                                          STT: 70s audio in 101ms vs 463ms (4.6x faster)

                                                                                          TTS: 178ms vs 493ms (2.8x faster)

                                                                                          mlx-lm is a general-purpose array computation framework that also supports inference. MetalRT is purpose-built for inference only. That focus is where the performance gap comes from.

                                                                                          You can reproduce these numbers yourself: rcli bench runs the same benchmarks we published. Full methodology: https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...

                                                                                          Yes, MetalRT is closed-source. We're transparent about that. The performance difference is the reason it exists.

                                                                                          • AmanSwar 16 hours ago
                                                                                            [dead]
                                                                                        • j45 16 hours ago
                                                                                          "Apple M3 or later required. MetalRT uses Metal 3.1 GPU features available on M3, M3 Pro, M3 Max, M4, and later chips. M1/M2 support is coming soon. On M1/M2, RCLI automatically falls back to the open-source llama.cpp engine."
                                                                                          • Tacite 16 hours ago
                                                                                            Funny you mention that because on their GitHub they just pushed an update saying that it didn't work on M3 and M4.
                                                                                            • shubham2802 15 hours ago
                                                                                              Sorry about that, but this is what the GitHub README says: Apple M3 or later required. MetalRT uses Metal 3.1 GPU features available on M3, M3 Pro, M3 Max, M4, and later chips. M1/M2 support is coming soon. On M1/M2, RCLI automatically falls back to the open-source llama.cpp engine.
                                                                                          • john_strinlai 17 hours ago
                                                                                            i knew i recognized this name from somewhere.

                                                                                            they are a company that registers domains similar to their main one, and then uses those domains to spam people they scrape off of github without affecting their main domain reputation.

                                                                                            edit: here is the post https://news.ycombinator.com/item?id=47163885

                                                                                            ----

                                                                                            edit2: it appears that RunAnywhere is getting damage-control help by dang or tom.

                                                                                            this comment, at this time, has 23 upvotes yet is below 2 grey comments (i.e. <=0 upvotes) that were posted at roughly the same time (1 before, 1 after) -- strong evidence of artificial ordering by the moderators. gross.

                                                                                            • Imustaskforhelp 17 hours ago
                                                                                              Yup. The craziest aspect was that they had intentionally bought the domain just 1 month prior to that whole fiasco.

                                                                                              Maybe it's just (n=2) that only we both remember this fiasco, but I doubt that. I don't really understand how this got so many upvotes in such a short frame of time, especially given the company's history of not doing good things, to say the very least... I am especially skeptical of it.

                                                                                              Thoughts?

                                                                                              Edit: I looked deeper into Sanchit's Hacker News id and found that 3 days ago they posted the same thing as far as I can tell (the only difference being that it used the runanywhere.ai domain rather than github.com/runanywhere, but this may well be because on Hacker News you can't submit the same link twice in a short period, so they are definitely skirting that rule by posting the GitHub link)

                                                                                              Another point, that post (https://news.ycombinator.com/item?id=47283498) got stuck at 5 points till right now (at time of writing)

                                                                                              So this got a lot crazier now, which is actually wild.

                                                                                              • john_strinlai 17 hours ago
                                                                                                i unfortunately dont know enough about vote patterns on hn, or what is expected/normal voting behavior.

                                                                                                what i do know is that their name is etched into my mind under the category of "shady, never do business with them".

                                                                                                • Imustaskforhelp 17 hours ago
                                                                                                  I was writing my initial comment with no mention of the voting behaviour until I accidentally reloaded or something and found the upvote count had risen by a decent amount. Then I got suspicious, reloaded again maybe 20 seconds or under a minute later, and saw the votes rise that much (read my other comment).

                                                                                                  I was writing the comment when it was at 18 upvotes, and it jumped to 24 upvotes all of a sudden, which is when I got suspicious.

                                                                                                  see at 2026-03-10T17:38-39:00Z timeframe within this particular graph(0)

                                                                                                  (0):https://news.social-protocols.org/stats?id=47326101

                                                                                            • pzo 15 hours ago
                                                                                              FWIW this RCLI is only MIT licensed, but their engine MetalRT is commercial. Not sure about the license of their models; I guess also not MIT. So IMHO this repo is misleading.

                                                                                              Not sure why they decided to reinvent the wheel and write yet another ML engine (MetalRT) that is proprietary. I would most likely bet on CoreML, since it has support for the ANE (Apple NPU), or on MLX.

                                                                                              Other popular repos for such tasks I would recommend:

                                                                                              https://github.com/FluidInference/FluidAudio

                                                                                              https://github.com/DePasqualeOrg/mlx-swift-audio

                                                                                              https://github.com/Blaizzy/mlx-audio

                                                                                              https://github.com/k2-fsa/sherpa-onnx

                                                                                              • sanchitmonga22 9 hours ago
                                                                                                Fair feedback on the README clarity, we've updated it to make the licensing distinction between RCLI (MIT) and MetalRT (proprietary) more prominent. That should have been clearer from day one.

                                                                                                On why we built MetalRT instead of using CoreML or MLX:

                                                                                                CoreML is optimized for classification and vision models, not autoregressive text generation. ANE is powerful for fixed-shape workloads but doesn't handle the dynamic shapes in LLM decode well.

                                                                                                MLX is much closer to what we need, and we respect what Apple has built. But MLX is a general-purpose array framework, it carries abstractions for developer ergonomics and portability that add overhead. MetalRT is purpose-built for inference only, and the numbers reflect that: 1.1-1.2x faster on LLMs (same model files) and 4.6x faster on STT.

                                                                                                We also needed one unified engine for LLM + STT + TTS rather than stitching three separate runtimes together. That doesn't exist in any of the alternatives listed.

                                                                                                The libraries you mentioned (FluidAudio, mlx-swift-audio, sherpa-onnx) are good projects. RCLI actually uses sherpa-onnx as its fallback engine when MetalRT isn't installed. They solve different problems at different layers of the stack.

                                                                                                • shubham2802 15 hours ago
                                                                                                  Updating the readme asap - but thanks for the feedback. Also, please check out a few things: https://www.runanywhere.ai/blog/metalrt-speech-fastest-stt-t... https://www.runanywhere.ai/blog/metalrt-fastest-llm-decode-e...
                                                                                                  • antipaul 15 hours ago
                                                                                                    Nice list.

                                                                                                    What about for on-device RAG use cases?

                                                                                                    • sanchitmonga22 8 hours ago
                                                                                                      RCLI includes local RAG out of the box. You can ingest PDFs, DOCX, and plain text, then query by voice or text:

                                                                                                      rcli rag ingest ~/Documents/notes
                                                                                                      rcli ask --rag ~/Library/RCLI/index "summarize the project plan"

                                                                                                      It uses hybrid retrieval (vector + BM25 with Reciprocal Rank Fusion) and runs at ~4ms over 5K+ chunks. Embeddings are computed locally with Snowflake Arctic, so nothing leaves your machine.
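
                                                                                                      For anyone curious what the fusion step looks like, here's a minimal sketch of Reciprocal Rank Fusion over two ranked lists. The doc ids and the k=60 constant are illustrative assumptions, not RCLI's actual code:

```python
# Reciprocal Rank Fusion (RRF): merge ranked result lists using only
# each document's rank, score(d) = sum over lists of 1 / (k + rank(d)).

def rrf(rankings, k=60):
    """Fuse ranked lists of doc ids into one list, best-first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]  # e.g. embedding-similarity order
bm25_hits = ["doc_b", "doc_d", "doc_a"]    # e.g. keyword-match order

print(rrf([vector_hits, bm25_hits]))       # ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

                                                                                                      RRF only needs ranks, never raw scores, which is why it can combine vector similarity and BM25 cleanly even though their score scales are incomparable.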

                                                                                                    • AmanSwar 15 hours ago
                                                                                                      [dead]
                                                                                                    • 7kmph 10 hours ago
                                                                                                      this is the company that cold emailed many people via email on GitHub.
                                                                                                      • david_shaw 17 hours ago
                                                                                                        I think the title should read "RunAnywhere," not "RunAnwhere."
                                                                                                        • Imustaskforhelp 17 hours ago
                                                                                                          Dang changed the title and it seems he made a minor error doing it. Must have been a typo on his side while changing it, and that's okay! I think dang will update it sooner rather than later.

                                                                                                          Edit: just reloaded, its fixed now.

                                                                                                          • dang 15 hours ago
                                                                                                            tomhow fixed it. I had looked at it multiple times and not noticed!
                                                                                                        • Imustaskforhelp 17 hours ago
                                                                                                          I am just gonna link the stats of this Hacker News post[0] and let the public decide the rest, because for context, this is the same company mentioned in a blow-up post 12 days ago that got 600 upvotes, and they didn't respond back then[1]. (I have found it rare for posts to have such a 2x factor within minutes of posting; that's just my personal observation. Usually one gets it after an hour or two or three.)

                                                                                                          I was curious so I did some more research into the company and found more shady stuff going on, like intentionally buying new domains a month prior so that sending the spam wouldn't drag down the mail reputation of their main website. You can read my comment here[2]

                                                                                                          Just to be on the safe side here, @dang (yes, pinging doesn't work, but still), can you give us some average stats on the people who upvoted this, and an internal investigation into whether botting was done? I can be wrong about it and I don't ever mean to harm any company, but I can't in good faith understand this.

                                                                                                          Some stats I would want are: average karma / words written / account creation date of the accounts that upvoted this post. I'd also like to know the conclusion of the internal investigation, if one takes place.

                                                                                                          [There is a bit of a conflict of interest with this being a YC product, but I trust the Hacker News moderators and dang to do what's right, yeah]

                                                                                                          I am just skeptical, that's all, and this is my opinion. I just want to provide some historical context into this company and I hope that I am not extrapolating too much.

                                                                                                          It's just really strange to me, that's all.

                                                                                                          [0]: https://news.social-protocols.org/stats?id=47326101 (see the expected upvotes vs real upvotes and the context of this app and negative reception and everything combined)

                                                                                                          [1]: Tell HN: YC companies scrape GitHub activity, send spam emails to users: https://news.ycombinator.com/item?id=47163885

                                                                                                          [2]:https://news.ycombinator.com/reply?id=47165788

                                                                                                          • dang 17 hours ago
                                                                                                            The upvotes on the current post are fine - the reason you saw the submission rise in rank is that startup launch posts by YC startups get special placement on the front page (this is in the FAQ: https://news.ycombinator.com/newsfaq.html). Not every such post does, but some do.

                                                                                                            In other words, your perception wasn't wrong, but the interpretation was off. I've put "Launch HN" and "YC W26" back in the title to make that clearer - I edited them out earlier, which was my mistake.

                                                                                                            As for the booster comments, those are pretty common on launch threads and often pretty innocent - most people who aren't active HN users have no idea that it's against the rules. We do our best to communicate about that, but it's not a cardinal sin—there are far worse offenses.

                                                                                                            • john_strinlai 16 hours ago
                                                                                                              hi dang. while you are here -- are comments artificially ordered on this post?

                                                                                                              https://news.ycombinator.com/item?id=47326953 is grey (i.e <=0 karma). my top-level comment is at 14 karma. we posted within 15 minutes of each other. their comment is higher up the page. ive never seen something like that before.

                                                                                                              the two posts calling out unethical behavior have been living at the bottom of this post the entire time, until a couple of actually [flagged] comments ended up under them.

                                                                                                              i do not care about the karma itself, at all. but i do care to know if launch/show posts have comment sections with cherry-picked ordering or organic ordering.

                                                                                                              edit 2: i am at 19 points, and now below two grey (<=0 karma) comments (https://news.ycombinator.com/item?id=47326455). whats up dang?

                                                                                                              edit 3 (~1 hour later): you've responded to a handful of other comments and ignored this one as it becomes more and more evident that someone has artificially ordered the comments to ensure that critical comments are at the bottom of the page. it has shattered my perception of show/launch posts to know that you manually curate the comments to form a specific narrative. i really (naively) thought you guys were much more neutral about that sort of thing.

                                                                                                              • dang 15 hours ago
                                                                                                                > you've responded to a handful of other comments and ignored this one

                                                                                                                I hadn't seen this until 30 seconds ago. The assumption of moderator omniscience leads to a lot of mistaken conclusions!

                                                                                                                Sure, we marked the offtopic comments offtopic, which lowers them on the page. This is standard HN moderation. If we didn't do this, then nearly every thread would be choked with something offtopic at the top.

                                                                                                                At the same time, we haven't killed the posts or put them in a "stub for offtopicness" [1] like we otherwise would. They're still here for people who want to read them, while at the same time the main discussion can be about the main topic, which is the startup launch.

                                                                                                                HN is actively moderated and always has been. Downweighting offtopic/generic comments is one of the biggest things we've ever discovered for improving the quality of the threads. For us it's about the quality of the site as a whole, not specific narratives, but of course everyone can (and will) make up their own mind about this. What I can tell you is (a) the way we do these things has been stable for a long time (HN time is measured in decades, not years), and (b) we're always willing to answer questions about it.

                                                                                                                Oh, and (3) - when YC or a YC-funded startup is part of a story, then we moderate less than we otherwise would [2]. We do still moderate, though—we just do it less.

                                                                                                                [1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

                                                                                                                [2] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

                                                                                                                • john_strinlai 15 hours ago
                                                                                                                  im sorry, but i disagree quite strongly with your suggestion that a comment about the unethical behavior of a company is off-topic on a post by that company launching their product.

                                                                                                                  especially when that company wants you to curl | bash their code onto your machine -- potential users deserve to know that despite being a YC-backed company (which would typically be a positive indicator, so people may reduce their scrutiny), they have been caught scraping data they shouldnt be, using that data for marketing, and refusing to respond to anyone who brings it up.

                                                                                                                  but it is your world and i am just living in it, so i will carry on. i appreciate that you did not collapse them.

                                                                                                                  • Imustaskforhelp 14 hours ago
                                                                                                                    Dang, although I really appreciate the work you put in, I am not quite sure that criticism of said company, for genuine reasons and suspicions about it, is offtopic in that company's Launch HN. As John said, I agree with him that our definitions of offtopic seem to vary.

                                                                                                                    But if I may ask, isn't the "moderate less, not more" policy from your point (3) the opposite of what you said about offtopic comments, from how I perceive it?

                                                                                                                    > Sure, we marked the offtopic comments offtopic, which lowers them on the page. This is standard HN moderation. If we didn't do this, then nearly every thread would be choked with something offtopic at the top.

                                                                                                                    >Oh, and (3) - when YC or a YC-funded startup is part of a story, then we moderate less than we otherwise would [2]. We do still moderate, though—we just do it less.

                                                                                                                    I would suggest that the minor disagreements we have come from these two points seeming contradictory to me. I would suggest (if possible) moderating less, as you mention, and letting the ranking order be natural, which in this case might mean john's comments come first, for example. Because by downweighting them you are moderating, and that's one of the concerns that we sort of have.

                                                                                                                    > At the same time, we haven't killed the posts or put them in a "stub for offtopicness" [1] like we otherwise would. They're still here for people who want to read them, while at the same time the main discussion can be about the main topic, which is the startup launch.

                                                                                                                    Also regarding this: I might have to trust ya when you say this, but I do feel like it's within the HN spirit that when a company gets launched, the criticisms of the company and its past get talked about.

                                                                                                                    Off the top of my head, I remember some VPN company some time ago which used TEE encryption by Intel. One of the first comments was about how the guy had a shady past, because he was the former server owner of liberachat and there was some controversy surrounding it, and how they wouldn't want to run said VPN (other comments were about trust in Intel in general).

                                                                                                                    My point is that this might be considered offtopic according to ya now but those were active and quite on top. So maybe I am recollecting events differently but it does seem to have some idea that this doesn't seem offtopic (atleast to me, I could be wrong though, I usually am but still)

                                                                                                                    With all of this in mind, I don't think that its necessarily offtopic Sir. I'd really appreciate it if for better accuracy you can have the flow of comments be natural in this regards in this particular thread as We'd really appreciate it if possible. Thanks!

                                                                                                                    Thoughts?

                                                                                                                    • tomhow 12 hours ago
                                                                                                                      This post is about the launch of a YC company and its product. It's reasonable that when a company launches a product, the discussion is focused on the product that it is launching. We moderate that way whether it's a YC company's Launch HN or anyone else posting a Show HN. Keeping discussion on-topic is one of the most important things we moderators do, and is the main reason HN is a place where people like to participate.

                                                                                                                      Criticisms of the company for past conduct are valid, and we're leaving the criticisms here for everyone to see as per our long-standing policy. But it should be viewed in context: a thread about that event already had 11 hours on the front page 12 days ago. They have been heavily criticized for what they did, are carrying reputational consequences, and they should have learned that the HN community and GitHub users and management are strongly disapproving of this kind of activity. They should be given the chance to learn and reform and be judged for what they do now and into the future.

                                                                                                                      The most on-topic thing to discuss in this thread is the company's product, and there seems to be some good discussion about that.

                                                                                                                      • wtallis 12 hours ago
                                                                                                                        The idea of giving spammers a second chance seems truly bizarre to me. Have you ever un-blocked an email address that you previously blacklisted for spamming you? Do you think recipients of spam from this company want to give them a second chance?

                                                                                                                        I'm not necessarily saying the people behind this should be completely blacklisted from the entire industry, but when a company earns a place on my block list for behavior like that, it's permanent. They need to start over with a different business model. Failure of the company is a reasonable and fair consequence for such scandalous behavior, especially for a company at such an early stage.

                                                                                                                        • oh_fiddlesticks 11 hours ago
                                                                                                                          So a company decided to send cold outbound to a targeted audience from a dataset they gathered from a public social network... so what?

                                                                                                                          "Scandalous behaviour", really? In the same year as the Epstein files?

                                                                                                                          Wishing a company's total demise over such a trivial matter? Honestly?

                                                                                                                          All this prissiness about some unwanted emails...

                                                                                                                          • wtallis 11 hours ago
                                                                                                                            Most startups fail. Some deserve it.
                                                                                                                        • Imustaskforhelp 11 hours ago
                                                                                                                          I think I agree with ya, although I remember that VPN company related to liberachat ownership where something offtopic wasn't handled this way. In general, if both you and dang say something consistent, then I do trust you and Hacker News moderation for being fairly transparent about it, and I really appreciate it. It could very well be that I'm remembering an instance where off-topic was treated as on-topic, and you guys are human after all (underappreciated humans keeping this site usually clean!!).

                                                                                                                          I think some of that is okay, and I won't question it further. We can have some minor differences about what counts as off-topic and on-topic, and I am okay with this difference. I really appreciate the work the moderation team does for Hacker News in aggregate :)

                                                                                                                          And in essence, yeah, marking things off-topic and manually downweighting them is still pretty okay with me for what it's worth. It seems impartial rather than preferential treatment for a YC company, so I suppose that's fair even :)

                                                                                                                          > Criticisms of the company for past conduct are valid, and we're leaving the criticisms here for everyone to see as per our long-standing policy. But it should be viewed in context: a thread about that event already had 11 hours on the front page 12 days ago. They have been heavily criticized for what they did, are carrying reputational consequences, and they should have learned that the HN community and GitHub users and management are strongly disapproving of this kind of activity. They should be given the chance to learn and reform and be judged for what they do now and into the future.

                                                                                                                          I feel like I am quite a forgiving person, actually. They haven't responded to any comments on that thread or on these comments. Now, that could be because they don't want to get themselves into any controversy, and that's totally fine by me, Tomhow.

                                                                                                                          My issue is that you mention they were criticized for what they did 12 days ago and ask me to judge them for what they are now, but 12 days isn't a great period of time, if you may understand, and the only reason they stopped was that they got caught in public, essentially. So it's hard for me to forgive them, especially once I realized they bought the domain intentionally a month earlier; it was the premeditated intent to do what they were doing, and going ahead with it, that I had an issue with. Accidents happen and people are really forgiving, but it's hard to forgive something premeditated in such a short period of time.

                                                                                                                          And I took a few hours of my life trying to get this point across in this thread simply because I want future companies to know that there is a better path than doing something bad with premeditation (accidents will always happen, and again, people are/can be forgiving of those, but this wasn't an accident, sadly).

                                                                                                                          I want future companies to know that the upshot of doing anything clearly bad intentionally isn't worth it. That's simply it. Just don't try to do anything bad intentionally (for more profit) and you are on my good side :)

                                                                                                                          And money can sometimes bend morality, so if anything, my criticism of them also reminds me, if I ever create something, to think about what I have said publicly here and to hold anything I build myself to the same standard I expect from other companies.

                                                                                                                          Even though it's hard to like them, and they might not have gained anything at the moment, they will have moments in the future as well. So I wish that they take a lesson and, from a more truly human perspective (we are all flesh after all), think about the ethics of the situation. It's uncomfortable, but that's what's needed sometimes, imo.

                                                                                                                          Aside from that, I still wish them luck in their life and their project, as they are still human as well. I genuinely hope that in the moments of the future they prove themselves when they get the chance, and that one day they can regain my trust too, if that's the case, haha!

                                                                                                                          It's also worth mentioning that they are Indians and I am Indian too. I am bullish about Indian startups and their culture, and to a degree my nation at the moment, and that's actually why I am harsher on them. I expect better from the people of my nation and, to a degree, my community. I come to this position with love/passion, and in that regard I will continue expecting better from them and wishing them better at the same time.

                                                                                                                          So I hope I could get my point across to you, and once again, I appreciate both your work and dang's work. Have a nice day, Sir!

                                                                                                                          • tomhow 4 hours ago
                                                                                                                            Thanks for sharing your thoughts in such detail. It's an understandable position.

                                                                                                                            Our position is that we give many people second (and more) chances here, whether or not they're YC founders, startup founders, or hackers.

                                                                                                                    • Imustaskforhelp 16 hours ago
                                                                                                                      Adding onto it, my comments are also ranked low. This comment that dang replied to has 4 upvotes and sits fourth from the bottom of this post, and the other comment I made in response to you has 3 upvotes.
                                                                                                                    • Imustaskforhelp 17 hours ago
                                                                                                                      Thanks dang, but can you please explain the two accounts that each wrote a very short comment, one being completely new and the other being only 7 months old and only active in this case?

                                                                                                                      Clearly I am not the only one here, as john_strinlai seems to have reached somewhat the same conclusion as me.

                                                                                                                      Dang, I know you care about this community, so can you please say more about what you think of this in particular as well?

                                                                                                                      I understand that YC companies get preferential treatment; fine by me. But this feels like something larger to me.

                                                                                                                      I have written down everything I could find in this thread, from the same post being shown here 3 days ago as an anywhere.ai link to it now changing to GitHub to skirt the HN rule that the same link can't be posted within a short period of time, and everything else.

                                                                                                                      This feels somewhat intentional, just like the spam issue; I hope you understand what I mean.

                                                                                                                      (If you also feel suspicious, can you then do a basic analysis/investigation with all of these suspicious points in mind, and upload the results in an anonymous way if possible?)

                                                                                                                      I wish you a nice day and await your thoughts on all of this.

                                                                                                                      • dang 15 hours ago
                                                                                                                        I'm happy to answer as best I can! But I'm having trouble understanding what you're specifically asking.

                                                                                                                        If https://news.ycombinator.com/item?id=47327129 and https://news.ycombinator.com/item?id=47328465 don't answer your questions, can you maybe try picking the most important question and making it as specific as you can? Then I can take a crack at that and we can go from there.

                                                                                                                        • Imustaskforhelp 14 hours ago
                                                                                                                          Sure, let me better explain what I'd like, if possible.

                                                                                                                          https://news.social-protocols.org/stats?id=47326101

                                                                                                                          I'd like some information for 1) the time frame from 0-80 upvotes, which looks like the steepest part of the curve, and 2) the time frame of the whole article. And I would like three data points for all of this:

                                                                                                                          So imagine we take every person who upvoted this thread, then find three data points and average them (median, not mean, for better representation) for anonymity purposes:

                                                                                                                          1. The creation date of the accounts

                                                                                                                          2. The karma of the accounts

                                                                                                                          3. The words written by those accounts (optional) [But I have done some work on that, and I have found this to be a good factor for whether someone is truly a bot or not]

                                                                                                                          Because, although you mention that the upvotes are fine, I'd still really appreciate it if we could find some data backing that statement up, so we can hopefully know that nothing fishy is going on. As you may understand, this company has done a lot of fishy stuff in its past, and all the fishy stuff I have talked about in this thread makes me feel that just a slightly deeper look and a bit more transparency would personally be really appreciated, and the community would like it too!

                                                                                                                          Have a nice day, dang, and I look forward to your next comment!

                                                                                                                          • dang 14 hours ago
                                                                                                                            Sorry, but this is much too complicated for me to follow, and I believe I've already answered the main points: what happened to the thread and what was going on with the upvotes and comments.
                                                                                                                            • Imustaskforhelp 13 hours ago
                                                                                                                              That's fair, dang. Sorry if it got too complicated. In that case, I trust ya, based on one of your comments to me here, that the comments are fine. Any case of botting would have been spotted by you guys if there was one.

                                                                                                                              It was just that they raised quite a lot of red flags for me personally with the whole thing.

                                                                                                                              Just to be on the same page: from your observation, is there anything suspicious about the upvotes on this page, in the sense of upvotes from bot accounts, especially during the start of this thread?

                                                                                                                              Can you please say more about this, as confirmation? I still have some disbelief about it, given their shady history and the whole way this thread unfolded. It feels somewhat likely to me that this post got bot-upvoted at some point or another.

                                                                                                                              Or did all of the upvotes come from genuine accounts, and it just got to the front page due to Hacker News's preferential treatment? Can you just talk more about it? Because, if anything, I might still learn something new either way.

                                                                                                                              Dang, has there ever been a YC startup in the history of this website that engaged in shady practices like using bots to upvote their HN posts, or that used bot accounts and got caught?

                                                                                                                  • samuel_grupa_ai 14 hours ago
                                                                                                                    [flagged]
                                                                                                                    • dsalzman 17 hours ago
                                                                                                                      [flagged]
                                                                                                                      • iharnoor 17 hours ago
                                                                                                                        [flagged]
                                                                                                                        • Imustaskforhelp 17 hours ago
                                                                                                                          This is a 7-month-old account that has only responded to this particular comment.

                                                                                                                          And sorry to say, but I don't think that "Lets go!!" is a valid comment; this makes me even more suspicious.

                                                                                                                          Especially given the history and suspicions I already had.

                                                                                                                        • josuediaz 17 hours ago
                                                                                                                          [flagged]
                                                                                                                          • john_strinlai 17 hours ago
                                                                                                                            josuediaz registered 4 minutes ago

                                                                                                                            iharnoor 1 karma, 1 comment, in this thread.

                                                                                                                            two posts pointing out their extremely unethical spam behavior both shot down to the very bottom of the post. apparently suspicious voting behavior.

                                                                                                                            what the hell is going on?

                                                                                                                            • Imustaskforhelp 17 hours ago
                                                                                                                              Yeah I am wondering the same thing.

                                                                                                                              I was gonna comment about this guy and iharnoor, a 7-month-old account who literally only said "lets go" here.

                                                                                                                              This sort of makes me even more suspicious, john, especially about iharnoor.

                                                                                                                              I wasn't responding because I was making archive links of all of this, so that even deleted messages have some basis of confirmation.

                                                                                                                          • sidv1711_ 12 hours ago
                                                                                                                            Let's goo!!