LiteBox is a sandboxing library OS that drastically cuts down the interface to the host, thereby reducing attack surface. It focuses on easy interop of various "North" shims and "South" platforms. LiteBox is designed for usage in both kernel and non-kernel scenarios.
LiteBox exposes a Rust-y, nix/rustix-inspired "North" interface when it is provided a Platform interface at its "South". These interfaces support a wide variety of use cases by allowing any North shim to be connected to any South platform (sketched below).
Example use cases include:
- Running unmodified Linux programs on Windows
- Sandboxing Linux applications on Linux
- Running programs on top of SEV-SNP
- Running OP-TEE programs on Linux
- Running on LVBS
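To make the North/South split concrete, here is a minimal sketch of the shape of such an interface in Rust. Everything below (trait methods, type names, signatures) is invented for illustration; it is not LiteBox's actual API:

```rust
// Hypothetical sketch of the North/South split. All names and signatures
// here are illustrative only, not LiteBox's real interface.

/// "South": the narrow interface a host platform must provide.
trait Platform {
    fn write_console(&mut self, bytes: &[u8]) -> Result<usize, i32>;
    fn wall_clock_nanos(&self) -> u64;
}

/// The library OS: exposes a nix/rustix-flavoured "North" API to the
/// guest program, implemented entirely in terms of the South `Platform`.
struct LibOs<P: Platform> {
    platform: P,
}

impl<P: Platform> LibOs<P> {
    fn new(platform: P) -> Self {
        Self { platform }
    }

    /// A write(2)-shaped North call (console only, for brevity).
    fn write(&mut self, buf: &[u8]) -> Result<usize, i32> {
        self.platform.write_console(buf)
    }

    /// A clock_gettime(2)-shaped North call.
    fn clock_nanos(&self) -> u64 {
        self.platform.wall_clock_nanos()
    }
}

/// One possible South: an ordinary user-space process on a Unix-like host.
/// Other Souths (a TEE, a hypervisor, a kernel module) would implement the
/// same trait, which is what lets any North pair with any South.
struct UnixHost;

impl Platform for UnixHost {
    fn write_console(&mut self, bytes: &[u8]) -> Result<usize, i32> {
        use std::io::Write;
        std::io::stdout()
            .write(bytes)
            .map_err(|e| e.raw_os_error().unwrap_or(-1))
    }

    fn wall_clock_nanos(&self) -> u64 {
        use std::time::{SystemTime, UNIX_EPOCH};
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_nanos() as u64)
            .unwrap_or(0)
    }
}

fn main() {
    let mut os = LibOs::new(UnixHost);
    os.write(b"hello from the North side\n").unwrap();
    println!("nanos since epoch: {}", os.clock_nanos());
}
```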
This might actually be my favourite use: I always thought WSL2 was a kludge, and WSL1 to be somewhat the fulfilment of the "personality modules" promise of Windows NT.
Yup, WSL feels closer to Services for Unix, which has been around since NT 4/5.
It was sad to see WSL2 taking the path of least resistance; that decision has always felt TPM-driven ("we got unexpected success with WSL and people are asking for more, deliver xxx by Q4! No, I don't care _how_ you do it!").
The amount of techno jargon marketing speak in this readme is impressive. I’m pretty well versed in most things computers, but it took me a long time to figure out what the heck this thing is good for. Leave it to Microsoft to try to rename lots of existing ideas and try to claim they’ve invented something amazing when it’s IMHO not all that useful.
With how buggy their flagship OS has become, why would I trust anything else they release to be better? Or even if it does work well now, why should I expect it to stay that way? Microsoft has burned through all possible goodwill at this point, at least for me.
I spent 15 years as a senior dev on the Visual Studio team followed by 5 years on the Xcode team at Apple.
Individual engineers can be talented, professional, and end-user focused. Most of that effort gets lost when PMs refuse to work with each other in a coherent manner. Most of the major issues we ran into weren’t engineering bugs per se, they were the result of management refusing to allow teams to communicate effectively.
When we were first building out the original C# functionality, the C# team refused to talk to the existing compiler teams. I spent more time acting as a go-between than I did solving actual technical problems.
Good people can produce crappy software in that environment.
Not OP, and I generally agree with your assumption, but for Microsoft I don't think it's limited to Windows:
Teams, Office (especially online), OneDrive, SharePoint, Azure, GitHub, LinkedIn: all have become very shitty and partially unusable lately, with an increasing number of weird bugs and problems.
WFH, the flood of dev hiring, increasingly hostile worker relations, a bunch of Web 2.0 folks finally retiring, VC money drying up...
take your pick.
Software is just crappy these days.
/sarcasm
If product->quality_x, I'm okay with employee->?quality_x — but not with either employee->quality_x or employer->!quality_x. A better thing to remember is that people have themselves to feed. Of those 100k engineers, how many can say "no, you don't, Satya, ain't no besmirching my code with slop"?
Maybe not; there are plenty of hard things to do at Microsoft scale: hypervisors (which I guess could count as "OS", though maybe not "Windows" in the consumer-product-line sense), compilers, languages, hardware (since Microsoft is doing that too), browsers (although the hard part is Chromium-based; they probably contribute to it), databases, distributed systems for cloud products, etc. Plenty of hard things to do.
OS is such a broad term, especially when applied to Windows, which is closer to a Linux distro. Is it the kernel? Windows is fine there; by all accounts the issues are higher up. They've had some problems with their update process, which is surprising: historically that team would have been populated by the better engineers. Most of the other problems have been in the shell and UI, where good engineering discipline is not as much to be expected.
Yes, but the OS fundamentals are for Azure first, Windows last.
Azure makes money; 50% of Windows computers are basically free and need to get you to sign up for a subscription somehow. The other 50% are Windows Pro/Enterprise, but MS assumes they'll get that money forever, so it doesn't put any resources into that. In 10 years, the kids switching to Linux on the desktop today will be in charge of the business deals and will switch corporations to Linux, because they're not scared of it like the current business IT leaders.
They are not free. OEM costs money. Hence with every laptop with Windows preinstalled, you pay a fraction to Microsoft, even if you immediately uninstall and add Linux.
I don't think people typically have so much choice about it. Everyone is just trying to feed their families and enjoy their life. The job market is a little tough right now, I think, for software engineers. No?
I know a few personally that left their stable job to be hired and fired in the same month and remain unemployed six months later. Very sad.
What a ridiculous excuse. People who join ICE to brutalize minorities and protestors are just trying to feed their families too, then. No?
Working for Microsoft doesn’t make them bad engineers or bad people, but it does make them Microsoft employees. And they get to bear its reputation whether they want to or not. If it makes them uncomfortable then they should make a change or grow thicker skin.
Oversaturation of the labor supply for software engineers has been looming for a while now. Gen Z was sold on infinite growth in the ZIRP era which was never going to happen, but everyone still jumped in. What we’re seeing is structural unemployment. Not everyone’s gonna make it.
There are companies I wouldn't apply to, even with kids, I think, although it's hard to say; I don't have kids, and apparently there is a mind-shift that happens when you get one. Oracle and Palantir come to mind. But maybe not Microsoft; I don't know about that one. It's probably bad, but maybe not "I prefer to watch my kids starving" kind of bad.
If you do, I can't agree with you.
Also, I wouldn't compare software development for a marketing company with a violent, disagreeable effort. There's bad and there's worse, objectively.
Anyway, not saying you're wrong, but I'm not so quick to judge someone by a job that they probably hate.
Or to wrap 100,000 people in the same blanket. We're all individuals. No one should be judged by the actions of others.
1400 ISIS (the Islamic State) terrorists who made their way to the US, identified by the DHS: https://www.dhs.gov/wow
Look at the list here. 2084 pages already, 12 entries per page: that's 25,000 criminals. They're listing their crimes. 25,000 criminals already arrested is a huge lot.
Be honest with yourself and think about the victims.
I'd say a lot of the people joining ICE do believe the US already has enough criminals that are US citizens, and want to help stop the insanity that is mass uncontrolled migration.
Out of 600,000 people arrested by ICE, as I understand it, already 25,000 are violent criminals that we know of. That's nearly 5% of all those arrested: 1 in 20 people.
Where do you draw the limit? You want full open borders, but at what cost?
I read a lot of "Arrested for: kidnapping, rape".
Is, say, 1 in 100 people coming in being a criminal OK?
Where do you draw the line?
Dems are literally fighting so that sanctuary cities do not hand over convicted criminals to ICE: so that one day they can be released in the streets.
Is this what you want to fight for?
Are you that convinced, from your moral high ground where you judge Microsoft employees and ICE agents, that you'll be on the right side of history?
You are missing the entire point. In a justice system, a single innocent in prison is a thousand times worse than a free criminal. This is where most people draw the line if they think about it. Because when you put innocents under arrest, suddenly you are no better than dictatorships and terrorist states.
Real justice is investing in a security system that tracks, investigates, and condemns actual criminals, in a targeted way, so that honest people can live securely and free. Believe it or not, plenty of countries manage to do that pretty well.
Well, considering the administration has repeatedly called Alex Pretti and Renee Nicole Good "TERRORISTS", I would consider "1400 ISIS terrorists" a highly dubious statistic. In fact, in a brief search for a reputable source for your claim of "1400 ISIS terrorists", I've not found any. Link???
You ask "Is, say, 1 in 100 people coming in being a criminal OK?"
Well, considering that about 1.4% of the overall population is currently incarcerated in our "Land of the Free", yeah, 1 in 100 would be an improvement!
People are against ICE in growing numbers because of their tactics: running around hiding their identities like bandits and Gestapo thugs; ignoring court orders; constant lies; constant, blatant violations of the 1st, 2nd, and 4th Amendments; violations of the rights of people such as immigrants following the asylum process, and of the several citizens who have been wrongly arrested; and the terrible, torturous treatment, and the joy and pride this corrupt, disgusting administration takes in being cruel to people!
Yes.
It really isn't difficult to figure out who the bad guys are, at the moment.
They seem to be alienating a lot of their users right now, across a lot of different products. There's a significant surge in open-source software right now, and Linux is seeing rather more people coming over than usual. Their customer base seems tired of the game.
This is not about individual employees. It’s in the nature of being an employee to be beholden to what’s incentivized by their company’s management and structure.
Don't employees have any say in the design, implementation, and quality bar? Management folks are employees as well. But perhaps they prefer the paycheck to voicing concerns about bad decisions. Nothing wrong with that, but throwing all the blame on faceless management and structure seems not right, since it all evolves from collective activity.
“Show me the incentives and I’ll tell you the outcome” is exactly about this situation. People who do what they feel is right may be able to do so as long as it doesn’t conflict with company policy, but when it does (say you spend a little more time on perfecting a feature), it gets noticed and eventually corrected.
Skilled engineers in an environment that doesn't care about quality may become dull, or simply be forced by the system they are in to not care. In practice they are just like us and so I assume they would find outlets in their free time.
I haven't spoken to a Microsoft developer in a while because there are few in the hacker communities I'm around (go figure?) so not entirely sure though. I want to understand.
These giant firms aren’t uniform monoliths, especially MS.
Microsoft has some clear ‘A’ teams (compilers, industry leading languages, F*, pioneering web tech, OS innovations, etc), but also ‘B’, ‘C’ and ‘D’ teams, and MS is often reactively chasing industry trends. They’re industry leaders, but also victims of their Office, Windows, and Cloud teams pooping on one another at critical market junctures.
In .Net land we can inspect their library code. A number of these ‘Enterprise’ packages around their ‘Enterprise’ solutions are … just passable. Often something you’d write a proper version of to avoid clear issues. When our juniors are delivering better than their official offerings, in light of wizardry being displayed elsewhere, I think we are seeing systematic effects of corporate culture and customer base.
>Kernel and low level stuff are actually very stable and good.
This. A while ago a leaked build of Win 11 tailored for the Chinese government, called "Windows G", was shared; it had all the ads, games, telemetry, anti-malware and other bullshit removed, and it flew on 4GB RAM. So Microsoft CAN DO IT if they actually want to; they just don't want to for users.
You can get something similar yourself at home by running all the debloat tools out there, but since they're not officially supported, either you'll break future Windows updates or the future Windows updates will break your setup, so it's not worth it.
https://www.windowscentral.com/software-apps/windows-11/leak...
This was talked about publicly back in the Vista days (I cannot find the articles now): Microsoft has commitments to its hardware partners to help keep the hardware market from collapsing.
So they are not incentivized to keep Win32_Lean_N_Mean, but instead to put up artificial limits on how old the hardware that runs W11 can be.
I have no insider knowledge here; this is just a thing which gets talked about around major Windows releases, historically.
If anything, Microsoft has a lot of problems because they support a wide variety of crappy hardware and allow just about anyone to write kernel-level SW (drivers). Not sure if this has changed, but they even used to run in ring 0.
This was most evident back in the 90s when they shipped NT4: extremely stable, as opposed to Win95, which introduced the infamous BSOD but supported everything. NT4's HW support was on par with Linux (i.e. almost nothing from the cheap vendors).
NT4 started with a kernel mode, user mode, and security model, and drivers had to be written and validated accordingly.
9x, ME, and even the compatibility parts of XP (up to some service pack, IIRC; might have been SP2) would still allow DOS-mode real-mode BS for any driver that wanted it.
I loathe all the dang software modems, too cheap to ship a decent device in a single unit, instead slicing off the user's already constrained resources.
Heh, who else remembers the golden benchmark, a US Robotics 56k hw modem (the only one I could find locally was an external one too) to get online in either NT4 or Linux. But when I finally did save for one, I could fully leave Windows behind in 1998.
>Microsoft has commitments to their hardware partners to help keep the hardware market from collapsing.
Citation needed since that makes no logical sense. You want to sell your SW product to the most common denominator to increase your sales, not to a market of HW that people don't yet have. Sounds like FUD.
>but instead to put up artificial limits on how old of hardware can run W11
They're not artificial. POPCNT/SSE4.2 became a hard requirement starting with Windows 11 24H2 (2024) (but that's for older CPUs), and only Intel 8th gen and up have well-functioning support for Virtualization-Based Security (VBS), HVCI (Hypervisor-protected Code Integrity), and MBEC (Mode-Based Execution Control). That's besides TPM 2.0, which isn't actually a hard requirement or a feature used by everyone; the other ones are far more important.
So at which point do we consider HW-based security a necessity instead of an artificial limit? With the ever-increasing vulnerabilities and attack vectors, you gotta rip the band-aid off at some point.
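If you're curious whether a given machine has the CPU features named above, Rust's standard library can probe them at runtime. This is only a local check; the actual Windows 11 requirement is enforced by the installer, not by code like this:

```rust
// Runtime probe (x86_64 only) for the CPU features discussed above.
fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        println!("popcnt: {}", std::arch::is_x86_feature_detected!("popcnt"));
        println!("sse4.2: {}", std::arch::is_x86_feature_detected!("sse4.2"));
    }
}
```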
> You want to sell your SW product to the most common denominator to increase your sales, not to a market of HW that people don't yet have.
A key difference between regular software and Windows is that almost nobody buys Windows; they get it pre-installed on a new PC. So a new PC purchase means a new Windows license.
What is missing here that was present when this same computer was running Windows 10?
Yes, you can bypass the HW checks to install it on a Pentium 4 if you want; nothing new here.
>What is missing here that was present when this same computer was running Windows 10?
All the security features I listed in the comment above.
This computer had the security features that you listed while it was running Windows 10, and now that it is running Windows 11 it is lacking them?
(I'm not trying to be snarky. That's simply an astonishing concept to me.)
> > What is missing here that was present when this same computer was running Windows 10?
> All the security features I listed in the comment above.
Are they as important as stated? Microsoft says so. Everyone here loves and trusts them, right?
I'm running 11 IoT Ent LTSC on a T420; it runs pretty okay.
I've been starting with Tiny11 and then running the debloat scripts against it. Reduces the memory footprint to about 2GB and have found zero compatibility problems with doing this. You just have to use curl or something to download a browser because you won't even have Edge.
> Kernel and low level stuff are actually very stable and good.
In their intended applications, which might or might not be the ones you need.
The slowness of the filesystem that necessitated a whole custom caching layer in Git for Windows, or the slowness of process creation that necessitated adding "picoprocesses" to the kernel so that WSL1 would perform acceptably (and still wasn't enough for it to survive): those are entirely due to the kernel's architecture.
It's not necessarily a huge deal that NT makes a bad substrate for Unix, even if POSIX support has been in the product requirements since before Win32 was conceived. I agree with the MSR paper[1] on fork(), for instance. But for a Unix-head, the "good" in your statement comes with important caveats. The filesystem in particular is so slow that Windows users will unironically claim that ripgrep is slow and build their own NTFS parsers to sell as the fix[2].
[1] https://lwn.net/Articles/785430/
[2] https://nitter.net/CharlieMQV/status/1972647630653227054
https://github.com/Microsoft/WSL/issues/873#issuecomment-425...
But there's another issue, which is what cripples Windows for dev! NTFS has a terrible design flaw: small files, under 640 bytes, are stored in the MFT itself. The MFT ends up having serious lock contention, so lots of small-file changes are slow. This screws up anything Unixy, and git, horribly.
WSL1 was built on top of that problem, which was one of the many reasons it was slow as molasses.
Also why ReFS and "dev drive" exist...
> NTFS has a terrible design flaw which is the fact that small files, under 640 bytes, are stored in the MFT.
Ext4 also stores small (~150B) files inside the inode[1], and so do a number of other filesystems[2]? NTFS was unusually early to the party, but if you’re right that it’s problematic there then something else must also be wrong (perhaps with the locking?) to make it so.
[1] https://www.kernel.org/doc/html/latest/filesystems/ext4/inli...
[2] https://en.wikipedia.org/wiki/Comparison_of_file_systems#All..., the “Inline data” column.
This is not due to slowness of the filesystem. Native NTFS tools are much faster than Unix ones in some situations. The issue is that running Unix software on Windows will naturally have a performance impact. You see the same thing in reverse using Wine on Linux. Windows uses a different design for I/O, so it requires software to be written with that design in mind.
> Native ntfs tools are much faster than Unix ones in some situations. The issue is that running Unix software on windows will naturally have a performance impact. You see the same thing in reverse using Wine on Linux.
Not true. There are increasingly more cases where Windows software, written with Windows in mind and only tested on Windows, performs better atop Wine.
Sure, there are interface incompatibilities that naturally create performance penalties, but a lot of stuff maps 1:1, and Windows was historically designed to support multiple user-space ABIs; Win32 calls are broken down into native kernel calls by kernel32, advapi32, etc., for example, similar to how libc works on Unix-like operating systems.
It's pretty typical these days for software, particularly games of the DX9-11 eras, to perform better on Wine/Proton than they do under native Windows on the same hardware.
Right, by “file system” here I mean all of the layers between the application talking in terms of named files and whatever first starts talking in terms of block addresses.
Also, as far as my (very limited) understanding goes, there are more architectural performance problems than just filters (and, to me, filters don’t necessarily sound like performance bankruptcy, provided the filter in question isn’t mandatory, un-removable Microsoft Defender). I seem to remember that path parsing is accomplished in NT by each handler chopping off the initial portion that it understands and passing the remaining suffix to the next one as an uninterpreted string (cf. COM monikers), unlike Unix where the slash-separated list is baked into the architecture, and the former design makes it much harder to have (what Unix calls) a “dentry cache” that would allow the kernel to look up meanings of popular names without going through the filesystem(s).
NTFS will perform directory B+-tree lookups (this is where it walks the path) until it finds the requested file. The Cache Manager caches these B+-trees.
From there, it hits the MFT, finds the specific record for the file, loads the MFT record, and ultimately returns the FILE_OBJECT to the I/O Manager and it bubbles up the chain back to (presumably) Win32. The MFT is just a linear array of records, which include file and directories (directory records are just a record with directory = true, essentially).
Obviously simplified. Windows Internals will be your friend, if you want to know more.
Thanks for the explanation! Linux, meanwhile, will[1] in the normal case walk a sequence[2] of hash tables (representing incomplete but up-to-date views of directories) before hitting the filesystem’s vtable or the block I/O layer at all, and on the fast path[3] taking no locks other than the RCU read lock.
[1] https://www.kernel.org/doc/html/latest/filesystems/path-look...
[2] I was under the impression that it could look up an entire path at once when I wrote my grandparent comment; it seems I was wrong, which on reflection makes sense given you can move directories.
[3] https://www.kernel.org/doc/html/latest/filesystems/path-look...
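For intuition, here is a toy model of that dentry-cache walk. It assumes nothing about Linux's real data structures (which use RCU, hashed names, and so on); it only shows the shape of the per-component lookup with a (parent, name) cache in front of the filesystem:

```rust
use std::collections::HashMap;

// Toy dentry cache: path lookup walks one component at a time, and each
// (parent inode, name) -> child inode edge is memoised in a hash table so
// hot paths never reach the filesystem at all.
type InodeId = u64;

struct DentryCache {
    edges: HashMap<(InodeId, String), InodeId>,
}

impl DentryCache {
    fn lookup(&self, path: &str) -> Option<InodeId> {
        let mut cur: InodeId = 0; // inode of "/"
        for comp in path.split('/').filter(|c| !c.is_empty()) {
            // A real kernel falls back to the filesystem on a miss;
            // the toy version just reports failure.
            cur = *self.edges.get(&(cur, comp.to_string()))?;
        }
        Some(cur)
    }
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert((0, "usr".to_string()), 1);
    edges.insert((1, "bin".to_string()), 2);
    let cache = DentryCache { edges };
    assert_eq!(cache.lookup("/usr/bin"), Some(2));
    assert_eq!(cache.lookup("/usr/lib"), None);
}
```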
Even with Defender etc. off, it is not fun. Lots of small-file IO brings it to its knees. Some want to blame the Windows I/O system; I don't know. What I do know is that when people choose NTFS, it is because they don't have an alternative. Nobody chooses it based on its quality attributes. I dare say there is no NTFS system that is faster than an EXT4 system.
NTFS on Linux should be near-par with ext4 on Linux.
Remember, I said the _file system_ was just fine. It's that extensible architecture above all file systems on NT that causes grief.
The only method to 'turn off' Defender is to use DevDrive, which enforces ReFS, and even then you only get async Defender; it's not possible to disable it completely.
If even MS internal teams would rather avoid it, it seems like it isn't a great offering: https://news.ycombinator.com/item?id=41085376#41086062
> Example use cases include:
> * Running unmodified Linux programs on Windows
> * ...
That won't work if the unmodified Linux program assumes that mv replaces a file atomically; NTFS can't offer that. You can read more, if you wish, in 'Inside the Windows NT File System' by Helen Custer, page 15.
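For reference, this is the pattern that comment is talking about, sketched in Rust. On POSIX systems, rename(2) atomically replaces the destination; whether a given Windows filesystem honours the same guarantee is exactly the portability question being raised:

```rust
use std::fs;
use std::io::Write;

// Write-temp-then-rename: the replacement is all-or-nothing because
// rename(2) on POSIX atomically swaps the destination; readers see the
// old or the new file, never a torn one.
fn atomic_replace(path: &str, contents: &[u8]) -> std::io::Result<()> {
    let tmp = format!("{path}.tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush data before the rename makes it visible
    fs::rename(&tmp, path) // the step assumed to be atomic
}

fn main() -> std::io::Result<()> {
    atomic_replace("config.json", b"{\"version\": 2}\n")
}
```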
This isn't supposed to replace Windows, and it isn't a GUI desktop operating system at all. I doubt anyone working on this has anything to do with the modern Windows desktop UX.
OP wasn't suggesting it was, just that the lack of quality in one significant area of the company's output leads to a lack of confidence in other products that they release.
Given anything the size of Microsoft, it's not a good assumption. MS has large research teams that produce really interesting things. Their output is unrelated to released products.
Maybe we need secure attestation for sandbox to be protected against compromised host :)
It does sound hard, and might need to employ homomorphic encryption, with HW help, for any memory access, after the code has also been verified unaltered through (uncompromised) HW attestation.
I know Windows 11 is super buggy and riddled with issues (and the Copilot mess), but I'm starting to feel there's a weird echo chamber around these forums that doesn't even bother looking at what the product or repository is, and automatically assumes it's bad 'cause it's from Microsoft.
Once the share of bad software coming out of a shop rises over 50%, this becomes a sane assumption, since whatever comes out of that shop is more likely than not to be trash. So in the case of MS it does seem a reasonable assumption to make.
I use Windows 11 all day and can't agree it's buggy at all, compared to Windows of the past it's very reliable. The worst I can say is they've made some poor decisions about the defaults around ads in the UI. But all of that is easy to turn off.
Windows is ultimately a lot more complex, and not open source. This also builds on the Linux ecosystem, so even if it comes from Microsoft, I imagine engineering culture is different from that on Windows and especially their online platforms (that's even worse than Windows if you ask me!).
Microsoft IS a massive corporation with so many people, business units, and departments.
A comment like yours is just like saying: "I know a buggy open-source software, why would I trust that other open-source project? The open-source community burned all possible goodwill".
Except that a company, no matter how heterogeneous, has an overarching organization, whereas the open-source community doesn't.
There is no CEO of open source, there are no open-source shareholders, there are no open-source quarterly earnings reports, there are no open-source P&G policies (with or without stack ranking), and so on.
Microsoft doesn't have a very good track record with security or privacy. Maybe it works, but yeah you'll probably get screwed over at some point.
Still, the fact that it's open source is a good thing. People can now take that code and make something better (ripping out the AI, for example) or just use bits and pieces for their own totally unrelated projects. I can't see that as anything but a win. I have no problem giving shitty companies credit where it's due, and they've done a good thing here.
What's dumb, on top of everything, is needing to store non-special, standard operating procedures in specific AI folders and files when wanting to work with AI tooling.
It is a standard in the sense that they will all read it (although, last I checked, you still need to adjust the default config with Gemini). But feature support varies between tools. For example, only Claude supports @including other files.
It doesn't say much really. At this point we can assume almost every project has some generated code in it. Unless you're sure that every single author hates the idea and there are no external contributions. Agent configuration just makes it clear.
https://github.com/microsoft/litebox/blob/main/.github/copil...
> Extremely simple changes do not require explicit unit tests.
I haven't used Copilot much, because people keep saying how bad it is, but generally, if you add escape hatches like this without hard requirements for when the LLM can take them, it won't follow that rule in an intuitive way most of the time.
As agent, or writing everything for me, not yet.
Yeah, I tried various very sane-looking instruction files when starting to use Copilot 6 months ago. It turned out not to be really useful: it mostly followed the rules anyway, but it also often forgot to. So it turns out, especially with the fast turnaround of models today, it was better to just forgo these instruction files.
It's a library that is linked to in place of an operating system - so whatever interface the OS provided (syscalls+ioctls, SMC methods, etc.) ends up linked / compiled into the application directly, and the "external interface" of the application becomes something different.
This is how most unikernels work; the "OS" is linked directly into the application's address space and the "external interface" becomes either hardware access or hypercalls.
Wine is also arguably a form of "library OS," for example (although it goes deeper than the most strict definition by also re-implementing a lot of the userland libraries).
So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox, and run it on SEV-SNP. Or take an OP-TEE TA, link it to LiteBox, and run it on Linux.
The notable thing here is that it tries to cut the interface in the middle down to an intermediate representation that's supposed to be sandbox-able - ie, instead of auditing and limiting hundreds of POSIX syscalls like you might with a traditional kernel capabilities system, you're supposed to be able to control access to just a few primitives that they're condensed down to in the middle.
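For intuition, here is a sketch of that narrowing in Rust. The enum, its variants, and the policy below are all made up for illustration; they are not LiteBox's actual primitives:

```rust
// Sketch of the "narrow waist": the guest's many POSIX-shaped calls funnel
// into a handful of host primitives that a sandbox can audit. The variants
// here are invented for illustration, not LiteBox's real set.
enum HostRequest<'a> {
    MapMemory { pages: usize },
    Write { handle: u32, buf: &'a [u8] },
    Read { handle: u32, buf: &'a mut [u8] },
    Exit { code: i32 },
}

/// The single audited choke point between guest and host. Policy (quotas,
/// handle validity, deny-by-default) lives here instead of being spread
/// across hundreds of syscall entry points.
fn host_call(req: HostRequest<'_>) -> Result<usize, i32> {
    match req {
        HostRequest::MapMemory { pages } if pages <= 1024 => Ok(pages * 4096),
        HostRequest::Write { handle: 1, buf } => Ok(buf.len()), // stdout only
        HostRequest::Read { .. } => Ok(0), // stub: EOF
        HostRequest::Exit { code } => std::process::exit(code),
        _ => Err(-1), // everything else is denied by default
    }
}

fn main() {
    let granted = host_call(HostRequest::MapMemory { pages: 4 }).unwrap();
    println!("mapped {granted} bytes");
    assert!(host_call(HostRequest::MapMemory { pages: 1 << 20 }).is_err());
}
```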
> So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox
If you have to recompile, you might as well choose to recompile to WASM+WASI. The sandboxing story here is excellent due to its web origins. I thought the point of LiteBox is that recompilation isn’t needed.
Looking more closely, it looks like there are "North" shims for some existing ABIs (currently Linux and OP-TEE), but others (Windows, for example) would still require recompilation.
> If you have to recompile, you might as well choose to recompile to WASM+WASI.
I disagree here; this ignores the entire swath of functionality that an OS or runtime provides. Like, just as an example, I can't "just recompile" my OP-TEE TA into WASM when it uses the KDF function from the OP-TEE runtime.
I had previous experience with WASM on TEE. Just use the foreign function interface. Remember WASM isn’t native code so you still need other native code to run WASM (such as wasmtime), and you can import other native functions into WASM through the runtime.
Any pure code (WASM or otherwise) that does not perform any input/output is by definition useless. It consumes electricity to do computation and there is no way to communicate its results.
It's absolutely trivial to make a very strict sandbox - just a simple, mathematical Turing machine is 100% safe.
The hard part is having actual capabilities, and only WASI (which is much smaller than WASM) helps here, and it's not clear why it would be any better than other options, like LiteBox. Especially given that WASM does have a small but real overhead.
I think that's an OS in the form of a library, like Wine for example. From what I get from the description, it allows you to run programs on your real OS while exposing only a cut-down API to your actual system, to reduce the attack surface.
Aliens come to visit. I have to tell one the difference between an app linked against a "library os" running on a hypervisor, and an app running on a kernel. I couldn't do it with a straight face.
It'll be interesting if MS allows writing e.g. WFP callout drivers via LiteBox without requiring attestation signing. It would still work in kernel mode, unlike NetworkExtensions on macOS.
I think parent poster was referring to an actual library, i.e. where you would borrow books.
Honestly far less interesting to know I was wrong.
That's also what I thought this was, and came to the comments expecting to see something neat about why libraries might need bespoke operating systems.
Ah right! Yeah, I did think that too..., like locked down so random patrons couldn't do this or that. I was thinking that was quite a pivot for MS though too...
yeah, same here, I was like "wow what an interesting side to their business, a whole operating system intended to serve public and academic libraries!"
A library OS is an OS that is linked directly to your program instead of being a separate program accessed through a syscall to kernel mode. About the same as a “unikernel”, but a more recent term.
Basically it lets your program run directly on a hypervisor VM, though this one will also run as a Linux/Windows/BSD process.
My understanding of this is that it is a sandbox: providing a common interface, as if it were an OS for the program to run inside, but preventing the program from using the OS directly.
What is unclear is whether it uses its own common ABI or the one of the host OS.
I don't know why, but from the project description I get a little bit of a feeling that this is another vibe-coded project.
No mention of starting with a design specification & then tied to formal verification the whole way?
It sounds interesting and a step forward (never heard of a library OS till now), but why won't this run into hundreds of the same security bugs that plague Windows if it's not spec'd and verified?
Is it similar to e.g. gVisor? Like, would gVisor count as a library OS, too?
The lack of integrated sandboxing in Windows compared to Android/iPhone is still, frankly, unacceptable. I've become increasingly paranoid about running any application on Windows (not that your average Linux distro is even remotely better), and yet Apple and Google seem to be far, far ahead in user permissions (especially with GrapheneOS, god bless that team) and isolation of processes.
Consumers and businesses deserve better. It's crazy to me that in 2026 Notepad++ being compromised means as much potential damage as it does, still.
The sandboxing on mobile platforms puts the OS vendor in a special position to enforce a monopoly on apps and features. Apple enforces it aggressively, while Google only reluctantly so far. It also prevents the user from exerting full control of the system. Apple does it by locking things down directly, while Google punishes you for owning your devices with attestation.
There has to be a better way. I think Linux's Flatpak is a reasonable approach here, although the execution might be rather poor. I want a basic set of trusted tools that I can do anything with, and to run less trusted tools like GUI programs in sandboxes with limited filesystem access.
There is also sandboxing configuration via Intune for enterprises.
Those are policy decisions not really connected to the sandboxing technology. They control what sort of signing the system will accept and make it so that it only runs things they approve, and they only approve things that are sandboxed a certain way. The exact same sandboxing could be used with a system where an admin user can decide what gets to run and what kind of sandboxing is required for each thing.
> I've become increasingly paranoid about running any application on Windows (not that your average linux distro is even remotely better)
Linux excels over Windows in the area of security by a wide margin, I have no qualms about running an app on Linux versus Windows, any day of the week.
No, Windows has consistently been ahead of Linux for many years in terms of average-user desktop security, from binary hardening to designs like secure desktop, because average Windows users do not typically have curated software selections, so you assume the worst. (When I wrote the original "binary hardening via compiler flags" RFC for NixOS over 10 years ago, almost everything in it was already done on Windows and had been for years.)
It's still not ideal; macOS takes it even further and actually allows things like "storing secrets on disk in a way that can't be read by random programs", because it can e.g. make policy decisions based on code signatures, which are widely deployed. None of this exists in pretty much any Linux distro; you can literally just impersonate password prompts, simply override 'sudo' in a user's shell to capture their password silently, copy every file in $HOME/.config to your evil server; setuid by its very definition is an absolute atrocity, etc.
Linux distros make it easy for people to live in their own chosen curated software set, but the security calculus changes when people want to run arbitrary and non-curated software.
You can make a pretty reasonably secure Linux server by doing your homework, it's nowhere close to impossible. An extremely secure server also requires a bit of hardware homework. The Linux desktop, however, is woefully behind macOS and Windows in terms of security by a pretty large margin, and most of it is by design.
(In theory you can probably bolt a macOS-like system onto Linux using tools like SCM_RIGHTS/pidfds/code signatures, along with delegated privilege escalation, no setuid, signature-based policy mechanisms, etc. But there are a lot of cultural and software challenges to overcome to make it all widely usable.)
> Linux excels over Windows in the area of security by a wide margin
No, this is wrong, though it might be true if you are talking about a Linux package manager vs. a random Windows .exe from the internet. But if you are talking about Secure Boot, encrypted disk, sudo, etc., Windows is more secure. Though it looks like https://amutable.com/ will make Linux more secure, like Windows.
Edit: some insecure things on Linux: D-Bus (kwallet etc.), sudo, fprint, "secure boot".
Any executable you run has access to any file in your home directory, including SSH private keys, secrets in config files, browser cookies, passkeys: all of it. That includes the thousands of npm modules installed as a transitive dependency of at least one tool you use that brings Node as a dependency.
Windows at least has a proper ACL system; on Linux it takes just a single compromised executable to lose everything.
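To underline how low the bar is, the following ordinary, unprivileged Rust program enumerates (and could just as easily read) the user's SSH keys; under the classic Unix permission model, nothing in a default desktop Linux stops it:

```rust
use std::fs;

// Ordinary, unprivileged file I/O: under the classic Unix permission model,
// nothing stops a process running as you from walking your dotfiles.
fn main() -> std::io::Result<()> {
    let home = std::env::var("HOME").unwrap_or_else(|_| ".".into());
    for entry in fs::read_dir(format!("{home}/.ssh"))? {
        // Listing only; fs::read() on any of these would succeed just as easily.
        println!("readable: {}", entry?.path().display());
    }
    Ok(())
}
```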
Nope, that's a very fair poke at MS. They've gone so far into AI adoption that it's become absurd.
- They have VPs posting on LinkedIn about rewriting existing code using AI, adhering to arbitrary metrics of an x% rewrite and laying off y% of the engineers that used to work on it.
- Renaming one of their major flagship product lines (MS Office) to "MS Copilot Apps 365".
- Forcing AI features on users despite not wanting it, and overriding OS configuration that should turn it off.
- Executives publicly shaming the general public for not wanting "all the AI all the time".
A library OS to me would typically mean it's aimed at hosting a single user program on bare hardware. I don't see that here, but maybe I'm just confused.
It's both; it's aimed at hosting a single user program on another userspace, but it also seems to have its own kernel?
The "North" part seems to be what I think you'd traditionally think of as a library OS, and then the "South" part seems to be shims to use various userlands and TEEs as the host (rather than the bare hardware in your example).
I'm really confused by the complete lack of documentation and examples, though. I think the "runners" are the closest thing there is.
A library OS is an operating system design where traditional OS services are provided as application-linked libraries, rather than a single, shared kernel serving all the programs.
IIUC, if you have the source you can recompile said Windows app with LiteBox to statically link in the Windows OS kernel dependencies, so it'll run on any compatible processor regardless of OS (since it won't be making syscalls anymore). It's a unikernel basically.
That's the theory, but I don't know how far LiteBox is along to supporting that workflow.
> It focuses on easy interop of various "North" shims and "South" platforms.
For replacing Wine on Linux, the "North" would be the kernel32 API or similar; the "South" would be the Linux syscall API.
However, this is meant as a library, and thus requires linking the Windows program to it. And Wine is more than the system interface; it has all the GUI parts etc. of the Win32 API.
I know we're not supposed to complain about comment quality, but -- I came here to look for interesting technical analysis but instead it's Slashdot level snipes about Microsoft the company. And yes, I also dislike Windows and Microsoft generally but this looks like a very interesting project and I'm frankly frustrated at the level of discussion here, it's juvenile. This has nothing to do with Windows, and it looks like most people didn't even read past the title.
I'll play with this later today after work and see how mature it is and hopefully have something concrete and constructive to say. Hopefully others will, too.
I am with you on that. HN is becoming a "14-year-old edgy mini-tech" Facebook.
"Microsoft bad, Linux good" kinds of comments are all over the place. There are no in-depth discussions about projects anymore. Add the people linking their blogs only to sell you their services for an imaginary problem, and you get HN 2026.
It's maybe time to find another tech medium. If you know one, I would be glad to know.
I read this type of (sour) comment more and more on this forum. To me it reads very cynical and I wonder what the author is trying to say with this. Are you perhaps negatively impacted by automatic coding?
I read your comment as ignorant of AI's capabilities and of the negative outcomes of relying on vibe coding.
The implication is that MS is forcing AI adoption on users at a point of absurd recklessness, and that they should not be trusted - especially not blindly trusted.
Perhaps the reason you're seeing comments similar to my original comment more frequently is that actual software engineers know the capabilities of AI and how bad a decision it is to assume it's as good as a competent engineer. Many engineers have had years of experience working with management who, while they have legitimate concerns about the capabilities of software (as they are ultimately responsible for it and the financials), are turning to vibe coding and relying on it. Non-technical folks think software is kinda easy to do, and LLMs generating code just proves their assumptions.
Can you define “non-technical folks” for me? Because last I checked there are a LOT of “technical” people who aren’t software engineers, and a lot of “non-technical” people who don’t believe software is “kinda easy to do.”
It’s really frustrating to see comments like this with absolutely zero sourcing, but just stated as fact.
It’s giving “I saw ads on LinkedIn and it made me anxious about a world where I can’t make six figures being in control of what tools people have access to”
I'm not sure whether Microsoft, the makers of Windows 95 (after which I stopped taking them seriously), are the sharpest tool in the box when it comes to security.
A Library Operating System (LibOS) is a type of operating system that runs in the address space of applications, allowing a small, fixed set of abstractions to connect the library OS to the host OS kernel. This approach offers the promise of better system security and more rapid independent evolution of OS components. A LibOS can run significant applications, such as Microsoft Excel, PowerPoint, and Internet Explorer, with significantly lower overhead than a full VM. It can also address many of the current uses of hardware virtual machines at a fraction of the overhead.
LibOS is lightweight, with extremely short startup time, and can be used to run Linux programs, making it a versatile option for various applications. It is designed to provide compatibility and sandboxing without the need for VMs, making it a lightweight alternative to containers and VMs.
The Library Operating System for Linux was announced on the Linux kernel mailing list, indicating its official recognition and support within the Linux community.
Reddit discussion: https://www.reddit.com/r/linux/comments/1qw4r71/microsofts_n...
Project lead James Morris announcing it on social.kernel.org: https://social.kernel.org/notice/B2xBkzWsBX0NerohSC
Individual engineers can be talented, professional, and end-user focused. Most of that effort gets lost when PMs refuse to work with each other in a coherent manner. Most of the major issues we ran into weren’t engineering bugs per se, they were the result of management refusing to allow teams to communicate effectively.
When we were first building out the original C# functionality, the C# team refused to talk to the existing compiler teams. I spent more time acting as a go-between than I did solving actual technical problems.
Good people can produce crappy software in that environment.
Teams, Office (especially online), One Drive, SharePoint, Azure, GitHub, LinkedIn, all became very shitty and partially unusable with increasing number of weird bugs or problems lately.
WFH, flood of Dev hiring, increasingly hostile worker relations, a bunch of web 2.0 folks finally retiring, VC money drying up...
take your pick.
Software is just crappy these days.
/sarcasm
Azure makes money, 50% of Windows computers are basically free and need to get you to sign up for a subscription some how. The other 50% are Windows Pro/Enterprise, but MS assumes they'll get that money forever so doesn't put any resources into that. In 10 years the kids switching to Linux on desktop today will be in charge of the business deals and switch corporations to linux because they're not scared of it like the current business IT leaders
I know a few personally that left their stable job to be hired and fired in the same month and remain unemployed six months later. Very sad.
Working for Microsoft doesn’t make them bad engineers or bad people, but it does make them Microsoft employees. And they get to bear its reputation whether they want to or not. If it makes them uncomfortable then they should make a change or grow thicker skin.
Oversaturation of the labor supply for software engineers has been looming for a while now. Gen Z was sold on infinite growth in the ZIRP era which was never going to happen, but everyone still jumped in. What we’re seeing is structural unemployment. Not everyone’s gonna make it.
If you do, I can't agree with you.
Also I wouldn't compare software development for a marketing company with a violent disagreeable effort. There's bad and there's worse, objectively.
Anyway, not saying you're wrong, but I'm not so quick to judge someone by a job that they probably hate.
Or to wrap 100,000 people in the same blanket. We're all individuals. No one should be judged by the actions of others.
1400 ISIS (the islamist state) terrorists who made their way to the US, identified by the DHS.
https://www.dhs.gov/wow
Look at the list here. 2084 pages already, 12 entries per page: that's 25 000 criminals. They're listing their crimes. 25 000 criminals already arrested is a huge lot.
Be honest with yourself and think about the victims.
I'd say a lot of the people joining ICE do believe the US has already enough criminals that are US citizens and want to help stop the insanity that is mass uncontrolled migration.
Out of 600 000 people arrested by ICE, as I understand it already 25 000 are violent criminals that we know of. That's more nearly 5% of all those arrested. 1 in 20 people.
Where do you draw the limit? You want full open borders, but at what cost?
I read a lot of "Arrested for: kidnapping, rape".
Is, say, 1 in 100 people coming in being a criminal OK?
Where do you draw the line?
Dems are literally fighting so that sanctuary cities do not hand over convicted criminals to ICE: so that one day they can be released in the streets.
Is this what you want to fight for?
Are you that convinced, from your moral high ground where you judge Microsoft employees and ICE agents, that you'll be on the right side of history?
The real justice is investing in a security system that tracks, investigates, and condemn actual criminals, in a targetted way, so that honest people can live securely and free. Believe it or not, plenty of countries manage to do that pretty well.
You ask "Is, say, 1 in 100 people coming in being a criminal OK?"
Well considering that about 1.4% of the overall population is current incarcerated in our "Land of the Free", yeah 1 in 100 would be an improvement!
People are against ICE in growing numbers because of their tactics of run around hide their identities like bandits and gestapo thugs. Their ignoring of court orders, constant lies, constant blatant violations of the 1st, 2nd, 4th amendments constantly, and violations of rights of people such as immigrants following the processes of asylum, several citizens that have been arrested wrongly, and the terrible tortuous treatment an the joy and pride this corrupt disgusting administration takes in being cruel to people!
Yes.
It really isn't difficult to figure out who the bad guys are, at the moment.
I haven't spoken to a Microsoft developer in a while because there are few in the hacker communities I'm around (go figure?) so not entirely sure though. I want to understand.
Microsoft has some clear ‘A’ teams (compilers, industry leading languages, F*, pioneering web tech, OS innovations, etc), but also ‘B’, ‘C’ and ‘D’ teams, and MS is often reactively chasing industry trends. They’re industry leaders, but also victims of their Office, Windows, and Cloud teams pooping on one another at critical market junctures.
In .Net land we can inspect their library code. A number of these ‘Enterprise’ packages around their ‘Enterprise’ solutions are … just passable. Often something you’d write a proper version of to avoid clear issues. When our juniors are delivering better than their official offerings, in light of wizardry being displayed elsewhere, I think we are seeing systematic effects of corporate culture and customer base.
This. A while ago a build of Win 11 was shared/leaked that was tailored for the Chinese government called "Windows G" and it had all the ads, games, telemetry, anti-malware and other bullshit removed and it flew on 4GB RAM. So Microsoft CAN DO IT, if they actually want to, they just don't want to for users.
You can get something similar yourself at home running all the debloat tools out there but since they're not officially supported, either you'll break future windows updates, or the future windows updates will break your setup, so it's not worth it.
https://www.windowscentral.com/software-apps/windows-11/leak...
So they are not incentivized to keep Win32_Lean_N_Mean, but instead to put up artificial limits on how old of hardware can run W11.
I have no insider knowledge here, just this is a thing which get talked about around major Windows releases historically.
This was most evident back in the 90s when they shipped NT4: extremely stable as opposed to Win95 which introduced the infamous BSOD. But it supported everything, and NT4 had HW support on par with Linux (i.e. almost nothing from the cheap vendors).
9x, me, and even compatibility parts of XP (up to some service patch IIRC? Might have been SP2) would still allow dos mode realtime BS for any driver that wanted.
I loath all the dang software modems too cheep to ship a decent device in a single unit and instead slice off the user's already constrained resources.
Citation needed since that makes no logical sense. You want to sell your SW product to the most common denominator to increase your sales, not to a market of HW that people don't yet have. Sounds like FUD.
>but instead to put up artificial limits on how old of hardware can run W11
They're not artificial. POPCNT / SSE4.2 became a hard requirement starting with Windows 11 24H2 (2024) (but that's for older CPUs), and only intel 8th gen and up have well functioning support for Virtualization-Based Security (VBS), HVCI (Hypervisor-protected Code Integrity), and MBEC (Mode-Based Execution Control). That's besides the TPM 2.0 which isn't actually a hard requirement or feature used by everyone, the other ones are way more important.
So at which point do we consider HW-based security a necessity instead of an artificial limit? With the ever increase in vulnerabilities and attack vectors, you gotta rip the bandaid at some point.
A key difference between regular software and Windows is that almost nobody buys Windows, they get it pre-installed on a new PC. So a new PC purchase means a new Windows license.
What is missing here that was present when this same computer was running Windows 10?
Yes, you can bypass HW checks to install it on a pentium 4 if you want, nothing new here.
>What is missing here that was present when this same computer was running Windows 10?
All the security features I listed in the comment above.
This computer had the security features that you listed while it was running Windows 10, and now that it is running Windows 11 it is lacking them?
(I'm not trying to be snarky. That's simply an astonishing concept to me.)
> > What is missing here that was present when this same computer was running Windows 10?
> All the security features I listed in the comment above.
Are they as important as stated? Microsoft says so. Everyone here loves and trusts them, right?
I'm running 11 IoT Ent LTSC on a some T420; it runs pretty okay.
In their intended applications, which might or might not be the ones you need.
The slowness of the filesystem that necessitated a whole custom caching layer in Git for Windows, or the slowness of process creation that necessitated adding “picoprocesses” to the kernel so that WSL1 would perform acceptably and still wasn’t enough for it to survive, those are entirely due to the kernel’s archtecture.
It’s not necessarily a huge deal that NT makes a bad substrate for Unix, even if POSIX support has been in the product requirements since before Win32 was conceived. I agree with the MSR paper[1] on fork(), for instance. But for a Unix-head, the “good” in your statement comes with important caveats. The filesystem is in particular so slow that Windows users will unironically claim that Ripgrep is slow and build their own NTFS parsers to sell as the fix[2].
[1] https://lwn.net/Articles/785430/
[2] https://nitter.net/CharlieMQV/status/1972647630653227054
https://github.com/Microsoft/WSL/issues/873#issuecomment-425...
But there's another issue which is what cripples windows for dev! NTFS has a terrible design flaw which is the fact that small files, under 640 bytes, are stored in the MFT. The MFT ends up having serious lock contention so lots of small file changes are slow. This screws up anything Unixy and git horribly.
WSL1 was built on top of that problem which was one of the many reasons it was slow as molasses.
Also why ReFS and "dev drive" exist...
Ext4 also stores small (~150B) files inside the inode[1], and so do a number of other filesystems[2]? NTFS was unusually early to the party, but if you’re right that it’s problematic there then something else must also be wrong (perhaps with the locking?) to make it so.
[1] https://www.kernel.org/doc/html/latest/filesystems/ext4/inli...
[2] https://en.wikipedia.org/wiki/Comparison_of_file_systems#All..., the “Inline data” column.
Not true. There are increasingly more cases where Windows software, written with Windows in mind and only tested on Windows, performs better atop Wine.
Sure, there are interface incompatibilities that naturally create performance penalties, but a lot of stuff maps 1:1, and Windows was historically designed to support multiple user-space ABIs; Win32 calls are broken down into native kernel calls by kernel32, advapi32, etc., for example, similar to how libc works on Unix-like operating systems.
Also, as far as my (very limited) understanding goes, there are more architectural performance problems than just filters (and, to me, filters don’t necessarily sound like performance bankruptcy, provided the filter in question isn’t mandatory, un-removable Microsoft Defender). I seem to remember that path parsing is accomplished in NT by each handler chopping off the initial portion that it understands and passing the remaining suffix to the next one as an uninterpreted string (cf. COM monikers), unlike Unix where the slash-separated list is baked into the architecture, and the former design makes it much harder to have (what Unix calls) a “dentry cache” that would allow the kernel to look up meanings of popular names without going through the filesystem(s).
From there, it hits the MFT, finds the specific record for the file, loads the MFT record, and ultimately returns the FILE_OBJECT to the I/O Manager and it bubbles up the chain back to (presumably) Win32. The MFT is just a linear array of records, which include file and directories (directory records are just a record with directory = true, essentially).
Obviously simplified. Windows Internals will be your friend, if you want to know more.
[1] https://www.kernel.org/doc/html/latest/filesystems/path-look...
[2] I was under the impression that it could look up an entire path at once when I wrote my grandparent comment; it seems I was wrong, which on reflection makes sense given you can move directories.
[3] https://www.kernel.org/doc/html/latest/filesystems/path-look...
If even MS internal teams rather want to avoid it, it seems like it isn't a great offering. https://news.ycombinator.com/item?id=41085376#41086062
Remember, I said the _file system_ was just fine. It's that extensible architecture above all file systems on NT that causes grief.
The only method to 'turn off' Defender is to use DevDrive, which enforces ReFS, and even then you only get async Defender, it's not possible to completely disable.
> Example use cases include:
> * Running unmodified Linux programs on Windows
> * ...
That won't work if the unplugged Linux program assumes that mv replaces a file atomically; ntfs can't offer that.
You can read more if you wish in 'Inside the Windows NT File System' by Helen Custer, page 15.
OP wasn't suggesting it was, just that the lack of quality in one significant area of the company's output leads to a lack of confidence in other products that they release.
It does sound hard, and might need to employ homomorphic encryption with hw help for any memory access after code has been also verifiably unaltered through (uncompromised) hw attestation.
A comment like yours is just like saying: "I know a buggy open-source software, why would I trust that other open-source project? The open-source community burned all possible goodwill".
There is no CEO of open source, there are no open-source shareholders, there are no open-source quarterly earnings reports, there are no open-source P&G policies (with or without stack ranking), and so on.
Still, the fact that it's open source is a good thing. People can now take that code and make something better (ripping out the AI for example) or just use bits and pieces for their own totally unrelated projects. I can't see that as anything but a win. I have no problem giving shitty companies credit where its due and they've done a good thing here.
https://github.com/microsoft/litebox/blob/main/.github/copil...
I haven't used Copilot much, because people keep saying how bad it is, but generally if you add escape hatches like this without hard requirements of when the LLM can take them, they won't follow that rule in a intuitive way most of the time.
As agent, or writing everything for me, not yet.
This is how most unikernels work; the "OS" is linked directly into the application's address space and the "external interface" becomes either hardware access or hypercalls.
Wine is also arguably a form of "library OS," for example (although it goes deeper than the most strict definition by also re-implementing a lot of the userland libraries).
So for example with this project, you could take a Linux application's codebase, recompile it linked to LiteBox, and run it on SEV-SNP. Or take an OP-TEE TA, link it to LiteBox, and run it on Linux.
The notable thing here is that it tries to cut the interface in the middle down to an intermediate representation that's supposed to be sandboxable; i.e., instead of auditing and limiting hundreds of POSIX syscalls like you might with a traditional kernel capabilities system, you're supposed to be able to control access to just the few primitives they're condensed down to in the middle.
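Roughly, the shape of that design in code (a sketch only; the trait and method names here are invented for illustration, not LiteBox's actual API):

    // "South": the narrow platform interface the sandbox has to police.
    // A real design would have a few more primitives, but the point is
    // that it's a handful, not hundreds of syscalls.
    trait Platform {
        fn map_memory(&mut self, len: usize) -> Result<*mut u8, PlatformError>;
        fn read_object(&mut self, handle: u32, buf: &mut [u8]) -> Result<usize, PlatformError>;
        fn write_object(&mut self, handle: u32, buf: &[u8]) -> Result<usize, PlatformError>;
    }

    #[derive(Debug)]
    struct PlatformError;

    // "North": a POSIX-flavoured shim. Calls like read(2) reduce to the
    // primitives above, so access control happens once, at the Platform
    // boundary, instead of per-syscall.
    fn posix_read(platform: &mut dyn Platform, fd: u32, buf: &mut [u8]) -> Result<usize, PlatformError> {
        // fd-to-handle translation elided; a real shim would keep a table.
        platform.read_object(fd, buf)
    }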
If you have to recompile, you might as well choose to recompile to WASM+WASI. The sandboxing story here is excellent due to its web origins. I thought the point of LiteBox is that recompilation isn’t needed.
> If you have to recompile, you might as well choose to recompile to WASM+WASI.
I disagree here; this ignores the entire swath of functionality that an OS or runtime provides. Like, just as an example, I can't "just recompile" my OP-TEE TA into WASM when it uses the KDF function from the OP-TEE runtime.
The hard part is having actual capabilities, and only WASI (which is much smaller than WASM) helps there, and it's not clear why it would be any better than other options like LiteBox. Especially since WASM does have a small but real overhead.
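For what "actual capabilities" look like in practice, here's a short sketch using the cap-std crate (the capability-oriented filesystem API from the WASI ecosystem; the directory path is made up): a program can only open files relative to directory handles it was explicitly granted.

    use cap_std::ambient_authority;
    use cap_std::fs::Dir;

    fn main() -> std::io::Result<()> {
        // The single explicit escape hatch: grant access to one directory.
        let dir = Dir::open_ambient_dir("/var/app-data", ambient_authority())?;
        // Every further open is relative to `dir`; attempts to escape,
        // e.g. via "../../etc/passwd", are rejected by the resolver.
        let _config = dir.open("config.toml")?;
        Ok(())
    }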
Honestly far less interesting to know I was wrong.
That's also what I thought this was, and came to the comments expecting to see something neat about why libraries might need bespoke operating systems.
Basically it lets your program run directly on a hypervisor VM, though this one will also run as a Linux/Windows/BSD process.
What is unclear is whether it uses its own common ABI or the one of the host OS. I don't know why, but from the project description I get a bit of a feeling that this is another vibe-coded project.
It sounds interesting and a step forward (I'd never heard of a library OS till now), but why won't this run into hundreds of the same security bugs that plague Windows if it's not spec'd and verified?
Is it similar to e.g. gVisor? Like would gVisor count as a library OS, too?
Consumers and businesses deserve better. It's crazy to me that in 2026 Notepad++ being compromised means as much potential damage as it does, still.
There has to be a better way. I think Linux's Flatpak is a reasonable approach here, although the execution might be rather poor. I want a basic set of trusted tools that I can do anything with, and to run less trusted tools like GUI programs in sandboxes with limited filesystem access.
There is also sandboxing configuration via Intune for enterprises.
Linux excels over Windows in the area of security by a wide margin; I have no qualms about running an app on Linux versus Windows, any day of the week.
You can make a pretty reasonably secure Linux server by doing your homework, it's nowhere close to impossible. An extremely secure server also requires a bit of hardware homework. The Linux desktop, however, is woefully behind macOS and Windows in terms of security by a pretty large margin, and most of it is by design.
(In theory you can probably bolt a macOS-like system onto Linux using tools like SCM_RIGHTS/pidfds/code signatures, along with delegated privilege escalation, no setuid, signature-based policy mechanisms, etc. But there are a lot of cultural and software challenges to overcome to make it all widely usable.)
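To make the SCM_RIGHTS part concrete, here's a rough Linux-only sketch via the libc crate (std's ancillary-data API is still unstable; the function name and glibc field types are assumptions on my part): passing an already-open file descriptor to another process over a Unix socket, the basic building block for that kind of delegated, capability-style access.

    use std::os::fd::RawFd;

    // Send `fd_to_send` across the already-connected Unix socket `sock`.
    fn send_fd(sock: RawFd, fd_to_send: RawFd) -> std::io::Result<()> {
        // One dummy payload byte; the fd itself travels as ancillary data.
        let payload = [0u8; 1];
        let mut iov = libc::iovec {
            iov_base: payload.as_ptr() as *mut libc::c_void,
            iov_len: payload.len(),
        };
        // Control buffer sized for exactly one file descriptor.
        let mut cmsg_buf =
            vec![0u8; unsafe { libc::CMSG_SPACE(std::mem::size_of::<RawFd>() as u32) } as usize];

        let mut msg: libc::msghdr = unsafe { std::mem::zeroed() };
        msg.msg_iov = &mut iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg_buf.as_mut_ptr() as *mut libc::c_void;
        msg.msg_controllen = cmsg_buf.len();

        unsafe {
            let cmsg = libc::CMSG_FIRSTHDR(&msg);
            (*cmsg).cmsg_level = libc::SOL_SOCKET;
            (*cmsg).cmsg_type = libc::SCM_RIGHTS; // "here, take this capability"
            (*cmsg).cmsg_len = libc::CMSG_LEN(std::mem::size_of::<RawFd>() as u32) as usize;
            std::ptr::copy_nonoverlapping(
                &fd_to_send as *const RawFd as *const u8,
                libc::CMSG_DATA(cmsg),
                std::mem::size_of::<RawFd>(),
            );
            if libc::sendmsg(sock, &msg, 0) < 0 {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(())
    }

The receiver gets a new fd referring to the same open file description, so you can hand a process access to exactly one file without it ever holding the path or the permission to open it.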
No, this is wrong, though it might be true if you're comparing a Linux package manager against a random Windows .exe from the internet. But if you're talking about Secure Boot, encrypted disks, sudo, etc., Windows is more secure, though it looks like https://amutable.com/ will make Linux more secure in the way Windows is.
Edit: Some insecure things on Linux: D-Bus (kwallet etc.), sudo, fprint, "secure boot".
Windows at least has a proper ACL system; on Linux it takes just a single compromised executable to lose everything.
* Many of them are part of families of crates maintained by the same people (e.g. rust-crypto, windows, rand or regex).
* Most of them are popular crates I'm familiar with.
* Several are only needed to support old compiler versions and can be removed once the MSRV is raised.
So it's not as bad as it looks at first glance.
If Microsoft states that they don't have any for a project like this, I would be wary of taking it too seriously.
- They have VPs posting on Linkedin about rewriting existing code using AI and adhering to arbitrary metrics of a x% rewrite and laying off y% of engineers that used to work on it.
- Renaming one of their major flagship product lines, MS Office, to MS Copilot Apps 365.
- Forcing AI features on users despite not wanting it, and overriding OS configuration that should turn it off.
- Executives publicly shaming the general public for not wanting "all the AI all the time".
Edit: Also, beware of the unsorted uniq count:
The "North" part seems to be what I think you'd traditionally think of as a library OS, and then the "South" part seems to be shims to use various userlands and TEEs as the host (rather than the bare hardware in your example).
I'm really confused by the complete lack of documentation and examples, though. I think the "runners" are the closest thing there is.
A library OS is an operating system design where traditional OS services are provided as application-linked libraries, rather than a single, shared kernel serving all the programs.
That's the theory, but I don't know how far LiteBox is along to supporting that workflow.
> It focuses on easy interop of various "North" shims and "South" platforms.
For replacing Wine on Linux, the "North" would be the kernel32 API or similar, and the "South" would be the Linux syscall API.
However, this is meant as a library, thus it requires linking the Windows program to it, and Wine is more than the system interface; it has all the GUI parts etc. of the Win32 API.
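For a rough idea of what that North-to-South mapping looks like (purely illustrative; this is neither LiteBox nor Wine code, and the flag translation is drastically simplified):

    use std::ffi::CString;

    // Win32's GENERIC_READ access right, as a kernel32-style caller passes it.
    const GENERIC_READ: u32 = 0x8000_0000;

    // A kernel32-flavoured "North" entry point implemented in terms of the
    // Linux "South" syscall API: CreateFile semantics reduced to open(2).
    fn create_file_a(path: &str, desired_access: u32) -> std::io::Result<i32> {
        let c_path = CString::new(path).expect("path contains NUL");
        let flags = if desired_access == GENERIC_READ {
            libc::O_RDONLY
        } else {
            libc::O_RDWR
        };
        let fd = unsafe { libc::open(c_path.as_ptr(), flags) };
        if fd < 0 {
            Err(std::io::Error::last_os_error())
        } else {
            Ok(fd)
        }
    }

The real work in Wine is everything this leaves out: path conversion (C:\ to /), share modes, security attributes, overlapped I/O, and then the whole GUI surface on top.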
Use Linux or BSD and ignore that approach; it's vendor lock-in* into their "library OS".
I'll play with this later today after work and see how mature it is and hopefully have something concrete and constructive to say. Hopefully others will, too.
"Microsoft bad, Linux good" kind of comments are all over the place. There is no more in depth discussions about projects anymore. Add the people linking their blogs only to sell you thier services for an imaginary problem, and you get HN 2026.
Maybe it's time to find another tech media outlet. If you know of one, I'd be glad to hear about it.
I wonder if they (the industry as a whole) will eventually make being able to freely use a PC a subscription, bastardizing "freedom" completely.
I have to use Windows at my day job
and my god, I'd prefer Windows 3.1
I read your comment as ignorant of AI's capabilities and of the negative outcomes of relying on vibe coding.
The implication is that MS is forcing AI adoption on users at a point of absurd recklessness, and that they should not be trusted - especially not blindly trusted.
Perhaps the reason you're seeing comments like my original one more frequently is that actual software engineers know the capabilities of AI and how bad a decision it is to assume it's as good as a competent engineer. Many engineers have years of experience working with management who, while they have legitimate concerns about the capabilities of software (since they're ultimately responsible for it and the financials), are now turning to vibe coding and relying on it. Non-technical folks think software is kind of easy to do, and LLMs generating code just proves their assumptions to them.
It’s really frustrating to see comments like this with absolutely zero sourcing, but just stated as fact.
It’s giving “I saw ads on LinkedIn and it made me anxious about a world where I can’t make six figures being in control of what tools people have access to”
https://news.ycombinator.com/item?id=45077654 - "Generated comments and bots have never been allowed on HN"
LibOS is lightweight, with extremely short startup times, and can run Linux programs, making it a versatile option for various applications. It is designed to provide compatibility and sandboxing without the need for VMs, as a lightweight alternative to both containers and full VMs.
The Library Operating System for Linux was announced on the Linux kernel mailing list, which gave it visibility within the kernel community.