This is going to be a huge chilling factor for employees. You'd no longer be able to dissent, or discuss anything non-work-related, with even the slightest expectation of privacy.
Yes, they could have accessed logs before, but there's a difference between directed checking after incidents and active surveillance at scale.
As much as it's funny to dunk on Meta, this type of surveillance is becoming the norm. Failed startups are selling all their emails, chats, commits, etc. for companies to train on. Most job offers now come with statements about how you don't have rights to your likeness or your personal network. I think most people assume that's for photo ops, but... yeah. I expect more and more of this: products and product features rolling out with this as a focus.
Companies have shown us that IP going to AI providers is acceptable. Once you cross that line, your thought workers are assets, not people.
I don't know about the US, but in France you are allowed to have personal data on your work computer.
Though you have to label it as personal (like creating a « Personal » folder or label), and your employer can still access it in case of suspicion, but they must do so in your physical presence, accompanied by a witness (generally a representative of the employees).
So you theoretically don’t have full privacy on this computer but you can’t be sanctioned for this usage.
I don't think we have sweeping regulations about it, at least in California.
Most companies I've worked at have a policy of some "reasonable personal use" being permitted. The concern is usually focused on the other way around: Companies do not want their IP on your personal machines.
They can certainly look at whatever is on their own machines, however, regardless of whether it is your personal data or not.
One large caveat: if you do any work on your company's equipment, they may well own it, regardless of how relevant it is to the company. It's one of the legal tests used to judge the ownership of your work.
It is even worse in France: if you code open source "on the side" of your work, at home, the company that employs you may claim copyright over it. I had to add an explicit exclusion of this copyright claim to my job contracts to protect my personal work.
That was a few years back, dunno if that was fixed.
AFAIK it's the same in the USA, that's why one of the first questions when interviewing with a company is to ask them about their moonlighting policy if you do want to work on a side project.
Same in Germany, although the employer can forbid this, they need to do so explicitly. Most employers don't forbid personal data on work machines or using your work email for personal things.
Stuff like this is why France has a ceiling on the market cap of GenAI companies it produces. Imagine if Huggingface/Mistral could fully operate in a low-regulation environment.
Enjoy your red tape frogs. "Live to work" anglo protestant work ethic followers will complete the necessary economic destruction of rude "work to live" cheese eating surrender monkeys.
This is our payback for Charles de Gaulle, Foucault, and Jacques Lacan (it's hard to rank these three based on damage done to western society)
Not having AI companies is a reasonable trade-off for not having all of my data, including my full DNA sequence, recorded 24/7 with absolutely zero care for privacy or protection and shared with everyone who has some marginal amount of money to buy it.
That's... poorly crafted mumbo jumbo without any underlying sense, even ignoring the insults. Can't handle the existence of a society where quality of life is a higher priority (and you can see it on the ground very well) than some sum in an account, meaningless titles, rat-race achievements, or office zero-sum games?
It's obviously an unwitting parody account. Calling yourself "Der Einzige" while reciting an incoherent script of internet clichés is indistinguishable from satire -- hilariously unintentional parody.
Already ten years ago, I got an email from a webshop I had used once, informing me they were closing down. They'd happily sell me the customer database, if I were interested. Mind you, they were so desperate that they made this offer to all their customers. It's anecdotal, and only tangentially related, but my point is that companies blatantly selling your data isn't exactly a new thing, and it's not really AI-related either. They've been doing this for a long time; it just usually got less publicity.
I know right, so much pain and horror has been unleashed in the world by Meta… I have zero sympathy for their employees. Someone should’ve said no to developing this tech in the first place but here we are.
It's not like people have an unlimited number of places to work, even if they have Meta on their resume. Many of my colleagues (myself included) had struggled in the job market before landing at Meta. If the choice is work for Meta or suffer more tumult in the hiring market, it's easy to understand why many might take the offer even with the moral implications. I used to bring up politics in the office with coworkers, and many people are simply unaware of the consequences of the company's products. There are a few different categories these people fall into, but the main ones I saw in the office:
1) Chinese H1B holders who are happy to be working in the US at all, and generally apolitical (or view anything as better than the status quo of where they come from)
2) Just normal people who are interested in their own lives and have never been trained to think about the world in a big picture way (some overlap between 1&2 exist of course)
It's very Western of us to always be tracking the consequentiality of our actions, even when we're just a cog in the wheel at BigCo. I think it's the right thing to do, but this sort of reasoning is largely absent in Eastern cultures, and even for some in the West, including the well educated. It's hard to blame individuals when they are either rightfully consumed by worrying about their own welfare or, for whatever reason, not as hyperaware or woke as we can be in the West. Growing up I liked imposing my political philosophies on everyone; maturity is understanding that even objectively righteous values are only useful for the right types of minds.
On the contrary, once someone has truly been made aware of the ramifications of their actions, it's more difficult for me to extend my sympathy to them. I consider Mark and Priscilla to be fully implicated, based on their exposure to the harm they're actively, willingly, knowingly causing. Other employees may never get that memo, though; people obviously avoid political talk in the workplace.
Feels good to read the "ex-" part in your sentence. It'd be analogous to my supervisor sitting right behind me and keeping a super dense log of everything I do. No fucking way, ever.
This is a naive take. Do you think it stops with just Metamates (lmao, that's what they call themselves) being surveilled? Nope. This is exactly the type of thing that software ICs should reject in solidarity. Being happy with BadCompanyX trampling employee expectations directly allows GoodCompanyY to enact the same policies.
I'm happy to see the metamates (lol) receiving the same pain they inflict on others. Maybe it will teach them a lesson in solidarity.
You can't have solidarity about a bad thing with the people who are doing the bad thing! They have to stop doing the bad thing first! That's how solidarity works!
Don't expect any solidarity to come from such people; they literally sold out humanity for slightly higher salaries. They made their beds; the least they can do is feel bad.
Why do you think they don't fully know what they are doing? They are smart folks. Now, we all know how everybody needs to be the hero of their own story, but self-deception only gets you so far in life; your subconscious will give you shit.
Don't invoke some mystery where simple greed is a perfectly sufficient explanation, along with little worry about others; some would use the word "selfish" too. US society at large seems to me to be structured that way: there is no safety net for the unlucky, healthcare varies a lot based on disposable cash and job, and good education is only for the rich.
I thought mass quitting in solidarity would happen when programmers realized how their work is being used to train AI to replace them. How many quit because of that? Doesn't seem like many.
Apparently, money wins over principles for 99% of us. How is this different, and how are we better than Meta employees?
I don't think the two things are comparable. While it would be inconvenient for me personally if I was replaced by AI, it would be an enormous social good as the resources saved could go somewhere else. The same could not be said about everyone under constant surveillance by some megacorp or the government.
Maybe in 2010 or 2015, but in 2026? Nobody is quitting their high-paying job when the job market is this rough. A bubble has burst, and there just aren't the tech jobs out there that there used to be.
And employers know this, so they are enacting all kinds of draconian policies, because they know employees can't just leave the job and also keep their families fed.
The job market is at 2019 levels. This rhetoric is nice, but it doesn't stack up. Yes, it's not at 2021 levels, which is when companies overhired and brought in a bunch of people they would not have hired before.
If only there were some way for workers in this profession to form some type of JOIN (but, like, a vertical version?) between different sets of workers, even crossing company boundaries, so that workers could coordinate to ensure that everyone would quit at once, and therefore have any power at all to block anti-worker edicts.
No. It would be best if it included the higher-ups too. I think we all just assume that the C-suite, anyone who might talk to the legal department, and HR (medical info) are exempted. Or maybe Meta is just that stupid that they haven't exempted anyone.
There are large organizations at Meta focused on basic research & design (FAIR, Open Compute, PyTorch, etc) and giving back to the community. Not everyone is maximizing revenue.
Like all of us these people make a cost-benefit analysis when it comes to their choice of employer and how much it suits their purposes and personal priorities like giving back to the community.
This is just another factor they’ll have to grapple with in their analysis.
I’m sure some of them will find it a bridge too far but not enough to really matter. The work will continue as will the expansion of Meta and the negative externalities that it produces.
I already assume that on a work computer everything I'm doing could be monitored by work IT. At every job I've had, I've made a point of not using work hardware for anything I even remotely thought someone at the job might object to. Instead I use my own hardware for that kind of thing - I own a smartphone, I own multiple computers, this is not hard to do.
When I worked at a startup that had some internal conflict between the software engineers and management, someone made a Signal group to chat about the issues among the software engineers privately and everyone joined that group with their own Signal accounts, without any kind of issue.
This actually came up with multiple companies I worked at in Sweden. Apparently the law here is quite strict that you _can_ use your computer for personal matters and that your employer is not allowed to spy on you on those matters.
So they can monitor your email and Slack server-side, but not client-side stuff that doesn't touch their servers. However, if you use their VPN, they can also monitor your DNS requests and every website you visit. Client-side telemetry is limited to a few things, though those can include which applications you have installed (like Spotify), for security reasons, or which USB sticks are plugged in.
Not really from the perspective of my own risk/reward calculation. I don't know in advance what's going to be considered an "incident" that will make corporate IT suddenly want to search my work computer. Better to simply have a policy of never using a computer my work controls for personal data, especially when I already have my own computers for that that I use regardless of what job I happen to be working at.
Keep in mind this isn't just about personal data on work hardware. It also leads to things like "we noticed you didn't move your mouse or type anything for 45 minutes, what were you doing?" type of micromanagement.
Yes, but I cannot imagine Meta cares about chilling their employees. They're deep into the "extract more value" phase and are no longer bringing in the cutting edge talent.
At this point employees should be kept in cold storage to acclimate, so as to prevent shock from any more chilling announcements. It will also cut down on bathroom breaks.
Until the day when Zuckerberg meets you, and his Ray Ban glasses profile your face and pull up that comment on your exit interview as pertinent information.
His eyes glaze over and he just reads that in his corner vision instead of listening to you, and you get snubbed forevermore.
> I insulted him in my mandatory Exit Interview form from HR when I resigned.
How can they legally mandate an exit interview when you resigned? Is it part of the employment contract? What would have happened if you showed them the finger and not participated?
Nothing happens; it's optional. However, if you want to remain eligible for rehire, it doesn't hurt to do it. It doesn't take long, and you don't really have to say anything.
In my experience at other companies, recruiters (and pretty much everyone else) have no idea that someone has been blacklisted until you've done all of your interviews and told HR to hire that person. That's when they tell you the person is on some kind of shit list and can't be hired. It was an awkward conversation with someone who had basically been told we'd be making an offer soon.
No, it was company-specific. Basically, that person used to work for our company, years prior, in a different office in a different country.
But I also had a different situation where we decided to hire someone, only to find out that we couldn't because he'd been let go from another company owned by our parent company, and his severance agreement said he couldn't work for the same group of companies for 12 months. I think he was genuinely unaware that we were part of the same group (it was a huge corporation), and it just never came up in any conversation until HR tried to put together the paperwork for him.
What? Hiring is a contract between employer (company entity) and employee. No individual "you" can hire anybody except through the company's official process. If HR says "no we won't extend an offer," a lowly HM extending an offer would be clear-cut fraud.
Managers usually have the authority to bind the company to an employment contract. Even if they don't, the rule of "apparent authority" often means the employee can still sue.
In the USA this is mostly theoretical since HR could immediately fire the employee due to at-will employment.
But in Canada, it's a much bigger issue due to labour protections.
e.g. Many managers at American multinationals gave assurances over email to employees about work-from-home arrangements. Then the company does a huge RTO push.
When the employee refuses, HR discovers they can't fire the employee without a hefty buyout.
Best not to give assurances if you're managing a multinational team.
>>Managers usually have the authority to bind the company to an employment contract
Is that an American thing? I've been a manager for years and never heard of that happening. I didn't even know how much the people I managed were paid.
I believe it happens more often in Canada. Here's a case where the RTO ultimatum was ruled constructive dismissal, because the manager made a verbal agreement to amend the terms of employment.
Don't confuse employees with execs. It's a gigantic company with almost 80k employees.
Most cultures around the world are acutely aware that the actions and opinions of their leaders are not a reflection of the behaviors and opinions of regular citizens.
Question: I have heard that at some tech companies using internal chat software, the general practice is for IT to set messages to delete automatically at the end of the day. In Google Chat this is a feature called "turn off history", and the idea behind it is that it reduces the paper trail when there are investigations into the company doing something potentially monopolistic or otherwise shady.
If keystrokes are captured, isn't this a double-edged sword where maybe the company might be inadvertently collecting evidence against itself if there's an investigation and the investigators want to collect keystrokes?
Any fallout or monetary damages you could sue for, a company like Meta can probably pay and still turn its huge profits. It seems like these companies do little to hide their shady actions at all.
Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.
I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
> Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.
> I work at a tech firm in India
First I wondered how you could have such low expectations of privacy; then you answered my question. What you need in India is more unionization and a fight against corruption. It is getting worse here in Europe, but in India you do not have the protections that we have. Without them you will have no rights.
You will have to fight to get rights at your job, just as Europeans are going to have to fight to keep theirs.
I am a European in Europe and I expect the same. Why would I assume otherwise? The company laptop is full of spyware, starting from the OS. I have no reason to consider it "mine", and no desire to do so. If I want to do anything private (including things that my company would not like) I can do so from my private devices.
Europe is a big place, but in my area of Europe it is very illegal to monitor employees this way. If you were to be fired for something that illegal surveillance turned up, I would consider it a good thing - with the settlement money you could take a couple years of vacation.
> with the settlement money you could take a couple years of vacation.
In many EU countries, even if privacy protection is strong on paper, the settlement will be so low compared to the US that you won't be able to afford any vacation.
I've never worked a software development job where I didn't have a company-provided machine that I installed Linux on. I installed the OS, I have root on the machine, I wiped it and returned it empty when I was leaving the job.
Lucky you, I guess. At all the companies I've worked for, I had a company-provided Windows laptop where the OS was managed by IT. The degree of freedom (e.g. what software I could install, which websites were blocked) varied.
Whether they should or shouldn't, you have to expect that your company has root on your work device or at least some sort of corporate admin profile that gives them access to everything on the device and all attached peripherals. This has been pretty standard at IT / tech companies for as long as I've been in the workforce. I personally wouldn't do anything personal on a work computer, from sending personal E-mails all the way up to storing nudes on it. Why do that when a separate personal computer is cheap and solves the problem entirely?
EDIT: I remember, an example of this actually came up a while ago on HN. An Apple employee had to return a device unwiped, due to legal discovery, but the device had intimate pictures on it[1]. Oops! Don't do that, people.
Ask your IT department what they're tracking and they'll tell you. And yet I assume you still continue to go to work and do not actively seek out non-surveilling companies. By "everybody," maybe I should clarify that I mean "the majority" instead.
That's fine but realize you are not representative of the average tech worker or indeed any white collar worker such as those we are talking about in this post.
You're right, maybe they should put cameras in there too. But there's a reason we don't, and yet every worker still explicitly or implicitly knows not to use their work computer for personal tasks, since people can and do get fired for doing so.
This is a ridiculous statement. Everyone I know at my company uses work laptops for personal stuff. It's not in the land of freedom though, so great leaders like yourself can't fire people at will.
TBH at this point I don't believe you are a real person.
I stopped doing any personal stuff on a work laptop a long time ago, 10+ years back. There is absolutely nothing on my work laptop that is not work-related. Working from home helps, though; I always have my laptop next to me. Same with the phone: under no circumstances will I do anything work-related on my personal phone (and yes, I do have a company-provided phone with MDM etc.).
Consider: do they ever go to explicit websites on that computer? No? Because they know that's surveilled, while a personal computer used for the same purpose is not. As I said, people do know the difference; they might do light personal things like googling something unrelated to work, but wouldn't do e.g. banking on a work computer. If they do, well, it'll be their fault if they ever get fired for it.
The fact that you believe people who don't share your opinion on mixing work and personal stuff are somehow not "real" is part of the problem.
The semi-official policy of my employer in Denmark is you can watch porn on a work computer, so long as you're paying for it. (This reduces the risk of malware etc.)
I say semi-official because someone asked the question at a Q&A training thing with IT, and that was the IT manager's response.
> Limited private use of these tools is often permitted, generating a level of expectation by employees for privacy: employers should not routinely read employees' emails or check what they are looking at on the internet.
Most companies just don't have a reason to look through the computer they're letting you use to do your job. Don't give them a reason.
Maximizing shareholder value by observing you doing your job, in pursuit of replacing you with a very small shell script, is a great reason they've just discovered.
Get your own laptop, pay for your own cellphone, use your own internet service, etc. If you create anything of value on their property or with their property or during times they're paying you in any capacity, expect them to use it for profit.
Exactly, no one is stopping one from using their personal devices for any personal purpose, and the fact that somehow people are defending wanting to do personal things on a work laptop is utterly baffling to me. Like another commenter said, I always grew up with the notion, legal and social, that a company laptop is absolutely not your property and companies can and will look through it. Use your own devices for your own tasks.
But the legal notion from where you grew up might not apply worldwide right? People aren't saying you are wrong, they are just saying things are different in other places.
Where I grew up you do have legal right and social expectation not to be under surveillance at work. You even have an expectation of privacy in public spaces - I know this is not the case in other countries, but I accept/know that and it would be senseless to imply this is expected everywhere.
> I mean I have my own laptop and phone, why would I use a work device for that stuff?
Because you're traveling for work, and carrying two separate laptops eats into your limited baggage size/weight. Things are marginally better now that everything uses the same standard charger, but not much.
I make it a point to use the office bathrooms only to excrete food I ate from the work cafeteria. Personal food I ate at home I excrete in my personal bathroom.
That sounds like a truly dystopian take to me, but suppose you're right and nobody should ever use their work computer for anything personal.
Per TFA, this thing is literally taking screenshots of what is on the employee's screen. At work my screen has sometimes had things on it such as performance data on other employees, my own PII from HR systems, PII from customers, password managers, etc. It's also logging keystrokes. How many times do you type passwords a day?
Collecting that kind of information on purpose is truly wild. Imagine the security safeguards you would need just to prevent it from leaking. Wait what, they're explicitly collecting it to train LLMs with it? God help us all.
Your screenshots go to your managers, not just anyone in the company. At Meta there are very strict safeguards preventing employees from e.g. stalking their exes, so I'd assume the same security applies even to PII-filled images.
It might surprise you, but culturally, not all companies are this way. I know some are, but some are very different.
100% of the people at my company use their computer for personal tasks, and this is permissible under our policies. Our company is fully BYOD and owns zero computers, and zero cell phones.
I'm pretty surprised you're getting so much flak for this. This is the least controversial opinion I've seen on HN. I've been working for ~30 years, and at every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work and personal devices were for personal stuff. It wouldn't even occur to me to cross the streams. Carrying a second phone for personal stuff is a trivial burden.
I'm also very surprised, so much so that one of my comments got flagged for it. It seems to be a few dissenters, while others have mentioned concurring; I too have always been under the impression that work hardware is for work only. And then some people are talking about how it's authoritarian or anti-human; like, it's not that deep.
> every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work, personal devices were for personal stuff
There's quite a difference between that and zero privacy, and there's also quite a difference between "IT policy says" or "the law permits" and "this is how things ought to be".
That said, between necessary endpoint security and the potential to get caught up in corporate legal disputes I feel like maintaining a strict separation is advisable. But that doesn't mean I support unnecessarily invasive surveillance or think it's a good thing.
You already do, and your consent is part of your employment. Check your employee handbook, search for terms like "data privacy", and understand how https://www.copyright.gov/circs/circ30.pdf applies in the modern world, especially around AI. TL;DR: companies can do whatever they want with your work, can observe you, and you have no real meaningful recourse.
/facepalm If we're going to debate norms and ethics, sending one liners into cyberspace won't get far. There are better ways. Invest in your conversational skills and listening skills, please. Otherwise you are a moth and HN is a streetlamp.
Strong disagree (especially under US law). Consider what this means for union organizing in the context of this 2022 NLRB memo.
> Under settled Board law, numerous practices employers may engage in using new surveillance and management technologies are already unlawful. In cases involving employer observation of open protected concerted activity and public union activity like picketing or handbilling, the Board has recognized that "pictorial recordkeeping tends to create fear among employees of future reprisals."[10] The Board accordingly balances an employer's justification for surveillance "against the tendency of that conduct to interfere with employees' right to engage in concerted activity."[11] In that context, "the Board has long held that absent proper justification, the photographing of employees engaged in protected concerted activities violates the Act because it has a tendency to intimidate."[12]
> A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
I know you're in India, but in the US, could this not be considered a violation of the "right of publicity"? Your persona and working style are among the core values you bring to market; building a simulacrum of them is not something I expect to be covered by the "your output is the company's IP" clause in an existing contract.
I will give a company the right to try to reproduce my output. But my very likeness and modus operandi? No.
Here's how refusing to let them do whatever they think would maximize shareholder value with your output, or with the data they collect from your company computer, would actually go down: the company would do something you didn't like, you'd try to complain about it, and HR would listen and document everything. In the best possible case, they'd let you personally opt out. More likely, since you're probably very easy to replace in their minds, they'd refer you to the data privacy clauses in the acceptable use policy section of the employee handbook, maybe reference the notice sent out to everyone about how they're doing this, then fire you for performance reasons a few months later. You'd be given an NDA and a very average severance. Then you could choose to hire a lawyer (who would take at least a third of any pre-tax settlement) and fight them, in which case they'd settle for more or less the same as the severance package (and keep in mind both that and any court settlement are taxable income, so you're not getting a windfall in any case), or you'd just sign the NDA and take the severance with no admission of wrongdoing on their part and no legal recourse.
Large companies employ entire orgs of lawyers who specialize in these matters, and it is literally their job to protect the company, not the employees, from lawsuits like this. Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course. Welcome to America, land of the free for corporations, which are legally people, just ones with infinite lives who can't be arrested or imprisoned yet can make legal decisions. See e.g. https://www.theverge.com/policy/886348/meta-glasses-ice-doxx... for how the C-suite thinks about this type of thing.
> Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course.
I am aware of "how the C-Suite thinks about this type of thing", but this is also a good example to surface here of what to redline in future employment contracts. Yes, that will likely shut you out of a lot of places, but the opposite is beyond learned helplessness: it is capitulation to a future that will not end well for the tech worker.
Wait, so the engineers doing novel work are ousted; you fire the engineer who had the skill set to produce the work in the first place? Surely this creates a Stasi-like, neighbor-snitching environment with a chilling effect, where the better you do, the faster you become a target for replacement by engineers incentivized to win points by replacing you. Even being very charitable, say the scenario is that the code the employee is working on is so entrenched in domain knowledge that they've become a huge bus factor: an LLM is going to make that kind of code worse. I'm struggling to imagine the subset of people this replaces whose loss is not a long-term detriment to everyone working there. Those people became "key personnel" for a reason, no?
Well, no, there should be an expectation of privacy; an employer shouldn't just be able to have a palantír for their employees.
>I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
Okay, now this sounds like satire. But I suppose that's the way the world is going.
Not to disagree with you here, because I wholly support this position, but I can see the problem from both angles. The problem, it seems to me, and I'm not sure which came first, is that employees started being reckless at work, probably because employers stopped caring about the treatment of their workers, which ramped up the vicious cycle to where we are now.
I can see an argument for companies not trusting their employees, because most employees harbor borderline corrupt thinking in their workplace and have terrible work ethics. Of course, all of this is brought on by corporate culture, so it's their fault in the first place, but I'm not exactly sure what started where.
If "most" employees are corrupt and have terrible ethics, why is the company hiring them in the first place? I don't think I've ever worked anywhere I thought that a majority of my coworkers fit this description. This sounds pretty much identical to what the parent commentee said: it's a hiring problem. Either the company is bad at hiring people who don't have these traits or they're actively selecting for it.
Just speculating, but the intention wasn't reducing key personnel risk. It was so that your employer could fire them and replace them with an agent running off of their associated skills.md.
At the risk of sounding like an LLM, a laptop is not just "something you get at work", it's literally your work tool. If you were hired at Shit Producers Inc as a defecator, you'd damn bet they would surveil the bathroom stalls there.
It being protected has nothing to do with a presumption of privacy in corporate communications. At a minimum you should be aware that your work related communications are subject to discovery.
It amazes me that people seem to think that once they have clocked in for work they have entered some kind of dystopian dictatorship where all their rights are immediately forfeited. And that people are fundamentally not allowed to push back against this kind of bullshit.
What right is forfeited? The only reasonable assumption to make is that your boss can read everything. Regardless of if you think it is fair or not it is still the safest assumption.
> You’d no longer be able to dissent, or discuss anything non-work related with even the slightest expectation of privacy.
When I joined the workforce a long time ago, I went in with the mindset that: Their property, their equipment, their right to monitor (or even keylog).
I was pleasantly surprised to find that not to be the case, but I've always believed in their right to do so.
Why do people expect to have a right to do non-work related stuff on the job? Every company I've worked for states in the employment contract/policies what you can and cannot do on the job. They never enforce it to the extent that they outline in the policies, but it's usually clear cut.
If you want to rant about the company, do it outside the company! Or at a physical water cooler. When coworkers want to rant to me about the company, they don't use Slack/Teams. They message my personal, non-work number.
While you have the right practical approach, I do believe companies should face harsh regulations preventing this kind of monitoring. It has almost universally negative effects, from enabling union-busting to exploitation to all kinds of discrimination and favoritism.
It's absolutely their right, but it's a dramatic cultural departure from the history of the company.
In the late 2010s/pre-covid it was very common for employees to port their personal cell phone number to their work phone and just not have a personal cell phone. The internal culture at the company was remarkably open for their size.
That all went away by the time I left in 2022, and from what I've heard it has only accelerated into an employee-hostile environment. I'm not shocked at this move.
I won't pretend to be a mind-reader of the executives involved. I was a line engineer, so effectively watching from the sidelines. It was temporally close to Sheryl Sandberg leaving her role as COO, but I have no insights into how much that was a factor, a reaction, or neither.
From my perspective a lot of it was downstream of over-hiring in the post-pandemic frenzy. It's hard to maintain that culture while doing large layoffs, and there's no incentive for them to do so beyond the longer term reality that many of their best employees have left and they're increasingly seen as a place to earn a top paycheck in between layoffs.
I did mean engineers in general (I work with and have great respect for mechanical engineers, for example, and my folks were in construction), but I don't think it's necessarily self-aggrandizing, either. I've worked on chat software and know people who met using my software, got married, and have kids. I've worked on software somewhere in the chain of publishing important ideas, or just sharing a joke.
I don't mean to say that this software was the only means of doing either of these things, of course. But we do make tools that people use regularly when living their lives. Sometimes it's just about being reliable or not getting in the way. The modern equivalent of flint stones and sharing stories around the fire.
It's about taking your work seriously - the quality of what we make matters - and feeling some sense of purpose. And knowing who you're doing it for. I don't think that's being self-important.
1. But they are not paying for the training you bring to the company.
2. As for ranting about the company: it is difficult to organize. That's why unions existed, and that's why unions were allowed to meet during work hours.
> When I joined the workforce a long time ago, I went in with the mindset that: Their property, their equipment, their right to monitor (or even keylog).
Why do you renounce your rights to privacy so easily? You are an employee, not a slave; sometimes I have the feeling that Americans do not know the difference.
> If you want to rant about the company, do it outside the company!
You have a right to organize inside the company, and for that the most efficient easy way are the internal company communications. Communications with the purpose of unionizing should be private and the company accessing them should be punished, and if needed C level should go to prison for their crimes.
How do you organize otherwise? How do you contact your colleagues about grievances about the company?
It is mind blowing to see this capitulation on personal rights. It seems that corporate rights are more important than anything else in the USA. It is a pure dystopia.
There is no clean separation between personal and work. It is also more efficient to blend them (if I expect a baseline level of non-snoopiness on my work computer, I will text my boyfriend from my work laptop... obviously beneficial for the firm).
Either way when it comes to ranting about the company: many workplaces don't have a watercooler where all your team mates congregate (e.g. remote/different offices). Also what, you'll rant about confidential work projects over non-work texts?
>Why do people expect to have a right to do non-work related stuff on the job?
Like use the restroom? Personally, I'm not a slave. I am getting more and more used to the idea of having to push back on those who do exhibit such a mentality. Y'all are beginning to become a threat to the rest of us.
the fact that the employees have voluntarily consented to wearing the diapers means that wearing the diaper is better than any alternative available to them, which proves that forcing employees to wear diapers maximizes total social utility
This comment pairs really well with the song Sixteen Tons - I cued up the song[1] and re-read your comment.
More substantively: I would like the employer/employee transaction to be one of buying/selling labor. To me, training AI on keystrokes nudges the deal towards selling one's "soul", next to other dystopian tropes like brain implants and work toilets that analyze excretions.
You are correct that employers own the laptops and can install anything they want, which is why I never do anything other than work there - the farthest I will go is participate in employer-hosted shitpost groups/channels, which are not anonymous, and they are free to train their models on that.
You come with a belief, then you wonder why other people don't have the belief. The belief was exogenous for you. Why do you believe the belief is not exogenous for others?
I guess you never talk to coworkers about your weekend. That's on the job. I see you mention the water cooler; how dare you talk there?
Companies pay their employees to build things. They do not pay their employees for their likeness or the inner workings of their brains. Meta is trying to get the latter through keystroke tracking. It is an overreach in that context.
If they just want to monitor your computer for the purposes of productivity tracking, that is in their right, imo - just a shitty thing to do.
I don’t care if a company monitors which websites I go to on a work computer, what applications I run or what I say on Slack.
On the other hand I would be looking for another job if they had keyloggers or were taking screenshots even if they said anything about me shopping on Amazon or randomly browsing Hacker News or any website that wasn’t gaming or Netflix during work hours.
Heck, I used to travel a lot more for business, and I used my work laptop for Netflix and other streaming services in the hotel.
As long as I’m meeting performance standards it shouldn’t matter.
I really don't understand how this is legal. I guess Facebook maybe doesn't actually have any compliance requirements in the USA, but time series screenshots of any SRE's screen are going to contain data that should not be stored by some data vacuum. I know Meta has a reputation for shitty data handling practices and US regulations are light compared to Europe, but how are they planning on securing passwords, encryption keys, PII, etc. ? Can employees turn this off at their discretion? What happens if someone forgets to turn it off before they cat the companywide ssh root private key? Even setting aside legality, someone with access to this training data would have what sounds like an unacceptably broad level of access to company systems unless Facebook wants to get hacked.
This is legal for most businesses under US law, especially on company devices. And unfortunately not unheard of. Compliance with this data is typically handled in the same way you'd handle any data access situation -- by restricting access to the screencaps to a specific group of people.
Not that I support it -- but typically companies don't do this in spite of security concerns, they do it to address security concerns. But of course, what meta is doing sounds like a different situation. It sounds like they want to make a model that replaces part of their workforce.
I understand the security spyware, though I think it's somewhat questionable there. But this sounds like deliberately putting all of your most sensitive data in a blender and then inevitably letting anyone get a taste of the smoothie.
Just like you'd secure data on a normal internal production system, I'd presume one wouldn't simply let anyone get a taste of the smoothie. But who knows -- move fast and break things, I guess.
This data is going to get leaked in a breach. It will be used against you in a court of law. It will be used for training and (regardless of what anyone says) will be used to fire you once the AI can do your job.
And when all of the above happens Meta will be absolved of any responsibility.
I don't understand how it's legal either. I guess we need laws against it yesterday.
It doesn't have to get leaked. They can sell it and use it as another means to identify Internet users. Meta is pretty infamous for identifying, tracking, and understanding user behavior. We are kind of past the point where these companies care at all. If you think the push to add age verification to operating systems is an unrelated giggle I envy you. Something something Cambridge analytica.
They could perfect it in house and then roll it out as a product. The way people type and use a mouse are pretty identifying especially when coupled with other things.
I do agree screenshots themselves are less useful for that.
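The claim that typing and mouse rhythm is identifying is usually called keystroke dynamics. A minimal sketch of the idea, with an assumed event format (key, press-time, release-time in milliseconds) and made-up numbers purely for illustration:

```python
# Hypothetical sketch of keystroke-dynamics fingerprinting: hold times
# (dwell) and gaps between consecutive keys (flight) form a per-person
# timing profile. The event format here is an assumption, not a real API.
from statistics import mean

def timing_profile(events):
    """events: list of (key, down_ms, up_ms) tuples in typing order."""
    dwell = {}   # how long each key is held down
    flight = {}  # gap between releasing one key and pressing the next
    for i, (key, down, up) in enumerate(events):
        dwell.setdefault(key, []).append(up - down)
        if i > 0:
            prev_key, _, prev_up = events[i - 1]
            flight.setdefault((prev_key, key), []).append(down - prev_up)
    return ({k: mean(v) for k, v in dwell.items()},
            {k: mean(v) for k, v in flight.items()})

def profile_distance(a, b):
    """Mean absolute difference over the features both profiles share;
    a small distance suggests the same typist."""
    dwell_a, flight_a = a
    dwell_b, flight_b = b
    diffs = [abs(dwell_a[k] - dwell_b[k]) for k in dwell_a.keys() & dwell_b.keys()]
    diffs += [abs(flight_a[k] - flight_b[k]) for k in flight_a.keys() & flight_b.keys()]
    return mean(diffs) if diffs else float("inf")
```

With enough text, these per-key and per-digraph averages are stable enough that, combined with other signals, they can re-identify a user across sessions, which is the concern being raised here.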
All psychological experiments that loosely relate to the web became legal by default when A/B tests were normalized after Google started them. This is not something that can be covered by blanket waivers. It's something that requires participation under free will, independent review boards, and so on, for every single one of those little tests.
The cat is out of the bag, but that doesn't mean it's a non-issue.
Yeah, this is crazy, remember when engineers were actually engineers and that meant something? Imagine asking to install spyware on your lawyers' firms' company laptops because you didn't trust them not to make some deal with the judge. Or demanding 24 hour monitoring on everything a doctor does because you need to review the footage at any time.
EDIT: While we are here, let's do this for politicians as well :), publicly available, auditable 24-hour surveillance.
Of the examples you listed, politicians are the only ones you directly fund and supposedly work for you. Your lawyers and doctors aren’t your employees, and they also don’t work on your property (though lawyers might handle your documents). The biggest thing this points to is that the mask is almost entirely off between employee-employer relationships in the US, and it looks like by ensuring everyone depended on employment for insurance before turning this corner, there’s not much resistance left.
This is why a worker's rights movement is important. You shouldn't have to rely on your employer's goodwill. Reasonable privacy rights on work equipment should be guaranteed by law, and any large company should have a Euro-style worker's council.
The legal environment is the only way to baseline behavior. In countries with strong worker's rights, you generally don't have to fight much to make use of them; it's the norm for management, too. Likewise, the US-style norm of having no expectations toward your employer and the "stay in your lane" type takes rampant in the thread are also symptoms of the environment and its norms.
.... I'm not the person you're asking but I can give curious anecdata on a home purchase....
When I bought my home, I had a purchase agreement that said 'I will pay up to $1,500 cash if the property assesses for less than X' (X being the amount I told the realtor I was willing to pay).
And the property happened to assess EXACTLY for X.
Collusion in markets is nothing new, and even when we regulate people find ways around it.
It is very telling especially in light of the Palantir manifesto, that all of this technology is being applied against individuals instead of towards ensuring business compliance.
Hmmm. Property purchase agreements are rather different in your neck of the woods than mine!
Here (UK) we do have a bit of variety, thanks to devolution and bloody mindedness. I'm talking about English here (possibly Welsh too), rather than British (England + Wales + Scotland) or even UK (England + Wales + Northern Ireland). Wales is actually a bit more complicated than that but let's keep it simple.
Here (England), you advertise a house price and invite buyers. You generally engage one or more estate agents (realtors). I think it is called an "invitation to treat" in legal terms.
... negotiations ...
Once a price is "agreed", contracts are drawn up by both sides and "exchanged". When the exchanged contracts are both accepted, then the contract is binding on both sides. Basically: the Buyer will Buy and the Seller will Sell etc.
I think the US is fairly similar in that you do have to agree to something before it becomes a binding agreement.
Palantir builds these systems for the US government which is (hopefully) something you can hold accountable / can reasonably trust.
Meta builds these systems for itself to make digital cocaine and sell personal data to profit off everyone (including and moreso primarily the elderly and children). You can't hold them accountable, actually pretty much nobody can hold Zuckerberg accountable.
When Palantir helps USG spy on the planet the primary purpose is defeat enemies + protect assets. When Meta builds these systems the primary purpose is digital cocaine.
> When Palantir helps USG spy on the planet the primary purpose is defeat enemies + protect assets.
I think it takes about the same amount of suspended disbelief to say that, as it takes a Facebook employee to believe the primary purpose of targeted ads is to connect customers and businesses.
Does Palantir collect data or just analyze aggregated purchased data? I'm not familiar with the data collecting SDKs available as I don't whore out myself/my sites like that, so maybe there is a pipe directly from them????
Either way, I'd definitely hold those directly responsible for collecting and selling the data far more culpable than those who just make use of a product. It's like the war on drugs, where the makers say they will keep making as long as there are people wanting to buy.
> Does Palantir collect data or just analyze aggregated purchased data?
Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
Any time you see an article or comment saying something along the lines of "Palantir is stealing your data", consider if it makes sense when you replace Palantir with MySQL, if it doesn't then it's generally safe to assume that article is garbage.
There are plenty of legitimate reasons to have grievances with Palantir, but they're completely drowned out by nonsense.
> Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
This is rather naive. Palantir does politics, creating and funding a super PAC to discredit a former employee who happens to support the RAISE Act.
Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money is going toward attack ads against Alex Bores – even though Bores himself used to work for Palantir.
Those are legitimate grievances as mentioned, what they are not is Palantir themselves collecting massive amounts of data, which is often what they're portrayed as doing and what the GP asked about.
Fun fact: they all made over $1-2 million, some even a lot more, via Meta stock. So after that they stopped doing that kind of thing. Ethics can be bought; it just has a different price for everyone.
Or they're stuck with the HCOL that is the Bay Area. There aren't really any purely "ethical" companies in the Bay Area that pay enough for you to live there.
You’d be surprised how few people actually buy into the corporate culture at these companies. It’s just to get paid because everyone needs a job to pay their expenses.
You want to solve this then lower the cost of housing.
Medical device companies are run very differently from most technology development companies. They have to be because the stakes are high, evaluation criteria are different, and medical related marketing and sales have separate industry managed channels and venues.
Yeah, it is complete bullshit. Even if they don't do it straight away, once they have the spyware in place, it's only a matter before they do. It is Meta after all.
So, back in 2021, I supervised a student project where we aimed to simulate human interaction with the browser. Obviously, we needed data on human interaction. After discussion, we ruled out collecting data from a group because:
- the project was time constrained, so hardly any time, and
- there were serious ethical questions which could never be addressed well within the allotted time for this project
So we ended up discarding the idea of collecting data from a representative group even before we got to the point of asking "how do you handle that ethically". We ended up collecting data from one subject: the student in question, indeed. He handled the data, from which he derived heuristics that simulated it. The collected data therefore never left the student's hands.
<sarcasm>Silly us, we should have just not bothered and collected it from anyone and anywhere. Apparently.</sarcasm>
In all seriousness, this callous and complete disregard for ethical questions offends me so very much.
It will be interesting to see how the people who maintain (in my opinion) one of the worst offending organizations out there for invading your privacy - and generally treating you in a manner that lacks human decency - respond to having their privacy invaded, and being treated without basic decency.
I realize you can argue whatever is done at work should have no expectation of privacy, and I get that, but as an employer myself I've always felt that schemes like keyboard and mouse tracking are going a chasm too far. Your employees are human beings not robots. In the older context of corporate productivity tracking there are far better metrics available - starting with, I don't know, maybe talking to your employee and asking them how things are going.
I wouldn't have a problem if it were opt-in, but if this were foisted upon me I would surely quit.
I like to imagine they’ll mostly capture meta employees using AIs to do work.
Then they’ll deploy models trained on this, and begin capturing employees using AIs that are good at using AIs to do work.
Repeat a few times and they’ll start capturing the keystrokes of people mashing their heads into keyboards in despair and exclaiming, “Why can’t these models do anything anymore!!”
I'll speculate that they are going to use this to let people go without doing mass layoffs and having to pay severance. Training AI is just an excuse.
Many many moons ago I refused to implement a calendar event scraping system at Meta where it would look at all of your meetings on the calendar and do "analysis". IDK what ever happened to that task, I assume it died a death of no one else being willing to do it. This was probably 2011 or so, I can only imagine it has gotten so much worse.
It's pretty easy to scrape your own calendar events in Meta. I'm not sure about others' as I'm not a manager, but I wouldn't be surprised if it were visible as long as someone is in your report chain.
White collar firms with a reputation for paying well don’t cheap out on severance. It’s a cheap way to get employees to sign some stuff reducing the risk of lawsuits, plus their unemployment insurance premiums stay lower.
It’s only once the business is having a cash crunch or will no longer need to hire competitive candidates that they start letting people go without severance.
While it would be a hilarious failure mode to encounter, this is actually a good thing!
These models already have the skills that humans were using them for, so either by training the models to use subagents or simply inlining the work done by the AI, you have a much easier time training the model to perform tasks from a human-distribution. The humans have done the work of making the human-distribution look more like an AI distribution.
Not when all of the marketing of LLMs is touting their abilities to do the exact thing and that is what investors are being presented.
If it is as you say, then eventually the house of cards will crumble. Then we can finally go back to work and quit being inundated with needing to use AI for everything.
For the company that is one of the major players in tracking similar data across the web, I don't see much wrong with this.
If they continue to share their work through open releases despite the leadership change, I hope we get to benefit from it.
I'm not quite optimistic about the result, though, as I wonder whether, on aggregate, we all consistently interact with computers in the most efficient way possible. Maybe it's useful for beating captchas or scraper detection through mimicry, perhaps.
I hope this is widely hacked. If these employees are any good, someone will whip up a countermeasure that feeds absurdly wild and nonsensical data into Meta's fetid, gaping maw.
We’ve been moving towards a more and more tyrannical company controlled society for a long time and now they’re straight up doing hacking tactics to train machines to take our jobs. Doesn’t get much more bleak than that.
Every day I grow more and more glad that I turned down a Meta offer. It was probably a hire-to-fire offer anyway, not based on any engineering prowess on my part. Still, I couldn't be more relieved I dodged that bullet.
I'm so happy that EU and UK have laws against this kind of thing and so I will still be able to work somewhere in the future(TBD what future means, though).
Yesterday I was doomscrolling computer vision related stuff on LinkedIn. I hate it, but I'm often looking for freelancing opportunities in CV. A video appeared in my feed of some South Asian laborers sewing in a garment factory. All of them had cameras mounted on their heads. Otherwise, they looked exactly like you'd imagine beleaguered sweatshop workers would look: exhausted, dull expressions, looking into the camera as whoever was filming the video walked by.
The presentation of the video and all the comments were about how awesome and cool this egocentric video understanding research is, and how it's going to make human labor totally obsolete. I couldn't get over how grim the video was. Here are some people in one of the least desirable positions in the world, and that's not enough. Now they must labor without a shred of dignity, knowing they're training their own replacements, and likely there isn't a thing they can do about it.
I’ve struggled to find enough freelance work to stay busy recently, but more than that I’m starting to feel a moral crisis. It’s getting harder and harder for me to feel like what we’re collectively doing isn’t absolutely fucked.
It seems like every tech company is moving towards the sweatshop model pioneered by CrossOver/Trilogy, treating engineers as human CPUs at best, monitored 24/7.
For context, when the article says "a list of work-related apps and websites," this includes Google properties like gmail, docs, etc, and social media websites like Facebook and Instagram, with no provision for excluding personal accounts.
No one intelligent should be logging into their personal accounts on their work devices in any case - it's always been the case (at least in the US) that companies can do whatever invasive scanning they want on devices they own.
Meta does require you to have a Facebook account. The expectation is that it is your personal fb that you use regularly. However, it doesn’t need to be. You can create a new fb account with a new gmail account and that’s fine. That’s what I did and some others do as well.
That said, 90%+ of employees end up using their real personal account because the language they use makes it seem like you couldn’t do what I described.
Idk, do you think it's sensitive for the employer to train an AI with it and then put that AI on Instagram for everyone to use and ask for employee SSNs?
Yes, but, so what? It isn't a license to train AI on employee personal information.
That said -- social media websites were later removed from the "work-related" list. So there was at least some recognition it was overreach and did not match the stated justification.
Yeah automatically assume everything on your work computer is available for your employer to see. And everything you do on your own device when connected to their WiFi or VPN.
If anyone can fingerprint your personal device while literally inside the building, it is Facebook.
You don’t even need to do anything fancy in software. It could just be correlating mobile device presence with work laptop activity. You can triangulate physical location with a handful of Bluetooth or WiFi beacons.
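The correlation idea above can be sketched very simply: bucket the timestamps at which an unknown phone is seen by the building's Wi-Fi against the timestamps of a given laptop's activity, and score the overlap. Everything here (the minute-granularity bucketing, the Jaccard score) is an illustrative assumption, not any real system:

```python
# Hypothetical sketch: how often does an unknown phone's presence
# coincide with a specific employee's laptop activity? A consistently
# high overlap links the phone to the employee without any "fancy"
# software at all. Timestamps are seconds since some epoch.
def to_minutes(timestamps):
    """Collapse raw timestamps into the set of minute buckets they fall in."""
    return {int(t // 60) for t in timestamps}

def presence_overlap(phone_ts, laptop_ts):
    """Jaccard similarity of the minute buckets each device was seen in."""
    a, b = to_minutes(phone_ts), to_minutes(laptop_ts)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

Run over weeks of logs, the phone whose `presence_overlap` with your laptop is consistently near 1.0 is almost certainly yours, which is the fingerprinting concern being described.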
Unless you're in a jurisdiction that has anti-surveillance workplace laws. If you don't, you should probably think about that before Mark Zuckerberg gets the idea to monitor your body temperature from below the waistline.
Dog fighting is a type of blood sport that turns game and fighting dogs against each other in a physical fight, often to the death, for the purposes of gambling or entertainment to the spectators.[1] In rural areas, fights are often staged in barns or outdoor pits; in urban areas, fights are often staged in garages, basements, warehouses, alleyways, abandoned buildings, neighborhood playgrounds, or in the streets.[2][3] Dog fights usually last until one dog is declared a winner, which occurs when one dog fails to scratch, dies, or jumps out of the pit.[4] Sometimes dog fights end without declaring a winner; for instance, the dog's owner may call off the fight.[4]
For those saying that this is fine because company computers are company property...
This is like going to work in a drug-lab where everyone is required to strip naked to ensure no "product" can be smuggled out. It's a zero trust environment at first blush, with the added terror of it being used to replace you with AI.
People working naked in a drug lab have more job security than meta employees and an equivalent level of respect and trust from their employer. However, they can't unionize because they have no legal protections. Their employer could literally point a gun at them if they complained. That isn't the case for Meta employees. Just sayin'.
Growing up we learned about _Slaughterhouse 5_ and _Cat's Cradle_ by Kurt Vonnegut. But there's not enough discussion or awareness of _Player Piano_. Incredibly prescient. These kinds of dystopic headlines are exactly the kind of thing you'd see in the book.
Player Piano is a 1952 sci-fi novel by Vonnegut which explores the social and economic impact of automation replacing labor. If I recall correctly (I read this 15+ years ago), it is told from the perspective of one of the last people with an actually useful job: a person whose job it is to fix the machines that automated away everyone else's jobs.
Honestly, I doubt this data is as useful as they think.
Half my workday is me browsing random tabs while an AI agent does the actual work. They're going to train a model on alt-tabbing and scrolling HN/Twitter/Reddit.
They have nothing else to do. Someone needs to be able to justify their position by creating stupid changes like this to create a line item on their LinkedIn.
Meanwhile, nobody seems focused on capturing CEO’s data for AI training.
It is going to be funny in a few decades when Zuck transfers his shares, voting rights, and estate to the AI bot and makes himself functionally immortal. Or at least a sort of commissioned Renaissance-painting version of himself, probably.
Imagine in 300 years we are still ruled by Zuck, Ellison, Bezos, Musk, Thiel, et al., just in AI model form, empowered by estates worth more than entire nations and legal protections designed to outlast the heat death of the universe. Assuming there is still a "we" living on Earth. Charitable assumption, I guess.
You're just being fatuo- NO THIS ACTUALLY IS GOING TO HAPPEN. IT IS GOING TO HAPPEN. I have no doubt they're shameless enough to literally do this if they could get away with it
Do most people who work in AI companies realize that if this buildup of reasoning models succeeds at what every tech CEO is aiming for, all of them will be out of a job?
“ The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”
As far as I understand, there is plenty of research in disciplines ranging from social studies through psychology to game theory and economics, as well as informal simulations, that strongly suggests human interactions are positive for participants pretty much if and only if those interactions are repeated, which realistically only occurs if participants are circumstantially close already: same neighborhood, same job, family, friends, same school, etc.
One-off interactions are almost invariably toxic with at least one of the participants getting cheated, bullied, or otherwise harmed.
So the whole premise of connecting people unconditionally, including anonymously, automatically, and from opposite sides of the world is inherently broken and doomed to do a lot of damage.
So even Meta's self-proclaimed mission is damaging to society if followed. What, then, could be expected from what they actually do, given the combination of basic facts: the primary purpose of any business is to make money; Meta's notoriously evident disregard for ethics; their position as an advertising business and entertainment provider; being deep into enshittification and market saturation; and of course actual honest mistakes to boot.
“ That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.”
This is how Anthropic captured the coding-agent market so fast. You need training data, and users are giving it to you.
Being a terminal application, all interaction is trainable signal (unlike, say, Cursor, which is an IDE and lets users freely explore, edit files, and move the mouse; the model sees none of it, so there's nothing to train on).
So Meta is doing the obvious: we want to train a computer-use model, we need training data. Better to capture it from employees than to buy low-quality data.
Ironically, I’d be surprised if this wasn’t already the case before? I recall vividly employment contracts with meta in 201X with a clear mention that employees were giving up any sense of privacy while using meta provided devices or entering meta’s premises…
There is a danger in this. Small companies with delusional people in them will see "how the big guys do it" and try to apply this kind of thing in their own little fart of a business, making our dev/engineer lives miserable.
Wasn't it a few months ago that some engineer leaked that xAI was building 'Human Emulators'? This is either Meta's attempt at the same or just a blatant lie to make sure their engineers aren't slacking off. I've heard the workload has more than doubled for those who weren't laid off, which is the only reason I think it might not be an employee monitoring system: I don't think anyone there can afford to not work hard.
After all the layoffs, labeling people as underperformers while laying them off, etc., can they stoop any lower? Why TF would anyone in their right mind want to join this company?
I wonder if there's a market for a little usb fob that does nothing but meander the mouse cursor about the screen in a path that, upon proper rendering, would appear to be a ...
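Such fobs do exist commercially as "mouse jigglers". A toy sketch (pure Python; all names and parameters here are mine, not any real device's firmware) of the kind of smoothed random walk one might emit so the cursor meanders rather than teleports:

```python
import math
import random

def jiggle_path(steps=200, width=1920, height=1080, seed=42):
    """Generate a human-ish meandering cursor path as a smoothed
    random walk, clamped to the screen bounds."""
    rng = random.Random(seed)
    x, y = width / 2, height / 2
    heading = rng.uniform(0, 2 * math.pi)
    path = [(round(x), round(y))]
    for _ in range(steps):
        heading += rng.gauss(0, 0.4)       # gentle turns, not jumps
        speed = max(1.0, rng.gauss(8, 3))  # pixels per tick
        x = min(max(x + speed * math.cos(heading), 0), width - 1)
        y = min(max(y + speed * math.sin(heading), 0), height - 1)
        path.append((round(x), round(y)))
    return path

path = jiggle_path()
```

An actual fob would replay coordinates like these over USB HID; whether that fools a classifier trained on real human traces is exactly the open question.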
Also, why are the investors not suing the legs off of Zuck for the whole metaverse debacle? It is a scam and pure fraud. Also a dumb name, sue for that too. Should have just renamed it meeme.
Dystopia aside, without a lot more context I don't quite get how the captured data will be particularly useful for training models for, say, software engineering. If someone can shed light, thanks!
> to improve the company's models in areas where they still struggle, like choosing from dropdown menus and using keyboard shortcuts
Seems like a strange approach in general. I'd have assumed you'd just have it use accessibility features to get at things, if there is no other interface.
Knowing how to make an accessible website is so rare that companies pay me money to do it for them. I wish it was good enough for people, much less companies, to rely on.
Even if no attempt at proper accessibility was made, it's still generally far easier to attempt to find an HTML (or other form of UI) element, than to attempt to scroll to the right spot and use visual inspection to find things.
Now that the early 10s dev worship era is officially over, all pretensions of "making the world a better place" and being nice have been dropped and devs shall remember what it feels like to be a replaceable cog that can be swapped the way we used to do with phone wallpapers.
For agents we have ACP [0]; surely their time would be better spent building this sort of abstraction for computer use than simply teaching an AI to use a mouse?
The computer UI is the way it is because that's optimal for humans. If your plan is to replace humans, why not just replace the whole stack, OS and all, with something these models already know how to use?
Culture is often set top down. Look at the current US administration for a public example. People at the top will choose people who agree with them or who are sycophants. Top execs also chose this job and zuck because they have no moral issues with what the company does... Often if you closely associate with someone creepy or immoral it's because you care more about money and power.
That's really only limited to political appointees as far as the US government is concerned. Career civil servants hang around for a long time while their bosses change every 4 to 8 years.
In my experience, a LOT of company culture trickles down from the top. Some of this is by design e.g. CEO consciously and publicly rewards certain traits/behaviors. Some of this is accidental in the sense that CEOs, like many humans, have both stated and expressed preferences.
There is also this effect:
- CEO says "the lights are a bit dim in here"
- that turns into "We need to change all of the lightbulbs in here immediately!"
(this is especially true in firms where the CEO cares a lot about being proactive).
2. In Michael Crichton's book Disclosure there is a great line: "Why did you dress casually instead of wearing a suit? Is it b/c you wanted to do that or b/c the CEO did it and you wanted to show you were part of the team??"
There are lots of leaked emails showing Zuck is creepy. Recent one I saw where he is directly in the conversations about targeting teens/children. There's a twitter account [1] that posts emails from tech execs that have come out in legal proceedings - it shows the people at the top are very much informed and driving what happens in their companies.
Thank that 'Super'AILab supervisor from Scale AI, Alexandr Wang; this guy is really hilarious. He directly turned Meta into a Chinese company (just like how Scale AI exploits its employees), and so far I haven't seen him deliver anything that matches his annual salary. Considering that what he does is AI infrastructure, built on ridiculously cheap labor for training-data annotation, I don't think he's suited to this kind of big-picture AI research.
I suspect most employees know better, but Meta pays very well and they just want to maximize their salary and their tenure at the company. Also, it seems Zuckerberg has become creepier lately, very much in phase with the current Zeitgeist.
Please tell me more. I'm looking at Viva and I really don't get why anybody would contribute to the "internal LinkedIn" and other features. Where did it come from? Where does it go?
Seems like Skan AI's solution. They have a few Fortune 500 companies as clients doing exactly the same thing as Meta - capturing keyboard and mouse clicks to ultimately do next level process automation.
If you then think of crazy companies such as Palantir, something really has to be done about those entities. As a first step I suggest disbanding those companies, for many reasons, including wrong ethics.
And here I am rejecting projects because I refuse to install on my computer closed sourced Chinese VPN my client is requiring, though I told them I could just use built-in Windows VPN or open source Hiddify.
Btw, do they at least pay them extra for this spying, or is it supposed to be free? I mean, if they paid at least 30-50% on top of the salary, maybe I wouldn't mind doing it on a dedicated Meta computer.
I can't imagine a more useless dataset to collect, proving that Meta might have reached the peak of the graph of (reach/grasp)/time and the numerator is about to plummet spectacularly.
Eventually every word spoken as well, which is already the case for most meetings, but not yet for individual interactions. Every bit of information at companies will be accessible to AI. This will allow automation all the way up to the C suite.
They probably aren’t seeing the promised productivity improvements of AI in terms of shipping production code, as opposed to “super demos” that aren’t robust. So they want to see if the workers are really putting in the time, or if the models struggle past a level of complexity that stalls or reverses early gains.
From company metrics I have found that developers who make a lot of mouse movements correlate with weaker performance reviews. Something to think about.
When you think about it, what actually useful data are you getting from this exercise? It's like strapping a camera on a manual laborer so you can see what he sees, but you don't get data about touch and grip, and you won't get data about why he makes specific moves.
‘Meta spokesperson Andy Stone said the data collected would not be used for performance assessments or any other purpose’
Horseshit.
1. Employees are being asked to train AI to replace them.
2. Performance assessments will 100% be impacted. No question.
Thinking back on the OTT interview experience that Facebook helped pioneer, imagine making it through that, getting paid a massive sum of money BUT barely getting by on it because of the location, then they drop this crap on you?
> Meta (META.O), opens new tab is installing new tracking software on U.S.-based employees’ computers to capture mouse movements, clicks and keystrokes for use in training its artificial-intelligence models, part of a broad initiative to build AI agents that can perform work tasks autonomously, the company told staffers in internal memos seen by Reuters.
> The tool will run on a list of work-related apps and websites and will also take occasional snapshots of the content on employees’ screens for context, according to one memo, posted by a staff AI research scientist on Tuesday in a dedicated internal channel for the company's model-building Meta SuperIntelligence Labs team.
US mouse movements are obviously very vigilant, some people say they're the strongest mouse movements ever seen.
Since this is a serious website: I'd be genuinely curious how mouse velocity and trajectories differ between cultural and environmental settings
(apart from hardware, that's boring and should be normalized).
There was a time when studies made headlines that were exactly about the relationship between mouse movement, typing etc, and psychiatric disorders as well as physical health.
Obviously, both are related.
If you ask me, ad tech would probably be able to tell your religious denomination using this data, once there's enough of it...
Genuine question: would right to left language based interfaces have different type of movements and thus training data than left to right language ones?
I'm sure detailed mouse and keystroke data can actually leak health data about subjects. What are the odds one can detect early Parkinson's disease from mouse-wiggle data? If such data leaks health status, I think its capture should be forbidden under current rules.
I’m just saying that they’ve been collecting this info for years. Keyloggers, etc. are on all the computers you’re given. Employees didn’t have any expectation of privacy - just a hope. Now, it’s clear it’s completely gone and so the hope and goodwill is gone.
It's an employee survey so it's not resistant to claims that the number is higher than people know. But I think saying "on all the computers you're given" is an exaggeration at best.
I did think it was interesting that "One in three [employees] have had activity from their employer’s online surveillance used in their performance reviews."
Sounds like if you're being surveilled by your employer there is a good chance you know about it.
I've never experienced anything like that, so it's sort of a window into another world from my perspective.
But this is a good thing. Let me explain.
Imagine a society where an individual’s rights are prioritised and where society is dedicated to the best interests of each citizen (not desires or wants but reasonable considered best interests)
Now imagine a society where your individual daily actions are recorded, reviewed and helpfully advised upon.
Millions of people making millions of actions each day, all recorded, compared, and sifted for positive feedback and overall improvement.
Just how far ahead would such a society pull compared to one that stays at today’s level. Compared to one that used totalitarian methods enabled by such surveillance?
The difference between Soviet and Western Europe was not the tech, it was the trust.
If we can build a society with that kind of trust, then this tech will turbocharge us.
Yes they could have accessed logs before but there’s a difference between directed checking after incidents and active surveillance at scale.
Companies have shown us that IP going to AI providers is acceptable. Once you cross that line your thought workers are assets not people.
[citation needed]
https://www.lawinsider.com/clause/right-to-use-employees-nam...
Though you have to label it as personal (like creating a « Personal » folder or label). Your employer can still access it in case of suspicion, but they must do it in your physical presence, accompanied by a witness, generally a representative of the employees.
So you theoretically don’t have full privacy on this computer but you can’t be sanctioned for this usage.
Most companies I've worked at have a policy of some "reasonable personal use" being permitted. The concern is usually focused on the other way around: Companies do not want their IP on your personal machines.
They can certainly look at whatever is on their own machines, however, regardless if it is your personal data or not.
One large caveat: If you do any work on your company's equipment, they may possibly own it, no matter how relevant it is to the company. It's one of the legal tests used to judge the ownership of your work.
That was a few years back, dunno if that was fixed.
Enjoy your red tape frogs. "Live to work" anglo protestant work ethic followers will complete the necessary economic destruction of rude "work to live" cheese eating surrender monkeys.
This is our payback for Charles de Gaulle, Foucault, and Jacques Lacan (it's hard to rank these three based on damage done to western society)
It's not like people have an unlimited number of places to work, even if they have Meta on their resume. Many of my colleagues (and myself included) had struggled in the job market in the past before landing at Meta. If it's work for Meta, or suffer more tumult in the hiring market; it's easy to understand why many might decide to take the offer even with the moral implications. I used to bring up politics in the office with coworkers and many people are simply unaware of the consequences of the company's products. There are a few different categories that these people fall into, but the main ones I saw in the office:
1) Chinese H1B holders who are happy to be working in the US at all, and generally apolitical (or view anything as better than the status quo of where they come from)
2) Just normal people who are interested in their own lives and have never been trained to think about the world in a big picture way (some overlap between 1&2 exist of course)
It's very western of us to always be tracking the consequentiality of our actions even when we're just a cog in a wheel at BigCo. I think that it's the right thing to do, but this sort of reasoning is largely absent in eastern cultures, or even for some in the west, even among those who are well educated. It's kind of hard to blame individuals when they either are rightfully consumed by worrying about their own welfare or are for whatever reason not as hyperaware or woke as we can be in the west. Growing up I liked imposing my political philosophies onto everyone; maturity is understanding that even objectively righteous values are only useful for the right types of minds.
On the contrary, once someone has truly been made aware of the ramifications of their actions; it's more difficult for me to extend my sympathy to them. I consider mark and priscilla to be fully implicated based on their exposure to the harm that they're actively, willingly, knowingly causing. Other employees may never get that memo, though, people obviously avoid political talk in the workplace.
You can't have solidarity about a bad thing with the people who are doing the bad thing! They have to stop doing the bad thing first! That's how solidarity works!
Don't invoke some mystery where simple greed, with little worry about others, is a perfectly good explanation; some could use the word 'selfish' too. US society at large seems to me structured that way: there is no social net for the unlucky, healthcare varies a lot based on disposable cash and job, and good education is only for the rich.
Yes. Which includes quitting, en masse, from any company that does this.
Meta ought to find it impossible to employ anyone with a policy like this.
Apparently, money wins over principles for 99% of us. How is this different and how are we better than Meta employees?
And employers know this, so they are enacting all kinds of draconian policies because they know employees know that they can't just leave the job and also keep their families fed.
It was metaapes, iirc.
https://www.reuters.com/investigations/meta-is-earning-fortu...
This is just another factor they’ll have to grapple with in their analysis.
I’m sure some of them will find it a bridge too far but not enough to really matter. The work will continue as will the expansion of Meta and the negative externalities that it produces.
When I worked at a startup that had some internal conflict between the software engineers and management, someone made a Signal group to chat about the issues among the software engineers privately and everyone joined that group with their own Signal accounts, without any kind of issue.
So they can monitor your email and slack server-side, but not your client-side stuff that doesn't touch their servers. However if you use a VPN then they can also monitor your DNS requests and every website you visit. Any kind of client-side telemetry is limited to a few things, however those things can involve what applications you have installed (like spotify) for security reasons or USB sticks plugged in.
It had no impact on recruiters trying to win me back since then.
His eyes glaze over and he just reads that in the corner of his vision instead of listening to you, and you get snubbed forevermore.
How can they legally mandate an exit interview when you resigned? Is it part of the employment contract? What would have happened if you showed them the finger and not participated?
https://www.businessinsider.com/how-block-lists-affect-your-...
https://medium.com/@ossiana.tepfenhart/the-no-hire-list-is-r...
https://www.theguardian.com/technology/2018/mar/16/silicon-v...
I'd be more concerned about industry-wide blacklisting.
But I also had a different situation where we decided to hire someone, only to find out that we couldn't, because he had been let go from another company owned by our parent company, and his severance agreement said he couldn't work for the same group of companies for 12 months. I think he was genuinely unaware that we were part of the same group (it was a huge corporation) and it just never came up in any conversation until HR tried to put together paperwork for him.
In the USA this is mostly theoretical since HR could immediately fire the employee due to at-will employment.
But in Canada, it's a much bigger issue due to labour protections.
e.g. Many managers at American multinationals gave assurances over email to employees about work-from-home arrangements. Then the company does a huge RTO push.
When the employee refuses, HR discovers they can't fire the employee without a hefty buyout.
Best not to give assurances if you're managing a multinational team.
Is that an American thing? I've been a manager for years and never heard of that happening. I didn't even know how much the people I managed were paid.
https://mathewsdinsdale.com/employers-advisor-march-2025/#:~...
One must be a fool to do any of this on any company-owned hardware. Facebook or no Facebook.
Most cultures around the world are acutely aware that the actions and opinions of their leaders are not a reflection of behaviors and opinions of regular citizen.
If keystrokes are captured, isn't this a double-edged sword where maybe the company might be inadvertently collecting evidence against itself if there's an investigation and the investigators want to collect keystrokes?
I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
I wonder if this is where they are going.
Feel like I'm reading a Gibson novel here.
> I work at a tech firm in India
First I wondered how you could have such a low expectation of privacy; then you answered my question. What you need in India is more unionization and a fight against corruption. It is becoming worse here in Europe, but in India you do not have the protections that we have. Without that you will have no rights.
You will have to fight to get rights at your job, in the same way that Europeans are going to have to fight to keep theirs.
In many EU countries, even if privacy protection is strong on paper, the settlement will be so low compared to the US that you won't be able to afford to take any vacation.
EDIT: I remember, an example of this actually came up a while ago on HN. An Apple employee had to return a device unwiped, due to legal discovery, but the device had intimate pictures on it[1]. Oops! Don't do that, people.
1: https://news.ycombinator.com/item?id=28241917
proof?
> Turns out people actually don't really care about privacy at work
lol, won't ask for proof, because it's trivially falsifiable
(yes that's a real story from my career, and the company was 100+ employees at the time)
TBH at this point I don't believe you are a real person.
The fact that you think people who don't share your opinion on mixing work and personal stuff are somehow not "real" is part of the problem.
I say semi-official because someone asked the question at a Q&A training thing with IT, and that was the IT manager's response.
You can see the EU's guide here: https://www.edps.europa.eu/data-protection/data-protection/r...
> Limited private use of these tools is often permitted, generating a level of expectation by employees for privacy: employers should not routinely read employee' emails or check what they are looking at on the internet.
Maximizing shareholder value by observing you doing your job, in pursuit of replacing you with a very small shell script, is a great reason they've just discovered.
Get your own laptop, pay for your own cellphone, use your own internet service, etc. If you create anything of value on their property or with their property or during times they're paying you in any capacity, expect them to use it for profit.
Where I grew up you do have legal right and social expectation not to be under surveillance at work. You even have an expectation of privacy in public spaces - I know this is not the case in other countries, but I accept/know that and it would be senseless to imply this is expected everywhere.
I mean I have my own laptop and phone, why would I use a work device for that stuff?
Because you're traveling for work, and carrying two separate laptops eats into your limited baggage size/weight. Things are marginally better now that everything uses the same standard charger, but not much.
Per TFA, this thing is literally taking screenshots of what is on the employee's screen. At work my screen sometimes had things such as: performance data on other employees, my own PII from HR systems, PII from customers, password managers, etc. It's also logging keystrokes. How many times a day do you type a password?
Collecting that kind of information on purpose is truly wild. Imagine the security safeguards you would need just to prevent it from leaking. Wait what, they're explicitly collecting it to train LLMs with it? God help us all.
The ones on the ‘inside’ are doing it 500% of the time, I’m sure.
100% of the people at my company use their computer for personal tasks, and this is permissible under our policies. Our company is fully BYOD and owns zero computers, and zero cell phones.
There's quite a difference between that and zero privacy, and there's also quite a difference between "IT policy says" or "the law permits" and "this is how things ought to be".
That said, between necessary endpoint security and the potential to get caught up in corporate legal disputes I feel like maintaining a strict separation is advisable. But that doesn't mean I support unnecessarily invasive surveillance or think it's a good thing.
A bogus argument, methinks. Consider that the company also owns the phones, but can they, or do they, listen to every phone call?
https://www.fbi.gov/video-repository/think-before-you-post-p...
https://www.aclu.org/news/free-speech/fbi-can-neither-confir...
https://www.democracynow.org/2025/10/2/headlines/trump_direc...
https://www.levernews.com/are-you-on-the-fbis-new-watch-list...
https://www.latimes.com/politics/story/2025-12-11/justice-de...
> Under settled Board law, numerous practices employers may engage in using new surveillance and management technologies are already unlawful. In cases involving employer observation of open protected concerted activity and public union activity like picketing or handbilling, the Board has recognized that “pictorial recordkeeping tends to create fear among employees of future reprisals.”10 The Board accordingly balances an employer’s justification for surveillance “against the tendency of that conduct to interfere with employees’ right to engage in concerted activity.”11 In that context, “the Board has long held that absent proper justification, the photographing of employees engaged in protected concerted activities violates the Act because it has a tendency to intimidate.”12
https://www.nlrb.gov/news-outreach/news-story/nlrb-general-c...
I know you’re in India, but in the US, could this not be considered intellectual property theft, or a violation of the “right of publicity”? Your persona and working style are among the core values you bring to market; building a simulacrum of that is not something I expect to be part of the “your output is the company’s IP” clause in an existing contract.
I will give a company the right to try to reproduce my output. But my very likeness and modus operandi? No.
You don't need to "give" them anything -- they already have everything they need due to basically anything you do, especially at work, especially while using company equipment, being legally considered "works made for hire" https://www.copyright.gov/title17/92chap1.html + https://www.copyright.gov/circs/circ30.pdf
Here's how a refusal to them doing whatever they think would maximize shareholder value with any of your output or data they collect from your company computer would actually go down: the company would do something you didn't like, you'd try to complain about it, HR would listen and document everything. In the best-possible case, they'd let you personally opt out. More likely, since you're likely very easy to replace in their minds, they'd refer you to their data privacy clauses in their acceptable usage policy section of the employee handbook, maybe reference the notice sent out to everyone about how they're doing this, then fire you for performance reasons a few months later. You'd be given an NDA and a very average severance, then you could choose to try to hire a lawyer (who would take at least a third of any pre-tax settlement amount) and fight them, in which case they'd settle for more or less the same as the severance package (and keep in mind both that and any court settlement are both taxable income, so you're not getting a windfall in any case), or you'd just sign the NDA and take the severance with no admission of wrongdoing on their part and no legal recourse.
Large companies employ entire orgs of lawyers who specialize in these matters, and it is literally their job to protect the company, not the employees, from lawsuits like this. Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course. Welcome to America, land of the free for corporations which are legally people, just ones with infinite lives who cannot be arrested / imprisoned but can make legal decisions but cannot be subpoenaed. See eg https://www.theverge.com/policy/886348/meta-glasses-ice-doxx... for how the C-suite thinks about this type of thing.
Follow eg https://www.aclu.org/press-releases/aclu-and-75-organization... to see what actually happens.
More on how "work for hire" applies in a legal sense:
https://www.brookskushman.com/insights/innovations-at-work-w...
https://outsidegc.com/blog/common-misconceptions-about-the-w...
https://www.law.cornell.edu/wex/work_made_for_hire
https://crownllp.com/blog/what-is-a-work-for-hire/
I am aware of "how the C-Suite thinks about this type of thing", but this is also a good example to surface here of what to redline in future employment contracts. Yes, that will likely shut you out of a lot of places, but the opposite is beyond learned helplessness: it is capitulation to a future that will not end well for the tech worker.
>I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
Okay, now this sounds like satire. But I suppose that's the way the world is going.
There remains a thing called human dignity.
If a company can't trust the people it hires, that's a fault in the hiring process, not the employees.
I can see an argument for companies not trusting their employees, because most employees harbor borderline corrupt thinking in their workplace and have terrible work ethics. Of course, all of this is brought on by corporate culture, so it's their fault in the first place, but I'm not exactly sure what started where.
Like that "Scott is an asswipe who never agrees to any idea that isn't his" or what?
This is exactly what they're doing, and they aren't the only ones.
"this computer is property of WORK CORP, you have no expectation of privacy on this computer"
If you want privacy use a personal device....
Maybe because they're aware that complaining about the boss is protected by law (in the United States and many other countries).
When I joined the workforce a long time ago, I went in with the mindset that: Their property, their equipment, their right to monitor (or even keylog).
I was pleasantly surprised to find that not to be the case, but I've always believed in their right to do so.
Why do people expect to have a right to do non-work related stuff on the job? Every company I've worked for states in the employment contract/policies what you can and cannot do on the job. They never enforce it to the extent that they outline in the policies, but it's usually clear cut.
If you want to rant about the company, do it outside the company! Or at a physical water cooler. When coworkers want to rant to me about the company, they don't use Slack/Teams. They message my personal, non-work number.
In the late 2010s/pre-covid it was very common for employees to port their personal cell phone number to their work phone and just not have a personal cell phone. The internal culture at the company was remarkably open for their size.
That all went away by the time I left in 2022, and from what I've heard it has only accelerated into an employee-hostile environment. I'm not shocked at this move.
From my perspective a lot of it was downstream of over-hiring in the post-pandemic frenzy. It's hard to maintain that culture while doing large layoffs, and there's no incentive for them to do so beyond the longer term reality that many of their best employees have left and they're increasingly seen as a place to earn a top paycheck in between layoffs.
If humans are the point, this also goes for keeping work environments humane.
*this is just an observation, not a normative claim
That's a bit self-aggrandizing - especially for Software engineers.
I don't mean to say that this software was the only means of doing either of these things, of course. But we do make tools that people use regularly when living their lives. Sometimes it's just about being reliable or not getting in the way. The modern equivalent of flintstones and sharing stories around the fire.
It's about taking your work seriously - the qualities of what we make matter - and feeling some sense of purpose. And knowing who you're doing it for. I don't think that's being self-important.
Why do you renounce your right to privacy so easily? You are an employee, not a slave; sometimes I have the feeling that Americans do not know the difference.
> If you want to rant about the company, do it outside the company!
You have a right to organize inside the company, and for that the most efficient way is internal company communications. Communications with the purpose of unionizing should be private, and a company that accesses them should be punished; if needed, C-level executives should go to prison for their crimes.
How do you organize otherwise? How do you contact your colleagues about grievances about the company?
It is mind blowing to see this capitulation on personal rights. It seems that corporate rights are more important than anything else in the USA. It is a pure dystopia.
Governments, corporations and any other organizations should all exist FOR the people, not the other way around.
American-style capitalism truly is a disease.
If that's something he can't handle, he might have a problem with personal accountability.
Either way, when it comes to ranting about the company: many workplaces don't have a water cooler where all your teammates congregate (e.g. remote teams or different offices). Also, what, you'll rant about confidential work projects over non-work texts?
Like use the restroom? Personally, I'm not a slave. I am getting more and more used to the idea of having to push back on those who do exhibit such a mentality. Y'all are beginning to become a threat to the rest of us.
More substantively: I would like the employer/employee transaction to be one of buying/selling labor. To me, training AI on keystrokes nudges the deal towards selling one's "soul", alongside other dystopian tropes like brain implants and work toilets that analyze excretions.
You are correct that employers own the laptops and can install anything they want, which is why I never do anything other than work there - the farthest I will go is participate in employer-hosted shitpost groups/channels, which are not anonymous, and they are free to train their models on that.
1. https://www.youtube.com/watch?v=S1980WfKC0o
I guess you never talk to coworkers about your weekend. That's on the job. I see you mention the water cooler; how dare you talk there?
If they just want to monitor your computer for the purposes of productivity tracking, that is in their right, imo - just a shitty thing to do.
On the other hand, I would be looking for another job if they had keyloggers or were taking screenshots, even if they never said anything about me shopping on Amazon or randomly browsing Hacker News or any website that wasn't gaming or Netflix during work hours.
Heck, I used to travel a lot more for business, and I used my work laptop for Netflix and other streaming services in the hotel.
As long as I’m meeting performance standards it shouldn’t matter.
Not that I support it -- but typically companies don't do this in spite of security concerns, they do it to address security concerns. But of course, what meta is doing sounds like a different situation. It sounds like they want to make a model that replaces part of their workforce.
And when all of the above happens Meta will be absolved of any responsibility.
I don't understand how it's legal either. I guess we need laws against it yesterday.
Meta already has literally billions of people's personal profiles and browsing history.
I don't think screenshots of their SWE's IDEs is going to be useful for identifying internet users.
I do agree screenshots themselves are less useful for that.
1. Why use their employees' data to fingerprint input? They could do that to a billion-plus of their users instead.
2. Input fingerprinting is multi-decades old science, there are already production products that do this.
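For context, a minimal sketch of the kind of features keystroke-dynamics systems have relied on for decades (the event data and function name below are invented for illustration): dwell time, how long a key is held, and flight time, the gap between releasing one key and pressing the next.

```python
# Toy illustration of classic keystroke-dynamics features. All event
# data below is made up; each event is (key, press_ms, release_ms).

def keystroke_features(events):
    """Return (dwell_times, flight_times) in milliseconds."""
    # Dwell time: how long each key was held down.
    dwells = [release - press for _key, press, release in events]
    # Flight time: next key's press minus this key's release.
    flights = [
        events[i + 1][1] - events[i][2]
        for i in range(len(events) - 1)
    ]
    return dwells, flights

session = [("h", 0, 80), ("i", 130, 200), ("!", 260, 340)]
dwells, flights = keystroke_features(session)
print(dwells)   # [80, 70, 80]
print(flights)  # [50, 60]
```

Real systems compare the distributions of these timings across sessions; even this crude pair of features is reportedly enough to distinguish many typists.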
The cat is out of the bag, but that doesn't mean it's a non-issue.
EDIT: While we are here, let's do this for politicians as well :), publicly available, auditable 24-hour surveillance.
Politicians will be the first to carve out exceptions for themselves for reasons of "security" while everyone else is surveilled.
Yes, it should literally be the opposite -- with power should come accountability. But that's not how these things work in practice.
Well good thing we can just not vote for anyone and/or remove anyone who tries to take this stance. It's not like they are appointed by God.
The legal environment is the only way to baseline behavior. In countries with strong worker's rights, you generally don't have to fight much to make use of them; it's the norm for management, too. Likewise, the US-style norm of having no expectations toward your employer and the "stay in your lane" type takes rampant in the thread are also symptoms of the environment and its norms.
This sounds unironically a good idea.
Extremely common with divorce attorneys - and labor law.
Good luck getting a lawyer to sue another lawyer either.
When I bought my home, I had a purchase agreement that said "I will pay up to $1,500 cash if the property assesses for less than X" (X being the amount I told the realtor I was willing to pay).
And the property happened to assess EXACTLY for X.
Collusion in markets is nothing new, and even when we regulate people find ways around it.
It is very telling especially in light of the Palantir manifesto, that all of this technology is being applied against individuals instead of towards ensuring business compliance.
Here (UK) we do have a bit of variety, thanks to devolution and bloody mindedness. I'm talking about English here (possibly Welsh too), rather than British (England + Wales + Scotland) or even UK (England + Wales + Northern Ireland). Wales is actually a bit more complicated than that but let's keep it simple.
Here (England), you advertise a house price and invite buyers. You generally engage one or more estate agents (realtors). I think it is called an "invitation to treat" in legal terms.
... negotiations ...
Once a price is "agreed", contracts are drawn up by both sides and "exchanged". When the exchanged contracts are both accepted, then the contract is binding on both sides. Basically: the Buyer will Buy and the Seller will Sell etc.
I think the US is fairly similar in that you do have to agree to something before it becomes a binding agreement.
The goal is to manufacture a lack of empathy along the lines of: "why should I treat this person better than I was treated".
And you expect Meta employees, of all people, to believe this?
Palantir builds these systems for the US government which is (hopefully) something you can hold accountable / can reasonably trust.
Meta builds these systems for itself to make digital cocaine and sell personal data to profit off everyone (including and moreso primarily the elderly and children). You can't hold them accountable, actually pretty much nobody can hold Zuckerberg accountable.
When Palantir helps USG spy on the planet the primary purpose is defeat enemies + protect assets. When Meta builds these systems the primary purpose is digital cocaine.
I think it takes about the same amount of suspended disbelief to say that, as it takes a Facebook employee to believe the primary purpose of targeted ads is to connect customers and businesses.
Either way, I'd definitely hold those directly responsible for collecting and selling the data far more culpable than those who just make use of a product. It's like the war on drugs, where the makers say they'll keep making it as long as there are people wanting to buy.
Neither. Palantir makes data management software, they've never been in the business of collecting or analysing data themselves at all. There's generally a fundamental misunderstanding online of what Palantir actually does.
Any time you see an article or comment saying something along the lines of "Palantir is stealing your data", consider if it makes sense when you replace Palantir with MySQL, if it doesn't then it's generally safe to assume that article is garbage.
There are plenty of legitimate reasons to have grievances with Palantir, but they're completely drowned out by nonsense.
This is rather naive. Palantir makes politics by creating and funding a SuperPAC to discredit a former employee who happens to support the RAISE act.
Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money is going toward attack ads against Alex Bores – even though Bores himself used to work for Palantir.
https://youtu.be/znKb71kLG5c?si=5Q9B88bXaGCkgebN
They even have a political manifesto, a thing that a private company dedicated to data analytics should definitely not have:
https://gizmodo.com/alex-karps-supervillain-manifesto-is-put...
You’d be surprised how few people actually buy into the corporate culture at these companies. It’s just to get paid because everyone needs a job to pay their expenses.
You want to solve this then lower the cost of housing.
- the project was time constrained, so hardly any time, and
- there were serious ethical questions which could never be addressed well within the allotted time for this project
So we ended up discarding the idea of collecting data from a representative group, even before we got to the point of asking "how do you handle that ethically". We ended up collecting data from one subject: the student in question, indeed. He handled the data, from which he derived heuristics that simulated it. The collected data therefore never left the student's hands.
<sarcasm>Silly us, we should have just not bothered and collected it from anyone and anywhere. Apparently.</sarcasm>
In all seriousness, this callous and complete disregard for ethical questions offends me so very much.
I realize you can argue whatever is done at work should have no expectation of privacy, and I get that, but as an employer myself I've always felt that schemes like keyboard and mouse tracking are going a chasm too far. Your employees are human beings not robots. In the older context of corporate productivity tracking there are far better metrics available - starting with, I don't know, maybe talking to your employee and asking them how things are going.
I wouldn't have a problem if it were opt-in, but if this were foisted upon me I would surely quit.
Then they’ll deploy models trained on this, and begin capturing employees using AIs that are good at using AIs to do work.
Repeat a few times and they'll start capturing the keystrokes from people mashing their heads into keyboards with despair and exclaiming, "Why can't these models do anything anymore!!"
(I work at Meta)
It’s only once the business is having a cash crunch or will no longer need to hire competitive candidates that they start letting people go without severance.
Tell that to Elon Musk and Twitter employees.
This will also give them data on which employees aren't using AI enough, and then they'll be PIP'd or let go.
These models already have the skills that humans were using them for, so either by training the models to use subagents or simply inlining the work done by the AI, you have a much easier time training the model to perform tasks from a human-distribution. The humans have done the work of making the human-distribution look more like an AI distribution.
If it is as you say, then eventually the house of cards will crumble. Then we can finally go back to work and quit being inundated with needing to use AI for everything.
if they continue to share their work through open releases despite the leadership change, i hope we get to benefit from their work.
not quite optimistic about the result, as i wonder whether, on aggregate, we all consistently interact with computers in the most efficient way possible. maybe useful to beat captcha or scraper detection through mimicry, perhaps.
I hope this is widely hacked. If these employees are any good, someone will whip up a countermeasure that feeds absurdly wild and nonsensical data into Meta's fetid, gaping maw.
We’ve been moving towards a more and more tyrannical company controlled society for a long time and now they’re straight up doing hacking tactics to train machines to take our jobs. Doesn’t get much more bleak than that.
The presentation of the video and all the comments were on awesome cool ego-centric video understanding research that’s going to totally obsolesce human labor. I couldn’t get over how grim the video was. Here are some people in one of the least desirable positions in the world, and that’s not enough. Now they must labor without a shred of dignity, knowing they’re training their own replacements and likely not a thing they can do about it.
I’ve struggled to find enough freelance work to stay busy recently, but more than that I’m starting to feel a moral crisis. It’s getting harder and harder for me to feel like what we’re collectively doing isn’t absolutely fucked.
blink twice if you need help
Meta does require you to have a Facebook account. The expectation is that it is your personal fb that you use regularly. However, it doesn’t need to be. You can create a new fb account with a new gmail account and that’s fine. That’s what I did and some others do as well.
That said, 90%+ of employees end up using their real personal account because the language they use makes it seem like you couldn’t do what I described.
Also people use their work accounts and laptops to read their w2 and other sensitive info.
That said -- social media websites were later removed from the "work-related" list. So there was at least some recognition it was overreach and did not match the stated justification.
You can browse personal accounts from your phone.
I’m surprised this needs to be said out loud.
You don’t even need any to do something fancy in software. Could just be correlating mobile device presence with work laptop activity. Can triangulate physical location with a handful of Bluetooth or WiFi beacons.
unless you're in a jurisdiction that has anti-surveillance workplace laws, which, if you don't, you should probably think about before Mark Zuckerberg gets the idea to monitor your body temperature from below the waistline
- getting paid half the salary (EU)
I know which one most people pick.
Dogtraining? Dogwalking? Dogfeeding?
https://en.wikipedia.org/wiki/Dog_fighting
This is like going to work in a drug-lab where everyone is required to strip naked to ensure no "product" can be smuggled out. It's a zero trust environment at first blush, with the added terror of it being used to replace you with AI.
People working naked in a drug lab have more job security than meta employees and an equivalent level of respect and trust from their employer. However, they can't unionize because they have no legal protections. Their employer could literally point a gun at them if they complained. That isn't the case for Meta employees. Just sayin'.
Half my workday is me browsing random tabs while an AI agent does the actual work. They're going to train a model on alt-tabbing and scrolling HN/Twitter/Reddit.
Someone had to do it, distasteful though it may be. Could be quite hilarious what it learns in the process.
Meanwhile, nobody seems focused on capturing CEO’s data for AI training.
Imagine in 300 years we are still ruled by zuck, ellison, bezos, musk, thiel, et al, just in ai model form empowered by estates worth more than entire nations and legal protections designed to outlast heat death of the universe. Assuming there is still a "we" living on earth. Charitable assumption I guess.
They don't even understand what these people do.
It is delusion and lies all around.
“ The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”
i've heard it described that evil is that which believes itself to be good without exception. i think i'm starting to agree...
As far as I understand, there is plenty of research in disciplines ranging from social studies through psychology to game theory and economics, as well as informal simulations, that strongly suggests human interactions are positive for participants pretty much if and only if those interactions are repeated, which realistically only occurs if participants are circumstantially close already: same neighborhood, same job, family, friends, same school, etc.
One-off interactions are almost invariably toxic with at least one of the participants getting cheated, bullied, or otherwise harmed.
So the whole premise of connecting people unconditionally, including anonymously, automatically, and from opposite sides of the world is inherently broken and doomed to do a lot of damage.
So even Meta's self-proclaimed mission is damaging to society if followed. What, then, could be expected from what they actually do, given the combination of basic facts: the primary purpose of any business is to make money, Meta has a notoriously evident disregard for ethics, they are an advertisement business and entertainment provider, they are deep into enshittification and market saturation, and of course there are actual honest mistakes to boot.
I will say that I feel for the folks who work at Meta... I can't help but feel they have long since jumped the shark.
Being a terminal application, all interaction is trainable signal (unlike, say, Cursor, which is an IDE and lets users freely explore, edit files, and move the mouse; the model sees none of it, so there is nothing to train on).
So Meta is doing the obvious: they want to train a computer-use model, so they need training data. Better to capture it from employees than to buy low-quality data.
They 'trust me'. Dumb f*ks.
Technofascism.
Fixed it.
https://marshallbrain.com/manna1
They don’t add anything beneficial to society. They exist to sell ads.
Also, why are the investors not suing the legs off of Zuck for the whole metaverse debacle? It is a scam and pure fraud. Also a dumb name; sue for that too. Should have just renamed it meeme.
the signal is every time a human has to grab the wheel. that's a label for what the agent still misses.
Seems like a strange approach in general. I'd have assumed you'd just have it use accessibility features to get at things, if there is no other interface.
[dupe] https://news.ycombinator.com/item?id=47851086
Sure, you can do everything a human can, but it also seems VERY inefficient
As an alternative, maybe you could just do network in/out?
The computer UI is the way it is because that is optimal for humans. If your plan is to replace humans, why not replace the whole stack, OS and all, with something these models already know how to use?
[0] https://zed.dev/blog/acp-registry
Does the executive know better at this point but has toasted the culture, so that no one can fight against it anymore?
There is also this effect:
- CEO says "the lights are a bit dim in here"
- that turns into "We need to change all of the lightbulbs in here immediately!"
(this is especially true in firms where the CEO cares a lot about being proactive).
Two great posts/stories about this:
1. This post about smart employees "reading their managers minds": https://yosefk.com/blog/people-can-read-their-managers-mind....
2. In Michael Crichton's book Disclosure there is a great line: "Why did you dress casually instead of wearing a suit? Is it b/c you wanted to do that or b/c the CEO did it and you wanted to show you were part of the team??"
[1] https://x.com/TechEmails
What does this link tell you? https://www.thedailybeast.com/facebooks-sheryl-sandberg-told...
If you then think of crazy companies such as Palantir, something really has to be done about those entities. As a first step I suggest disbanding those companies, for many reasons, including wrong ethics.
I couldn't imagine life without my unique keystrokes and mouse movements.
Some call it museumverse.
Btw, do they at least pay them extra for this spying, or is it supposed to be free? I mean, if they paid at least 30-50% on top of the salary, maybe I wouldn't mind doing it on a dedicated Meta computer.
Always thought Meta was a god-awful-run company, and this just takes the cake.
Really though it seems reasonable to me. They want data to train AI, and their employees are obviously a large source.
They could already track your every click. They have root on your work MacBook. Most employers do.
I...admire the diligence
More proof that they do not care about you at all. This is Meta's way of moving fast and destroying everything at all costs.
Horseshit.
1. Employees are being asked to train AI to replace them.
2. Performance assessments will 100% be impacted. No question.
Thinking back on the OTT interview experience that Facebook helped pioneer, imagine making it through that, getting paid a massive sum of money BUT barely getting by on it because of the location, then they drop this crap on you?
Big Brother is always watching.
Optimizing ourselves to death.
Capitalism is asleep at the wheel with its foot stuck on the gas pedal.
I know you've long been hypnotized by libertarianism and the cult of the individual.
Maybe it's time you reconsider in light of the overwhelming evidence that the capitalist class is, in fact, not your friend.
The only known way for workers to assert their rights is collective action. Alone, you are weak and replaceable. Together, we are strong.
It's time for a proper tech worker's union, to give us some fangs to claw back our dignity with.
> The tool will run on a list of work-related apps and websites and will also take occasional snapshots of the content on employees’ screens for context, according to one memo, posted by a staff AI research scientist on Tuesday in a dedicated internal channel for the company's model-building Meta SuperIntelligence Labs team.
ALL YOUR DATA IS BELONG TO US
¯\_(ツ)_/¯
Since this is a serious website: I'd be genuinely curious how mouse velocity and trajectories differ between cultural and environmental settings (apart from hardware, that's boring and should be normalized).
There was a time when studies made headlines that were exactly about the relationship between mouse movement, typing etc, and psychiatric disorders as well as physical health.
Obviously, both are related.
If you ask me, ad tech would probably be able to tell your religious denomination using this data, once there's enough of it...
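As a toy sketch of the kind of feature such studies extract (the timestamps and coordinates below are made up): per-segment pointer speed from a sampled mouse trace.

```python
import math

def mouse_speeds(samples):
    """Per-segment pointer speed (pixels/ms) from (t_ms, x, y) samples."""
    return [
        math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:])
    ]

# A made-up three-sample trace: two 50 ms segments.
trace = [(0, 0, 0), (50, 30, 40), (100, 30, 100)]
print(mouse_speeds(trace))  # [1.0, 1.2]
```

From a stream of such speeds (plus curvature, pauses, and acceleration), research has built profiles correlated with identity and even health; normalizing away hardware differences is the boring-but-necessary first step.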
If they captured display output as well, it could be a very useful dataset for generalized computer use.
I was curious about this claim and I dug up this article from 2024. https://www.forbes.com/advisor/business/software/internet-su...
It's an employee survey so it's not resistant to claims that the number is higher than people know. But I think saying "on all the computers you're given" is an exaggeration at best.
I did think it was interesting that "One in three [employees] have had activity from their employer’s online surveillance used in their performance reviews."
Sounds like if you're being surveilled by your employer there is a good chance you know about it.
I've never experienced anything like that, so it's sort of a window into another world from my perspective.
Now imagine a society where your individual daily actions are recorded, reviewed and helpfully advised upon.
Millions of people making millions of actions each day, all recorded, compared, and sifted for positive feedback and overall improvement.
Just how far ahead would such a society pull compared to one that stays at today’s level. Compared to one that used totalitarian methods enabled by such surveillance?
The difference between Soviet and Western Europe was not the tech, it was the trust.
If we can build a society with full trust, then this tech will turbocharge us.
If …