Did the article intentionally start with an LLM cliche to filter out all the people who hate reading obviously generated content? I would say it worked.
To answer a few people at once: I did mention compensation as a factor in the post, but I didn't elaborate on the details, so it was easy to miss. Comp is important of course, but so are the other factors. It feels like I can't go for a day without reading about the cost of AI datacenters in the news, and I can do something about it.
Again, many comments here say I only care about the money, and while comp is an important factor, that characterizes me as someone I'm not and ignores what I've been doing for the past two decades. I've spent thousands of hours of my life writing textbooks for roughly minimum wage, as I want to help others like me (I came from nothing, with no access to tech meetups or conferences, and books were the gateway to a better job). I've published technologies as open source that have allowed others to make millions and are the basis for many startups. I'm also helping pioneer remote work and hoping to set a good example for others to follow (as I've published about before). So I think I'm well known for caring about a lot of things over the past couple of decades.
It's okay to want to make money. You don't have to justify it this hard unless you want people to believe that you don't think comp is important, which is a bit sus to be fair.
The issue is that you're doing a lot, but not saving the planet.
What do you think is happening with the efficiency gains? You're making rich people richer and helping AI become an integral (i.e. positive ROI from a business perspective) part of our lives. And that's perfectly fine if it aligns with your philosophy. It's not for quite a few others, and you not owning up to it leads to all kinds of negativity in the comments.
>What do you think is happening with the efficiency gains?
Could it happen that the efficiency gains decrease demand and thus postpone investment in, and development of, new and better energy sources? If one couldn't get by just by bringing in 20 trucks with gas turbines, maybe one would have invested in fusion development :)
Great to see you're in Sydney, Brendan, and let the haters hate.
You have done a brilliant job elevating your chosen specialty to the world, and encouraging and inspiring others in the industry for a long time - so you should be fairly compensated for that lofty position. I don't envy the late nights or very early mornings you have ahead of you on conference calls with SF, but good luck at OpenAI, mate!
I mean, I don't know you well, but I see your posts on here from time to time, and from what I gather you are very, very exceptional at what you do.
Reality is, these AI giants are here and they are using a massive amount of resources. Love them or hate them, that is where we are. Whether or not you accept the job with them, OpenAI is gonna OpenAI.
Given how much the detractors scream about resource use, you'd think they'd welcome the fact that someone of your calibre is going in and attempting to make a difference.
Which leads me to believe you're encountering a lot of projection from people who perhaps can't land the highest-comp roles, and who shield their egos by framing it as selling out, which they would of course never do.
It's probably impossible to prove I'm not projecting..
However. I am putting my curious foot forward here:
What were the toughest ethical quandaries you faced when deciding to join OpenAI?
To give a purely hypothetical example which is probably not relevant to your case: if I had to choose between DeepSeek and OpenAI, I think I would struggle with the openness of the weights. I hope there will be harder problems waiting for you than using flamegraphs to optimize GenAI porn.
https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...
Ignore the haters (who sadly have become extremely common on HN now).
I loved your work back when I was an IC, and I'm sure this is a common sentiment across the industry amongst those of us who started out systems-adjacent! I still refer to your BPF tools and Systems Performance books despite not having written professional code for years now.
Can't wait to read content similar to what you wrote at Netflix and Intel, albeit about the newer generation of GPUs and ASICs and the newer generation of performance problems!
It would be good if the performance improvements could be applied across the industry so everyone benefits. But it doesn't sound unbelievable that OpenAI may want to keep some of them secret to maintain an advantage over others.
Thanks for taking the risk in this environment and posting about your experience from a personal standpoint. [environment: people will come at you from all angles with very passionate opinions]
> I stood on the street after my haircut and let sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet.
Brendan.
First of all, congratulations on your new job. However,
It is easier to just tell everyone it is about the money, the compensation, and the stock options.
You're not joining a charity, or to save the planet; this company is about to unload on the public markets at an unfathomable $1TN valuation.
Don't insult your readers.
No, it never does. Those people somehow delude themselves into thinking it might, but...it might just work for us.
Interesting. Out of curiosity, how long do you think OpenAI can survive as a company? Put another way, what would be your guesses for probability of failure on 1yr, 3yr, and 5yr horizons?
EDIT: possibly a corollary--does Mia pay money for ChatGPT or use a free plan?
My wife was paying for ChatGPT before I joined. I didn't ask Mia. I probably have three months of hair growth before my next chance to ask.
The string "compens" appears exactly once in your post:
> But there are other factors to consider beyond a well-known product: what's my role, who am I doing it with, and what is the COMPENSation?
You did it for the money; don't try to rationalize it, because no one believes you. For that amount of cash, I'd probably jump on Altman's bubble for a year or two.
Wanna buy a bridge?
Humans are complex and have multiple sources of motivation. You don't know whether he took the offer with the highest pay. He's likely wealthy enough that he can pay less attention to his income and focus on his other sources of motivation if he wants to. That's not to say pay is not a factor in his choice, but it need not be the only or primary one. This is a luxury of the privileged for sure, which can make it difficult to relate to.
> ...it's not just about saving costs – it's about saving the planet
There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Probably because "making the world a better place" has been a trope used so much in the industry that it's made it to a TV show [1]. It's fine to be passionate about your job. It's fine to be paid well. You don't need to make us believe that you're Mother Teresa on top of it.
[1]: https://www.youtube.com/watch?v=B8C5sjjhsso
Reminds me of when I was younger and thought of companies like Google and Tesla as a force for good that will create and use technology to make people's lives better. Surely OpenAI and these LLM companies will change the world for the better, right? They wouldn't burn down our planet for short-term monetary gain, right?
I've learned over the years that I was naive and it's a coincidence if the tech giants make people's lives better. That's not their goal.
Right? Like what an incredibly naive thing to think, that BG is going to contain power consumption lmao. OpenAI is always going to run their hardware hot. If BG frees up compute, a new workload will just fill it.
Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?
Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.
I imagine there's a lot more to be gained than that via algorithmic improvements. But at least in the short term, the more you cut costs (and prices), the more usage will increase.
> I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations.
> My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren't easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won't be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.
OpenAI deserves these big shots.
Firstly, you would do well to read the guidelines about avoiding snark, and then actually say whatever it is you're trying to say rather than make insinuations. As it is, this response comes across as a very shallow read. It's hard to get to the root of what you're actually saying in your post, other than that it quotes two paragraphs about how it's not fun to push through the bureaucracy of a large organisation, which I would agree with. Probably most people who've worked at a big company would.
https://news.ycombinator.com/newsguidelines.html
So why does that make him a “big shot”? Are you perhaps envious of him?
Why does OpenAI deserve him or anyone? Hard to say.
Brendan, I'm a big fan of your book and your work.
I don't have a problem with you joining OpenAI; best of luck there!
However, I'm not sure your analysis is quite correct in this case.
If OpenAI can mobilize X (giga)dollars to buy Y amount of energy, your work there will not reduce X or Y; it will simply help them produce more "tokens" (or whatever "unit of AI") for a given amount of energy.
So in a sense you're helping make OpenAI's tools better and more effective, but it's not helping reduce resource usage.
https://en.wikipedia.org/wiki/Jevons_paradox
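To make the Jevons point concrete, here's a rough sketch (purely illustrative numbers and a constant-elasticity demand model of my own choosing, not anything from the post):

    # Jevons-paradox sketch: if demand for tokens is elastic enough, a 2x
    # efficiency gain (half the energy, and roughly half the cost, per token)
    # can increase total energy use rather than reduce it.
    def total_energy(energy_per_token, elasticity, base_tokens=1.0, base_energy_per_token=1.0):
        relative_cost = energy_per_token / base_energy_per_token  # cost tracks energy
        tokens = base_tokens * relative_cost ** (-elasticity)     # demand response
        return tokens * energy_per_token

    before = total_energy(1.0, elasticity=1.5)
    after = total_energy(0.5, elasticity=1.5)  # 2x more efficient
    print(before, after)  # after > before whenever elasticity > 1

With elasticity above 1, halving the cost per token more than doubles the tokens people buy, so the efficiency win gets swallowed by extra usage.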
How could she not know?
For people whose main computing devices are phones, this isn't hard to believe at all.
Interacting outside of the tech bubble is eye-opening. Conversely, the hair stylist might have mentioned the brand of a super popular scissor supplier/other equipment you'd have never heard of.
> She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
This seems rather sad. Is this really what AI is for?
And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
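For what it's worth, "a small local model" can be as modest as something like this sketch (the model name and prompt are examples I made up, not anything from the post; it assumes the transformers and torch packages are installed):

    # Tiny local chat model: plenty for "what might my friend be doing in
    # that city" style questions, no datacenter required.
    from transformers import pipeline

    chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
    out = chat("My friend is visiting Lisbon this week. What might she be doing there?",
               max_new_tokens=120, do_sample=True)
    print(out[0]["generated_text"])

Whether that's actually a good substitute for the full product is a separate question, but it illustrates the scale gap being pointed at.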
A commonly cited use case of LLMs is scheduling travel, so being able to pretend it's somebody somewhere else is for sure important to incentivize going somewhere!
Same. I have a lot of ideas I like to explore that people find boring or tedious. I used to just read, but it's pleasant to have the option to play with those thoughts more.
It’s super dope, and you can have it talk to people for you in the local language when you go there. I’ve busted it out to explain what I’m thinking for me. Watching travel shows on TV or reading travel magazines is sadder.
Or, you know, Signal/Matrix/WhatsApp/{your_preferred_chat_app}. If you're already texting things, might as well do that.
I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.
Asynchronous human to human communication is a pretty solved problem.
Brendan can do whatever he wants. He's that good. If anybody seriously needed to interview him 20+ times to figure it out, then the burden is now on them to not fuck it up.
He's summing interviews across all AI giants. But the ones about to IPO can interview someone almost infinitely many times, because everyone wants on the bandwagon.
I don't think that indicates that any one company interviewed him 20+ times.
You're in for a surprise, buddy.
Apparently, there's this guy who's really good at optimizing computer performance and makes a lot of money doing it. At the same time, he writes mediocre school essays that are actually a bit embarrassing. Guys, if you have the opportunity to land a very well-paid job, then do it. Take the money. Live your life. But please spare us the public self-castration.
If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training - not anonymized, not derived, not at all.
I think OpenAI will IPO at $1T. I don't want to say bubble, but it could be one of those super-hyped stocks that never goes anywhere after the IPO (e.g., Airbnb during Covid).
> There's so many interesting things to work on, things I have done before and things I haven't.
What are the things you haven’t done before, if you could mention them?
The problems are interesting and the pay is exceptional. Just fucking own it.
> I'd been missing that human connection
At OpenAI.
Just say you joined for the money, and that Intel's stock didn't do a 10,000x run like Nvidia's did and you completely missed it.
So the best chance at something like that again is OpenAI, when they achieve a $1TN valuation with AGI.