Originally posted on LinkedIn
AI could be the most dangerous tech since the nuclear bomb… and spoiler - it doesn’t look good ☠️
Yesterday The New Yorker published an incredible piece going VERY deep into Sam Altman, his career, and the interactions he’s had with many people along the way.
Bullets:
- Pathological liar - apparently even his ex-partner compiled 70 pages documenting how he lied about everything, small things and big.
- At Loopt (his first company), at YC, and at OpenAI alike, employees and stakeholders asked the boards to fire him because of his behavior.
- A bunch of board members, acquaintances, and employees sharing their reservations about him and his behavior.
- Details about how he strayed from all his promises of building AI for the good of humanity.
- How he’s cultivated relationships with mega-magnates from the Middle East (who gift him $20MM cars) to secure investment.
- How he went from Democrat to Republican, how he became close to Trump, and how he landed the deal to provide AI to the Pentagon for military use.
- A bunch more. It’s super long - but worth it.
▶️ Here’s the link to the HackerNews discussion where I found it in the comments (and there you’ll find the link to the official article and unofficial mirrors).
Coming from The New Yorker, I’m very, very confident it’s all been verified and fact-checked.
I feel this “grow at all costs” obsession always ends the same way - sacrificing everything in the name of money and power. What happens in the AI race if people like Altman have their finger on the button?
Shouldn’t we entrepreneurs be guided by ethics beyond money and growth? I believe you can strike a balance between mission, growth, and profit.
Unfortunately, the pressure to grow often trumps whatever mission existed. Sometimes unconsciously… other times there never was any mission beyond power, recognition, and money.
What do you all think?