It’s been a whirlwind, to say the least – but it looks like the figure so central to ChatGPT is going to be back in the fold, instead of moving on to Microsoft, as we thought just yesterday.
We’ve had a sudden about-face, with the board rescinding its decision to fire Altman as CEO, and we’ve learned a few things in the process.
First of all, there’s the shakeup that happens when one person is so critical to an enterprise. Some of the journalists covering this series of events talk about mitigating key-person risk – making sure that if somebody is that mission-critical, the company has a plan to replace them if they leave.
Then there’s the controversy around Ilya Sutskever, the co-founder often seen as second in command. He was a member of the board that fired Altman, but almost immediately regretted his decision and signed a letter with hundreds of other employees demanding that Sam Altman be brought back.
That starts to explain why things happened the way they did. But it still doesn’t fully explain why people went off half-cocked instead of thinking things through first. That story will probably emerge later.
Anyway, I posted some of my thoughts yesterday around Altman’s temporary departure – rather than reprint them, you can click here to see that article, in which we interviewed Altman at an event where he brought inspiration to a crowd of startup enthusiasts.
As I mentioned before, a lot of Altman’s values come through in that talk – his sense of humility, and his diligence in getting AI to the point that it’s at now. He also endorses caution, albeit with some disclaimers, as you can see if you read the original piece. Rather than signing a letter, though, he suggests being careful in a comprehensive way: taking your time with new models, and working collaboratively on safety.
My takeaway was that Altman is valuable to the industry, and much needed wherever he goes. It turns out he’ll be back at OpenAI, which makes sense, given how hard he has worked on the project and how instrumental he has been in its success. Of course, we all want to pursue ethical and responsible AI, but recent events show us that sometimes, people following their impulses may be too hasty.