5 Comments

The problem is that the models *aren't* getting better, and the AI companies are still heavily subsidizing their use. The models are wildly expensive to run, and the only way to improve a generative AI model is to feed it more training data, which the companies already can't get enough of without stealing from people. Thus their growth and use don't scale.

In a vacuum, the tech is fine for generating boilerplate, but I don't think anyone is going to be willing to pay the true cost of an "AI" chatbot that's only sometimes right and costs more than the rest of your productivity subscriptions combined.


YouTube lost billions of dollars over the five years after it was acquired by Google. It took a long time to become profitable.

Basecamp didn't even have a way to bill customers when it launched its first paid offering; they just believed they would figure it out before the first payments came due.

That's the usual American startup strategy: don't wait until you know how to make money; learn on the way and stay on top. And if you have investor money, burn it to go faster.

The US government wants the country to keep the lead on AI because they think it's the main tool for dominating tomorrow's world, economically and militarily. So they will facilitate it.

You will see OpenAI bleed money for years and years to come. But given how cheap they have managed to make the o-series models compared to the beginning (API prices have been slashed!), and knowing that I and many others would pay twice the current price, the problem will eventually be solved.

I don't believe we will reach AGI in the next decade, but saying the models aren't getting better because they haven't in a year just shows we've gotten used to the wonders and miracles the tech industry has been pulling off repeatedly for so long.


I'm softening my stance a bit and starting to use LLMs more and more to help me when I have questions, but I'm still sceptical, to be honest.

As the saying goes, ‘practice makes perfect’. I find it hard to believe that the new generation of developers addicted to LLMs will still be able to think the way those of us who weren't born into this world do. They'll be too used to copying and pasting whatever the LLM says. On the other hand, if LLMs become more and more reliable, maybe there won't be any need to really think? Maybe that's the inevitable evolution?

In short, I'm both happy and unhappy with the evolution of LLMs. We'll see what the future brings. :D


If LLMs become super effective, then using them will be a form of compiling thoughts into code, abstracting the code away, in the same way memory management is abstracted away in Python/Ruby/PHP/JS.
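To make the memory-management analogy concrete, here's a minimal Python sketch (my own illustration, not from the comment above, with a made-up build_report helper): the programmer never calls malloc() or free(); the runtime handles the object's lifetime, the way a sufficiently good LLM might one day handle the code itself.

```python
# A minimal sketch of the abstraction analogy (hypothetical example):
# in C you would pair every malloc() with a free(); in Python,
# reference counting and the garbage collector manage each object's
# lifetime for you.

def build_report(lines):
    # The list below is allocated automatically by the runtime...
    cleaned = [line.strip() for line in lines]
    return "\n".join(cleaned)
    # ...and reclaimed automatically once `cleaned` is unreachable.
    # No free(), no leak to chase: the detail is abstracted away,
    # just as "super effective" LLMs might abstract away the code itself.

print(build_report(["hello ", " world"]))
```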

If they don't, new devs will eventually have to learn what's up, but they will be super effective at using LLMs for what they are good for.

Fusion research, the Starlink constellation, and the LHC show that there are still new engineers as smart as those of 80 years ago, before the computer existed.

LLMs may very well dilute the pool of devs by making programming accessible to less technical or experienced people, so it feels like your colleagues are not as good as they used to be. But I'm betting the absolute number of smart people becoming programmers will actually increase.


"If the LLMs become super effectives, then using them will be a form of compilation of thoughts into code, abstracting it away, in the same way memory management is abstracted in Python/Ruby/PHP/JS." ==> I sincerely hope it happens when I'm close to retirement. 😆

"But I'm betting the absolute number of smart people becoming programmers will actually increase." ==> I have major doubts on this point, but time will tell.
