2 mistakes:

(1) Most AI demand is during training, which isn't done just once: future AI models will train some every single night to learn from that day's tasks. Training can be interrupted, and you can "store" the learned state as model weights (see the checkpointing sketch after this list).

(2) A lot of the interconnect queue is for renewable capacity, which only runs 1/4 to 1/6 of the time. So, in terms of actual output, the queue amounts to 50 percent or less of the current grid (see the capacity-factor arithmetic below).
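On the "store the weights" point in (1): in practice that's checkpointing. Here's a minimal sketch of interrupting and resuming a run, assuming PyTorch; the model, optimizer, file name, and step count are all placeholders:

```python
import torch

# Placeholder model/optimizer; any torch.nn.Module works the same way.
model = torch.nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Interrupt: persist everything needed to resume, not just the weights.
torch.save(
    {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "step": 42_000,  # placeholder training step
    },
    "checkpoint.pt",
)

# Resume later, when power (or cheap power) is available again.
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
step = ckpt["step"]
```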
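And on (2): "runs 1/4 to 1/6 of the time" is a capacity factor. A quick back-of-envelope; the 10 GW queue figure is made up purely for illustration:

```python
# Capacity factor: fraction of nameplate capacity actually delivered on
# average. Solar is often around 1/6, onshore wind around 1/4 to 1/3.
nameplate_gw = 10.0      # hypothetical renewable capacity in the queue
capacity_factor = 0.25   # wind-like: produces ~1/4 of nameplate on average

average_output_gw = nameplate_gw * capacity_factor
print(f"{nameplate_gw} GW nameplate -> {average_output_gw} GW average output")
# 10.0 GW nameplate -> 2.5 GW average output
```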
1a) Every time you ask the AI a question, it requires a small bit of heavy processing. Multiply that by the growing number of queries and you'll see the use of AI far outdistancing the training (rough numbers after this reply).

1b) Storing a checkpoint and then continuing is expensive. It can be done, but it's a lot cheaper to run 24/7.

2) You're correct that the capacity in the interconnect queue will only be needed at times. The problem is that when it is needed, a very large capacity is needed.
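To put rough numbers on 1a): every figure below is a hypothetical placeholder, not a measured one, but it shows how query volume swamps a one-time training cost:

```python
# Hypothetical illustration of 1a): per-query energy is tiny, but the
# aggregate of inference quickly exceeds a one-off training run.
training_kwh = 1_000_000          # assumed one-time training energy
energy_per_query_wh = 0.3         # assumed energy per inference query
queries_per_day = 1_000_000_000   # assumed daily query volume

inference_kwh_per_day = energy_per_query_wh * queries_per_day / 1000
days_to_match_training = training_kwh / inference_kwh_per_day
print(f"Inference: {inference_kwh_per_day:,.0f} kWh/day")
print(f"Matches the training cost in {days_to_match_training:.1f} days")
# Inference: 300,000 kWh/day
# Matches the training cost in 3.3 days
```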
There is more to consider here. I've long said there are four fundamental components of the universe: time, energy, matter, and information, and they can be converted into one another. In computing, energy is transformed into information. And information can save tremendous amounts of energy, time, and matter. It's a good trade-off, but we need the energy up front.
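That "energy into information" conversion even has a hard physical floor: Landauer's principle says erasing one bit costs at least k_B * T * ln(2) of energy. A quick check of that minimum at room temperature:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature, K

e_min_joules = k_B * T * math.log(2)
print(f"Minimum energy per erased bit at {T} K: {e_min_joules:.3e} J")
# ~2.871e-21 J; real hardware spends many orders of magnitude more.
```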
Think of an AI designing a better turbine for a power plant: we pay the energy for the computation and the matter to build the computer, but we gain time (the design comes much faster) and energy (the turbine becomes more efficient).
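To see that trade in numbers, everything below is a made-up illustration of the up-front-energy argument, not real plant data:

```python
# Hypothetical payback for the turbine example: spend compute energy once,
# recover it through a small efficiency gain over the plant's operation.
design_compute_kwh = 5_000_000     # assumed energy spent on AI design runs
plant_output_gwh_per_year = 4_000  # assumed annual generation
efficiency_gain = 0.005            # assumed 0.5% more output from same fuel

extra_kwh_per_year = plant_output_gwh_per_year * 1e6 * efficiency_gain
payback_days = design_compute_kwh / extra_kwh_per_year * 365
print(f"Gain: {extra_kwh_per_year:,.0f} kWh/year; payback in {payback_days:.0f} days")
# Gain: 20,000,000 kWh/year; payback in 91 days
```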
AI will pay off, but it needs a huge up-front investment.
All computation requires energy, and heavy-duty AI computation requires more energy.
This is, by the way, the true market for SMRs (small modular reactors).
Electricity is a service (kWh delivered on demand, when you need them), not a product (kWh you can stockpile).
So is computing.
"They get limited AI compute because we haven’t built enough power."
Sorry, we have to shut down your data center because the wind isn't blowing today.