AI Can Now Learn 100x Faster Without Wasting Energy

AI is consuming more energy than ever, with data centers struggling to keep up with demand. A breakthrough training method could change everything, slashing energy use while maintaining accuracy.

By shifting from traditional iterative training to a probability-based approach, researchers have found a way to optimize neural networks with far less computation. This innovation, inspired by dynamical systems found in nature, has the potential to make AI much greener—without sacrificing performance.

AI’s Growing Energy Appetite

AI technologies, including large language models (LLMs), have become an essential part of daily life. However, the computing power needed to support them comes from data centers that consume vast amounts of energy. In Germany alone, data centers used approximately 16 billion kilowatt-hours (kWh) of electricity in 2020—about 1% of the country’s total energy consumption. By 2025, this number is projected to rise to 22 billion kWh.

New Method: 100x Faster, Similar Accuracy

As AI applications grow more complex, their energy demands will continue to rise, particularly for training neural networks, which require enormous computational resources. To address this challenge, researchers have developed a new training method that is 100 times faster than conventional approaches while maintaining the same level of accuracy. This breakthrough has the potential to significantly reduce the energy required for AI training.

Neural networks, which power AI tasks like image recognition and language processing, are modeled after the human brain. They consist of interconnected nodes, or artificial neurons, that process information by assigning weighted values to input signals. When a certain threshold is reached, the signal is passed to the next layer of nodes.
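To make that description concrete, here is a minimal sketch of a single artificial neuron in Python with NumPy. The inputs, weights, bias, and threshold values are purely illustrative and not taken from the study; they only show how a weighted sum is compared against a threshold before a signal is passed on.

```python
import numpy as np

def neuron(inputs, weights, bias, threshold=0.0):
    """One artificial neuron: weight the input signals, sum them,
    and pass a signal on only if the sum exceeds a threshold."""
    weighted_sum = np.dot(weights, inputs) + bias
    return weighted_sum if weighted_sum > threshold else 0.0

# Illustrative values: three input signals feeding one neuron.
x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -0.3, 0.8])
print(neuron(x, w, bias=0.1))
```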

Training these networks is computationally intensive. Initially, parameter values are assigned randomly, often using a normal distribution. The system then repeatedly adjusts these values over many iterations to improve prediction accuracy. Because of the vast number of calculations involved, training neural networks consumes substantial amounts of electricity.
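The following toy example sketches what that iterative process looks like in code: parameters are drawn from a normal distribution and then adjusted over many gradient steps. The one-weight model and learning rate are assumptions chosen for illustration, not details from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 3x + 1 with a single-weight model.
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0

# Parameters start out random, drawn from a normal distribution.
w = rng.normal()
b = rng.normal()

learning_rate = 0.1
for step in range(1000):                  # many iterations...
    pred = w * x + b
    error = pred - y
    # ...each one nudging the parameters to reduce the prediction error.
    w -= learning_rate * np.mean(error * x)
    b -= learning_rate * np.mean(error)

print(w, b)  # approaches 3 and 1
```

Every one of those passes over the data costs computation, which is why training at the scale of modern networks consumes so much electricity.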

Smarter Training with Probability-Based Parameters

Felix Dietrich, a professor of Physics-enhanced Machine Learning, and his team have developed a new method. Instead of determining the parameters between the nodes iteratively, their approach samples them using probabilities. The probabilistic method deliberately targets critical locations in the training data: points where the values change sharply and rapidly.
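The article does not spell out the algorithm, so the sketch below is only a hypothetical illustration of the general idea: instead of optimizing hidden-layer weights iteratively, it samples them from pairs of training points where the target changes fastest, and then fits only the final linear layer with a single least-squares solve. The function name, the weight formulas, and the pair-selection rule are assumptions for this sketch, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden_weights(x, y, n_neurons):
    """Hypothetical sketch: turn pairs of training points where the target
    changes fastest into hidden-neuron weights, instead of tuning the
    weights by iterative optimization."""
    n = len(x)
    i, j = rng.integers(0, n, size=(2, 4 * n_neurons))
    keep = i != j
    i, j = i[keep], j[keep]
    # "Critical locations": pairs with a large change in y over a small change in x.
    steepness = np.abs(y[j] - y[i]) / (np.abs(x[j] - x[i]) + 1e-12)
    top = np.argsort(steepness)[-n_neurons:]
    i, j = i[top], j[top]
    w = 1.0 / (x[j] - x[i])    # weight spans the pair of points
    b = -w * x[i]              # bias places the activation between them
    return w, b

# Toy 1-D regression target with a sharp transition.
x = np.linspace(-1, 1, 200)
y = np.tanh(10 * x)

w, b = sample_hidden_weights(x, y, n_neurons=20)
hidden = np.tanh(np.outer(x, w) + b)     # hidden layer, weights fixed by sampling
# Only the final linear layer is fitted, in one least-squares solve.
coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
print("max abs error:", np.max(np.abs(hidden @ coef - y)))
```

Because no repeated passes over the data are needed to set the hidden weights, the computational cost drops dramatically compared with the iterative loop shown earlier.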

The objective of the current study is to use this approach to learn energy-conserving dynamical systems from data. Such systems evolve over time according to fixed rules and are found, for example, in climate models and in financial markets.

Energy Efficiency Without Compromising Accuracy

“Our method makes it possible to determine the required parameters with minimal computing power. This can make the training of neural networks much faster and, as a result, more energy efficient,” says Felix Dietrich. “In addition, we have seen that the accuracy of the new method is comparable to that of iteratively trained networks.”

Source: https://scitechdaily.com/