AI is Getting Smarter and More Affordable

At the Future Artificial Intelligence Pioneer Forum during the 2025 ZGC Forum Annual Meeting, Kai-Fu Lee, Chairman of Sinovation Ventures and CEO of Zero One Everything, shared his perspective on the rapid evolution of AI. He described AI 2.0 as the most significant technological and platform revolution in history, with large AI models moving beyond research labs and transforming industries worldwide.

Kai-Fu Lee

AI-First Applications Are About to Take Off 

Kai-Fu Lee said that in the two years since ChatGPT launched, the intelligence of large models has kept improving and does not yet appear to have hit a ceiling. At the same time, the inference cost of large models is falling rapidly, roughly tenfold each year, which creates a crucial precondition for the explosion of AI-First applications.
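A tenfold annual price drop compounds quickly. As a purely illustrative calculation (the function and the dollar figures are our assumptions, not numbers from the talk):

```python
def projected_cost(cost_today: float, years: int, annual_factor: float = 0.1) -> float:
    """Compound an annual price decline: each year, inference cost
    falls to annual_factor (here one-tenth) of the previous year's."""
    return cost_today * annual_factor ** years

# At that rate, $100 of inference today would cost about $1 two years out.
```

Under this assumption, a workload that is uneconomical today can become trivially cheap within a product's development cycle, which is the condition Lee argues AI-First applications have been waiting for.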

“The models that did not perform well enough two years ago are now good enough. The models that were too expensive to run two years ago are now very cheap.” In Kai-Fu Lee’s view, AI-First applications will therefore soon explode: 2025 will be the first year of that explosion, the year large models become the “king of deployment.”


A few months ago, Ilya Sutskever, OpenAI co-founder and former chief scientist, publicly stated that the Scaling Law of the pre-training stage has slowed down. The amount of data available for model training has hit a bottleneck, and computing power faces objective constraints as well: as the number of GPUs grows, fault-tolerance issues and other factors reduce the marginal returns.

Therefore, even when a model with an enormous parameter count is trained, such as OpenAI’s GPT-4.5, performance does improve, but GPT-4.5’s price is 500 times that of DeepSeek-V3. Models with huge parameter counts are very expensive, and their cost-effectiveness is not outstanding.

Fortunately, there is a new dawn in the industry: the Scaling Law is shifting from the pre-training stage to the inference stage, the so-called slow-thinking mode.

Kai-Fu Lee said that the earlier pre-training Scaling Law meant that with more GPUs and more data, the model becomes smarter, but that growth trend has slowed. The new slow-thinking Scaling Law means that the longer the model thinks, the better its results.

At present, under the slow-thinking scaling law, model performance is growing very fast, and there still appears to be plenty of room for growth.
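One concrete form of “spending more compute at inference time” is sampling several reasoning chains and taking a majority vote over their answers, often called self-consistency. The toy model below is our own illustration, not anything from the talk: each sampled chain is assumed right with some fixed probability, and voting over more chains lifts accuracy.

```python
import random
from collections import Counter

def sample_answer(correct: int, p_correct: float, rng: random.Random) -> int:
    """Toy stand-in for one sampled reasoning chain: returns the right
    answer with probability p_correct, otherwise one of a few wrong ones."""
    if rng.random() < p_correct:
        return correct
    return correct + rng.randint(1, 5)  # a wrong answer

def majority_vote(correct: int, n_samples: int, p_correct: float,
                  rng: random.Random) -> int:
    """Spend more test-time compute: sample n chains, return the modal answer."""
    votes = Counter(sample_answer(correct, p_correct, rng)
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples: int, trials: int = 2000,
             p_correct: float = 0.6, seed: int = 0) -> float:
    """Empirical accuracy of n-sample voting over many trials."""
    rng = random.Random(seed)
    return sum(majority_vote(42, n_samples, p_correct, rng) == 42
               for _ in range(trials)) / trials
```

With a sampler that is right only 60% of the time, voting over 15 chains pushes accuracy well above a single sample; in real models the gain depends on how correlated the errors across chains are.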

Combined with these new technological innovations, the model training process has become very interesting. First, train a “liberal arts student” to read all the books, and then train it in the direction of science, so that the model can prove math problems and write code. The final “liberal arts and science” model will be very powerful.

AI is Now Teaching AI

In addition, Kai-Fu Lee pointed out that the industry is now entering a very interesting era of “AI teaching AI.” He noted that it took only three months from the release of OpenAI o1 to the release of o3. DeepSeek-R1 was likewise officially released two months after OpenAI o1, and R2 may follow soon.

“Whether from o1 to o3 or from R1 to R2, the model iteration cycle has shortened to about three months. One important reason is that we no longer rely solely on humans to invent new algorithms and model architectures. Instead, AI can now reflect through slow thinking, and can self-iterate and self-improve,” said Kai-Fu Lee.

This means AI has entered a self-evolution paradigm. Models with stronger performance can now teach models with weaker foundations, and super-large-parameter models can train smaller ones.

Kai-Fu Lee likened this pairing to a “teacher” and a “student.” “The value of the super-large pre-trained model will be further reflected in the role of the ‘teacher model.’ Through distillation, data annotation, and synthetic data, model performance will improve even faster in the future.”
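The teacher-student pairing Lee describes is, in standard terms, knowledge distillation: the student is trained to match the teacher's softened output distribution. Below is a minimal sketch of the core loss; the temperature value and example logits are illustrative assumptions, not details from the talk.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: zero when they match, positive otherwise."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Minimizing this loss over the teacher's outputs (often mixed with a hard-label term) is how a smaller student can inherit much of a far larger teacher's capability.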

Kai-Fu Lee also shared some of his observations on DeepSeek, noting that China has ushered in its own “DeepSeek Moment,” which has greatly accelerated the full-scale deployment of large models in China.

About nine months ago, Kai-Fu Lee lamented that China had not yet had its “ChatGPT moment.” Although there had been well-performing models, China still lacked a singular model that could support a flourishing of ToB and ToC applications, one that would make every company CEO ask the IT department, “When can we get the large model into our enterprise?”

Now, after the market education that the “DeepSeek Moment” delivered to enterprises and users, the Chinese market has truly awakened, clearing a major obstacle to the explosion of AI-First applications in China.

Kai-Fu Lee believes one of the biggest bottlenecks to building large-model applications in the past was the need to educate the market. A startup that has to educate the market has little chance of success: it takes too long, and the prospects are too uncertain.

Today, DeepSeek has completed that market education for China’s ToB and ToC markets, providing strong support for the explosion of AI-First applications.

The Future of AI

Looking ahead, AI is poised to become more integrated into everyday life, from enterprise applications to consumer tools. As AI continues to evolve through self-learning, reasoning-based improvements, and global adoption, 2025 is shaping up to be the year AI-First applications truly take off.
