Riding the Wave of AI Applications! TrendForce Forum Showcases New Solutions for AI Servers and Computing Power

Market research firm TrendForce held its "CompuForum 2025: Smart Link Drives, Unleashing AI Computing Power" forum on Friday (the 20th), inviting speakers from ASUS, Digital Infinity, Phison Electronics, Supermicro and TrendForce itself to discuss the latest AI server architecture trends and innovative applications, and to analyze market shifts, supply chain technologies, demand, and the newest commercial systems.

Beyond the forum sessions, data storage manufacturer Seagate, Keda Technology (an officially authorized marketing partner of Keysight Technologies), and smart recording device brand PLAUD.AI also exhibited, sharing their latest products and solutions.

ASUS Builds a Complete Solution for Enterprise AI Factories

ASUS is not only a hardware supplier but also a software provider: it combines NVIDIA's advanced technology with its own hardware and software products to deliver a complete solution.

When an enterprise adopts AI, it must address four things: data sets, hardware architecture, back-end integration with AI tools, and professionals with AI expertise. None of these can be missing when developing AI applications.

To this end, ASUS has mapped out the right direction to help enterprises adopt AI successfully. Zhou Yu-che, product technology manager for ASUS AI/HPC, said that if a customer lacks data, NVIDIA Omniverse, built on digital-twin technology, can generate data sets in a virtual environment. Enterprises need to understand their own needs and build on the ASUS AI Factory architecture, and can also adopt NVIDIA AI Enterprise as the upper software layer to bring online services to users.

NVIDIA proposed the concept of the AI factory this year, hoping that every enterprise that wants to adopt AI can build one. This involves not only deploying hardware such as GPUs, but also feeding data into AI tools to create application services on top-end systems such as the ASUS AI POD, and delivering those services to end users.

From GPU clusters, storage clusters and network architecture to the management platform and application software, ASUS deeply integrates software and hardware, allowing enterprise customers to manage and monitor everything through a single interface.

How are server clusters deployed and monitored? This year ASUS launched two software suites: ASUS Infrastructure Deploy Center (AIDC) and ASUS Control Center (ACC). The former handles mass deployment from a single device, while the latter monitors hardware status.

Computing Power Scheduling Becomes Key to AI Infrastructure! Digital Infinity's AI Stack Helps Raise GPU Utilization to 90%

With demand for AI and computing power rising, the world has entered an era of large-scale AI infrastructure. Enterprise demand for high-performance computing resources has surged, and GPU procurement costs have grown tenfold. Whether real applications can fully realize those investments has become the core challenge for enterprises; how to schedule GPU resources effectively and improve utilization is now a key topic in AI infrastructure.

Digital Infinity executive chairman Chen Wenyu pointed out that AI infrastructure must consider not only compute and storage configuration; effectively utilizing and maximizing AI computing power has become the important direction. Without an accurate grasp of usage status, problems follow: confused priorities, difficult control, insufficient visibility, low utilization and high cost, and in the end customers must invest in even more GPUs.

To solve these challenges, Digital Infinity launched the AI Stack platform. Its GPU partitioning and aggregation technology can split one GPU into multiple smaller units, or pool multiple GPUs for training large AI models, implementing dynamic resource allocation and isolation management to effectively improve GPU utilization, reduce idle resources and raise development efficiency.

AI Stack also supports a distributed training architecture that assigns tasks to multiple nodes in parallel, significantly shortening model training time. In addition, the platform's real-time scheduling and load-adjustment capabilities allow resources to be flexibly reallocated according to each department's needs, so that available resources are used immediately and utilization is maximized.

Chen Wenyu emphasized that with AI Stack, enterprises can not only raise GPU utilization to 90% and improve workload execution efficiency tenfold, but also shorten development environment setup from two weeks to just one minute, comprehensively strengthening the efficiency of AI adoption and decision-making while reducing investment costs, making it a key tool for enterprises introducing AI services.

Phison Promotes aiDAPTIV+, Supermicro Pushes DCBBS to Capture the Market

Because AI server hardware is expensive, with costs often running into the millions, small and medium-sized enterprises and educational institutions are priced out; entrusting data to the cloud also raises security concerns. In response, Phison launched its "aiDAPTIV+" solution, focused on training and inference of local AI models, as a low-cost, controllable and efficient alternative.

Beyond enterprise applications, Phison is also focusing on AI education. Phison executive director Pan Jiancheng pointed out that college students today have little chance to work with GPUs because they are too expensive; to popularize AI education, theory alone is not enough, and students need opportunities for hands-on practice. Phison added that its AI Training PC (AI TPC) has been planned and deployed to major universities so that AI teaching can be truly put into practice.

Supermicro solution architect Xu Yande introduced the company's Data Center Building Block Solutions (DCBBS), which Supermicro positions as the best choice for AI data centers, meeting modern enterprises' needs for high-performance computing.

The solution makes building liquid-cooled AI data centers simpler, reducing construction and operating costs and letting customers bring data center infrastructure online at top speed, with deployment in as little as three months. It covers all key facility components, including servers, storage, networking, racks, liquid cooling systems, software, services and support, simplifying and accelerating AI data center construction while cutting costs and improving quality.

TrendForce: AI Servers to Grow at a 24% CAGR, Reaching 20% of the Server Market by 2028

A TrendForce research manager said the global AI server market is expected to keep growing strongly in 2025, with annual shipments of 2.1 million units, up 24.5% year on year. However, geopolitical risks and shifts in U.S.-China policy remain the main uncertainties for supply chains.

On system cooling, as platforms such as the GB200 NVL72 and GB300 NVL72 continue to scale up deployment in 2025, the penetration of liquid cooling technology will rise significantly, expected to grow from 14% in 2024 to 31% in 2025.

Hyperscale CSPs remain the main growth driver in this year's AI server market, with the top four North American CSPs' average capital expenditure growing 38%. To reduce dependence on NVIDIA and AMD, CSPs have accelerated development of in-house AI ASICs, which has also become one of the key reasons NVIDIA has actively promoted NVLink Fusion.

With high-performance computing (HPC) and AI applications becoming the two main axes of the server market, AI server shipment growth will continue to outpace that of general-purpose servers. TrendForce expects the global AI server market to post a compound annual growth rate (CAGR) of 24% between 2023 and 2028, with AI servers accounting for 20% of total server shipments by 2028.
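As a quick illustration of what the quoted figures imply (the author's own back-of-envelope arithmetic, not a number from TrendForce's report): compounding the forecast 2025 shipment base at the stated 24% CAGR for three more years puts 2028 shipments around 4 million units.

```python
# Illustrative arithmetic only. The 2025 base (2.1M units) and the 24% CAGR
# are quoted above; extrapolating 2025 -> 2028 this way is an assumption.
base_2025 = 2_100_000                     # forecast AI server shipments, 2025
cagr = 0.24                               # stated compound annual growth rate
proj_2028 = base_2025 * (1 + cagr) ** 3   # three years of compounding
print(f"Implied 2028 shipments: {proj_2028:,.0f}")
```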
