KAYTUS Brings the Latest V2 Series Server Solutions for Emerging LLM/GAI Applications to AI EXPO KOREA 2024

Empowering Enterprises with Full-stack AI Solutions and Leading Liquid Cooling Technology to Build an Infrastructure with Higher Computing Efficiency and Energy Efficiency

SEOUL, South Korea–(BUSINESS WIRE)–KAYTUS, a leading IT infrastructure provider, will participate in AI EXPO KOREA 2024 as a Gold Sponsor, showcasing its latest V2 series servers, developed for emerging application scenarios such as cloud, AI, and big data, with performance per watt improved by more than 35%. KAYTUS also offers full-stack AI solutions for popular LLM and GAI application scenarios, creating a self-adaptive, intelligent infrastructure for users.


AI EXPO KOREA, themed “Quantum Jump,” brings together over 35,000 industry experts from approximately 300 organizations to demonstrate, discuss, and promote the development and application of cutting-edge technologies. Deawon CTS and Etevers eBT will join KAYTUS to showcase agile, cutting-edge product solutions.

Emerging Large Model and GAI Applications Create the Need for a New Infrastructure

With generative AI (GAI) booming, large models such as GPT, LLaMA, Falcon, and ChatGLM are boosting social productivity and driving the transformation and upgrading of traditional industries. They not only increase the demand for computing power but also expose the problem of low computing efficiency: for instance, the GPT-3 large model achieved a computing efficiency of only 21.3% when training on its GPU clusters, with an energy consumption of up to 284,000 kWh. As the power consumption of individual AI chips rises to as much as 1,000 W and computing density keeps increasing, improving computing efficiency and lowering energy consumption have become challenges that data centers must address.
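For context, training-side computing efficiency is commonly expressed as the ratio of the FLOPs a model actually needs to the theoretical peak FLOPs the GPU cluster could deliver over the training window (often called model FLOPs utilization). The minimal sketch below illustrates that ratio with assumed cluster numbers, not figures from this release, so its result will not match the 21.3% cited above.

```python
# Illustrative only: computing efficiency (model FLOPs utilization)
# as useful model FLOPs divided by the cluster's theoretical peak
# over the training window. All inputs below are assumptions.

model_flops = 3.14e23              # total FLOPs needed to train the model (assumed)
num_gpus = 1024                    # GPUs in the training cluster (assumed)
peak_flops_per_gpu = 312e12        # peak FLOP/s per GPU, e.g. FP16 tensor cores (assumed)
training_seconds = 40 * 24 * 3600  # 40 days of wall-clock training (assumed)

cluster_peak = num_gpus * peak_flops_per_gpu * training_seconds
efficiency = model_flops / cluster_peak
print(f"Computing efficiency (MFU): {efficiency:.1%}")
```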

KAYTUS believes that higher computing efficiency comes from improvements in both measured performance and resource utilization, with servers’ software and hardware optimized collaboratively through application-oriented systematic design. The latest KAYTUS V2 series servers embrace diverse and heterogeneous computing, co-optimizing software and hardware for real application scenarios to deliver higher computing and energy efficiency, with performance per watt improved by more than 35%.

Full-Stack AI Solutions for Emerging AI Application Scenarios

KAYTUS provides full-stack AI solutions, covering cluster environment building, computing-power scheduling, and large-model application development, to meet the surging demand for computing power in LLM and GAI application scenarios and help users build large-model infrastructure.

MotusAI, the KAYTUS computing-power scheduling platform, enables one-stop delivery for AI model development and deployment. By systematically optimizing resource usage and scheduling, training-process assurance, and algorithm and application management for large-model training, it provides fault tolerance for training tasks, ensuring prolonged and continuous training. KAYTUS offers a diverse range of AI servers with industry-leading performance, raising computing efficiency to 54% when training large models with thousands of billions of parameters and cutting training time by one week compared with the industry average, while improving AI inference performance by 30% to maximize the utilization of computing power.
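The release does not describe MotusAI’s internal fault-tolerance mechanism. As a rough illustration of the general technique, the sketch below shows periodic checkpointing and resume in a generic training loop, so a long-running job loses at most a few steps after a node failure. All names (train_step, save_checkpoint, the checkpoint path) are hypothetical and not part of MotusAI.

```python
# Hypothetical sketch of checkpoint-based fault tolerance for a long
# training job. This is NOT MotusAI code; it only illustrates the
# general idea of resuming from the last saved state after a failure.
import os
import pickle

CKPT_PATH = "checkpoint.pkl"   # assumed checkpoint location
CKPT_EVERY = 100               # save every 100 steps (assumed)
TOTAL_STEPS = 1000             # assumed job length

def train_step(state):
    # Placeholder for one optimization step; a real job would update
    # model weights and optimizer state here.
    state["loss"] = 1.0 / (state["step"] + 1)
    return state

def save_checkpoint(state):
    with open(CKPT_PATH, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "loss": None}

state = load_checkpoint()          # resume from the last checkpoint if present
while state["step"] < TOTAL_STEPS:
    state = train_step(state)
    state["step"] += 1
    if state["step"] % CKPT_EVERY == 0:
        save_checkpoint(state)     # a crash loses at most CKPT_EVERY steps
```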

  • KR6288V2 packs 8 GPUs into a 6U chassis, allowing users to achieve excellent performance and maximum energy efficiency. It is suitable for a variety of workloads in large data centers, including large-model training, NLP, recommendation, AIGC, and AI4Science.
  • KR4268V2 is one of the most flexible AI servers in the industry, supporting 100+ configurations. It houses 10 DS PCIe GPUs in a 4U chassis and is suited to complex application scenarios such as deep learning, the metaverse, AIGC, and AI+Science.

A Complete Family of Liquid Cooling Servers for Continuous Improvement in System Energy Efficiency

The complete family of KAYTUS V2 servers, including general-purpose, high-density, AI, and rack servers, supports cold-plate liquid cooling. It features 1,000 W single-chip cooling, liquid cooling of all key components, high-density deployment, and a PUE approaching 1.0.

In addition, the KAYTUS All Liquid Cooling Cabinet combines cold plates with a liquid-cooled rear door, fully leveraging natural cooling to enable genuinely air-conditioning-free operation and achieve a PUE as low as 1.05. It supports a power density of 100 kW per cabinet, more than 10 times that of traditional data centers, while increasing space utilization by five to ten times.
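For reference, PUE (power usage effectiveness) is the ratio of total facility energy to the energy delivered to IT equipment, so a PUE of 1.05 means roughly 5% overhead for cooling and power distribution. A minimal illustrative calculation follows; the facility figures are assumptions, not KAYTUS measurements.

```python
# Illustrative PUE calculation with assumed facility figures.
it_power_kw = 100.0   # power drawn by IT equipment in one cabinet (assumed)
overhead_kw = 5.0     # cooling + power-distribution losses (assumed)

pue = (it_power_kw + overhead_kw) / it_power_kw
print(f"PUE = {pue:.2f}")  # 1.05 -> only ~5% of energy goes to non-IT overhead
```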

Date: May 1st – May 3rd
Location: Hall D COEX, Seoul

Booth: #11

Date: May 2nd
Location: Hall D COEX, Seoul

Seminar: Generative AI and ChatGPT, the Game Changer of This Era

Spokesperson: EJ YOO, GM of KAYTUS Korea

About KAYTUS

KAYTUS is a leading provider of IT infrastructure products and solutions, offering a range of cutting-edge, open, and environmentally friendly infrastructure products for cloud, AI, edge, and other emerging scenarios. With a customer-centric approach, KAYTUS flexibly responds to user needs through its agile business model. Learn more at KAYTUS.com.

Contacts

Media

media@kaytus.com


