
How is generative AI impacting your infrastructure?


Wednesday 31 January 2024 11:58 GMT
AI harbors the capacity to transform not just our professional lives but the very fabric of our existence (Getty Images/iStockphoto)

Siemon is a Business Reporter client.

The latest developments in AI/ML, and what your physical IT network needs in order to cope with these new demands

In the rapidly evolving realm of AI, new developments surface almost daily. The technology clearly has the capacity to transform our lives, spanning applications from chatbots and facial recognition to self-driving vehicles and early disease detection.

The global AI market was valued at $142.3 billion in 2023, with finance, healthcare, and the high-tech/telco markets taking the lead in AI adoption.

AI is already being used to monitor data center assets, proactively detect faults and improve energy efficiency by driving better power usage effectiveness (PUE). And it's not just being used by hyperscalers, but also by many large enterprise companies.
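
To make the PUE point concrete, here is a minimal sketch of the kind of monitoring calculation an AI-driven data center tool automates. The sensor readings and alert threshold are illustrative assumptions, not any specific product's API:

```python
# Minimal sketch of PUE monitoring. Readings and thresholds are
# illustrative assumptions, not a specific vendor's interface.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical hourly meter readings.
readings = [
    {"hour": 0, "facility_kw": 1450.0, "it_kw": 1000.0},  # PUE 1.45
    {"hour": 1, "facility_kw": 1620.0, "it_kw": 1000.0},  # PUE 1.62
]

PUE_ALERT_THRESHOLD = 1.5  # illustrative target; real targets vary by facility

for r in readings:
    value = pue(r["facility_kw"], r["it_kw"])
    flag = "ALERT" if value > PUE_ALERT_THRESHOLD else "ok"
    print(f"hour {r['hour']}: PUE = {value:.2f} [{flag}]")
```

An AI-based system would go further, correlating PUE drift with cooling setpoints and workload placement, but the underlying ratio it optimizes is this one.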

InfiniBand versus Ethernet

Ethernet remains the prevailing global standard in most data centers. But a growing number of today's AI networks use InfiniBand, even though it currently holds only a small fraction of the overall market, primarily in HPC networks.

Competition is emerging between InfiniBand market leaders and prominent Ethernet switch and chip manufacturers, whose next-generation chips have been designed to construct AI clusters using Ethernet instead of InfiniBand. Regardless of the protocol chosen, both InfiniBand and Ethernet share requirements for high bandwidth and low latency, necessitating top-tier optical cabling solutions for optimal performance to support large language model (LLM) training and inferencing.

Exponential demands for power and bandwidth

Data centers face two key challenges: the extreme power and associated cooling requirements of the equipment, and the exorbitant bandwidth demands of the GPUs.

Supercomputers running AI applications demand vast power and multiple high-bandwidth connections. GPU systems draw from 6.5kW to over 11kW per 6U chassis; contrast that with a fully loaded data center cabinet, which averages 7-8kW and tops out at 15-20kW, and the extent of AI's power appetite becomes clear. Many leading server OEMs now offer servers built around these GPUs.
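
A quick back-of-the-envelope calculation using the figures above shows why this matters: in an AI cabinet, power, not rack space, becomes the binding constraint. The rack size and chassis height below are common values used for illustration:

```python
# Cabinet power arithmetic using the figures quoted above.

GPU_CHASSIS_KW_LOW, GPU_CHASSIS_KW_HIGH = 6.5, 11.0  # per 6U GPU chassis
CABINET_AVG_KW, CABINET_MAX_KW = 7.5, 20.0           # typical enterprise cabinet

RACK_UNITS, CHASSIS_U = 42, 6  # a standard 42U rack (assumption)

physical_fit = RACK_UNITS // CHASSIS_U                 # 7 chassis by space
power_fit = int(CABINET_MAX_KW // GPU_CHASSIS_KW_HIGH)  # 1 chassis by power

print(f"GPU chassis draw: {GPU_CHASSIS_KW_LOW}-{GPU_CHASSIS_KW_HIGH} kW per 6U")
print(f"Cabinet budget: ~{CABINET_AVG_KW} kW average, {CABINET_MAX_KW} kW max")
print(f"Chassis that fit by space: {physical_fit}")
print(f"Chassis that fit by power: {power_fit}")
```

Seven chassis fit physically, but even a high-end 20kW cabinet can power only one or two, which is exactly why operators end up spreading GPU systems across more cabinets.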

These GPUs typically need connections running at 100Gb/s (EDR), 200Gb/s (HDR) or 400Gb/s (NDR). Each node commonly has eight such connections, equating to as much as 8x400G, or 3.2 terabits per second, per node.
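
The per-node arithmetic is straightforward and worth making explicit, since it drives everything that follows about cabling:

```python
# Per-node bandwidth from the figures above: eight links per GPU node
# at each InfiniBand generation's link speed.

LINKS_PER_NODE = 8
speeds_gbps = {"EDR": 100, "HDR": 200, "NDR": 400}

for gen, gbps in speeds_gbps.items():
    total_tbps = LINKS_PER_NODE * gbps / 1000
    print(f"{gen}: {LINKS_PER_NODE} x {gbps} Gb/s = {total_tbps:.1f} Tb/s per node")
# NDR: 8 x 400 Gb/s = 3.2 Tb/s per node
```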

How will IT infrastructure cope with these requirements?

Data center power and cooling demands are pushing network managers to reconsider their infrastructure. This often involves altering network blueprints and spacing out GPU cabinets further, potentially adopting end-of-row (EoR) configurations to better handle escalating temperatures.

However, this means an increased physical gap between switches and GPUs. To accommodate this, data center operators may need to deploy more fiber cabling of the kind used for switch-to-switch connections. Given these extended spans, direct attach cables (DACs) are unlikely to be suitable, as they are confined to five meters at most at these speeds.

Active optical cables (AOCs) are a feasible choice thanks to their capacity to cover greater distances than DACs. AOCs also offer significantly lower power consumption than transceivers, as well as lower latency. Siemon provides active optical cables in length increments of 0.5m, simplifying cable management.
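
Taken together, the last two paragraphs amount to a reach-based selection rule. The sketch below captures that logic; the DAC limit comes from the text, while the AOC cutoff is an illustrative assumption and real reach depends on speed and vendor datasheets:

```python
# Simplified media selection by link length. The 5 m DAC limit is from
# the text above; the 30 m AOC cutoff is an illustrative assumption.

def select_media(link_length_m: float) -> str:
    if link_length_m <= 5:
        return "DAC (passive copper)"
    if link_length_m <= 30:
        return "AOC (lower power and latency than transceiver links)"
    return "Transceivers + structured fiber cabling"

for length in (3, 12, 80):
    print(f"{length} m -> {select_media(length)}")
```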

Transitioning data center backbone interconnections between switches will necessitate parallel optic technology to sustain increasing bandwidth demands. Several existing choices for parallel fiber optic technology employ eight fibers in conjunction with multi-fiber push-on connectivity (MPO/MTP fiber connectors). These MPO Base-8 solutions permit the adoption of either singlemode or multimode fiber and facilitate smooth migration to higher speeds. For enterprise data centers, a Base-8 MPO OM4 cabling solution is advisable when upgrading to 100Gb/s and 400Gb/s; cloud data centers should select a Base-8 MPO singlemode solution when transitioning to 400Gb/s and 800Gb/s speeds.
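
Base-8 sizing translates directly into trunk fiber counts. Here is a rough estimate under stated assumptions; the port count and trunk size are hypothetical, though 8 fibers per parallel-optic port follows from the Base-8 approach described above:

```python
# Rough Base-8 fiber-count estimate. Port count and trunk size are
# assumptions for illustration; 8 fibers per port is the Base-8 premise.

import math

FIBERS_PER_PORT = 8   # Base-8 parallel optics (4 transmit + 4 receive fibers)
TRUNK_FIBERS = 144    # a common MPO trunk size (assumption)
ports_needed = 32     # e.g. one leaf switch's uplinks (assumption)

fibers_needed = ports_needed * FIBERS_PER_PORT
trunks_needed = math.ceil(fibers_needed / TRUNK_FIBERS)

print(f"{ports_needed} ports x {FIBERS_PER_PORT} fibers = {fibers_needed} fibers")
print(f"Trunks of {TRUNK_FIBERS} fibers required: {trunks_needed}")
```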

Innovative new fiber enclosure systems on the market can flexibly support different fiber modules, including Base-8 and Base-12 with shuttered LC, MTP pass-thru modules, and splicing modules. They allow for easy access and improved cable management.

In the realm of AI applications, where latency holds immense significance, Siemon suggests opting for "AI-Ready" solutions employing ultra-low loss (ULL) performance alongside MTP/APC connectors. Ultra-low-loss fiber connectivity becomes pivotal for emerging short-reach singlemode applications (supporting 100, 200, and 400 Gb/s speeds over distances exceeding 100 meters). ULL connectivity meets the more stringent insertion loss requirements set by AI applications, improving overall network performance.
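
The value of ULL connectivity shows up in the channel insertion-loss budget: every connector pair consumes part of a fixed allowance. The sketch below compares a ULL and a standard-loss channel; the per-pair losses and the budget figure are illustrative assumptions, so consult the relevant application standards for real limits:

```python
# Minimal insertion-loss budget check for a short-reach singlemode link.
# Per-pair losses and the 3.0 dB budget are illustrative assumptions.

FIBER_LOSS_DB_PER_KM = 0.4    # typical OS2 singlemode attenuation
ULL_PAIR_LOSS_DB = 0.25       # ultra-low-loss MTP/APC mated pair (assumption)
STD_PAIR_LOSS_DB = 0.75       # standard-loss mated pair (assumption)
APP_BUDGET_DB = 3.0           # example channel insertion-loss budget

def channel_loss(length_m: float, pairs: int, loss_per_pair: float) -> float:
    return (length_m / 1000) * FIBER_LOSS_DB_PER_KM + pairs * loss_per_pair

for label, per_pair in (("ULL", ULL_PAIR_LOSS_DB), ("standard", STD_PAIR_LOSS_DB)):
    loss = channel_loss(length_m=150, pairs=4, loss_per_pair=per_pair)
    verdict = "within" if loss <= APP_BUDGET_DB else "exceeds"
    print(f"{label}: {loss:.2f} dB ({verdict} the {APP_BUDGET_DB} dB budget)")
```

Under these assumptions, a four-connector-pair channel fits comfortably within budget with ULL components but blows through it with standard-loss ones, which is why tighter AI loss budgets push operators toward ULL.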

Additionally, Siemon advises the adoption of APC (angled physical contact) fiber connectors, including the MTP/APC variant, for specific multimode cabling applications, alongside the traditional singlemode approach. The angle-polished end-face of APC connectors (in contrast to UPC connectors) reduces reflectance, thus improving fiber performance.

AI stands as a disruptive technology, yet it harbors the capacity to transform not just our professional lives but the very fabric of our existence—and data center operators need to prepare for it. Adopting measures to facilitate a seamless shift to elevated data speeds, and enhancing the energy efficiency of data centers, should be a particular focus. Those data center operators who adeptly brace for AI’s demands will find themselves well-placed to leverage the forthcoming prospects accompanying its evolutionary journey and its widespread integration.


For more information, visit siemon.com/ai.

