With the rapid growth of data and traffic, data centers have entered the 100G era in recent years. To deliver cloud computing services such as artificial intelligence, virtual reality, and 4K video, many large-scale 100G data centers, including cloud data centers, are being built around the world. As a new and efficient type of infrastructure, 100G cloud data centers place higher demands on internal connectivity and interconnection. This article introduces effective solutions for the internal connectivity and interconnection of 100G cloud data centers.
A cloud data center is a new type of data center built on cloud computing technology, in which computing, storage, and network resources are loosely coupled and flexibly provisioned. It is also characterized by full virtualization of IT equipment, high modularity, automation, and green, energy-efficient operation.
Typically, cloud data centers are owned and managed by cloud service providers, which offer many different organizations high-performance computing, networking, and storage resources and services accessible over the network. As a result, cloud data centers tend to be larger and offer significant advantages in resource allocation, operational efficiency, flexibility, and scalability.
With the development of 100G Ethernet technology, many cloud data centers have upgraded to 100G to improve data transfer speed and bandwidth density. To better handle data exchange and throughput, 100G cloud data centers adopt a flat spine-and-leaf network architecture with several tiers of connectivity, including the Spine Core, the Edge Core, and the ToR. Connections between these tiers differ in transmission rate and distance.
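As a rough illustration of how these tiers fit together, the minimal sketch below models a generic leaf-spine fabric and computes its oversubscription ratio. The port counts and speeds are hypothetical examples, not a reference design.

```python
# Minimal sketch of a generic leaf-spine (spine/edge/ToR) fabric.
# All port counts and speeds are assumed example values; they only
# show how oversubscription at a ToR (leaf) switch is calculated.

def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing bandwidth to fabric-facing bandwidth on a switch."""
    down = downlink_ports * downlink_gbps
    up = uplink_ports * uplink_gbps
    return down / up

# Example ToR: 48 x 25G server ports, 8 x 100G uplinks toward the edge core.
ratio = oversubscription(downlink_ports=48, downlink_gbps=25,
                         uplink_ports=8, uplink_gbps=100)
print(f"ToR oversubscription: {ratio:.1f}:1")  # 1.5:1 in this example
```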
In order to improve data processing capabilities, 100G cloud data centers have put forward higher requirements for their infrastructure. These requirements include high speed, high density, low power consumption, and high availability.
High speed: Due to the growth of data center traffic, equipment of 100G and above is required to meet the needs of large-scale data transmission.
High density: Cloud data centers are often large and need to save rack space to reduce construction and expansion costs. As a result, high-density equipment, such as compact transceivers and switches with more ports per rack unit, is needed to reduce the number of switches required and increase transmission capacity.
Low power consumption: Lower power draw saves energy and eases heat dissipation, which helps keep equipment running reliably.
High availability: 100G cloud data centers require highly available equipment, such as pluggable modules that can be swapped out to support future data center upgrades.
To support the growth of cloud computing services, here are some effective solutions for the internal connectivity and interconnection of 100G cloud data centers.
The connection between the ToR switch and the server
In this case, the transmission distance is usually less than 5 m and the data rate is 10G or 40G, so 10G/40G DAC or AOC cables are the suitable solution. DAC performs better in terms of cost, power consumption, and heat dissipation, while AOC has the advantages of light weight, longer transmission distance, and easier installation and maintenance. For short-distance transmission in 100G cloud data centers, 10G/40G DAC is the more cost-effective choice.
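The sketch below is one hedged way to encode this choice: a hypothetical helper that picks DAC for very short, cost-sensitive runs and AOC when the link is longer or must be routed more flexibly. The reach threshold is an illustrative assumption, not a vendor specification.

```python
# Hypothetical helper for picking a short-reach cable type inside the rack.
# The 5 m threshold mirrors the typical passive-DAC range mentioned above;
# exact limits vary by vendor and should be checked per product.

def pick_cable(distance_m: float, prefer_low_cost: bool = True) -> str:
    if distance_m <= 5 and prefer_low_cost:
        return "10G/40G DAC"   # cheapest, lowest power, but heavier and short reach
    return "10G/40G AOC"       # lighter, longer reach, easier to route

print(pick_cable(3))    # 10G/40G DAC
print(pick_cable(15))   # 10G/40G AOC
```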
The connection between the Edge Core and the ToR
In addition to the 100G QSFP28 AOC, which supports transmission up to 100 m, this connection can also be achieved with a 100G QSFP28 SR4 optical transceiver and MTP/MPO fiber patch cords. Compared with the 100G QSFP28 SR4, the 100G AOC offers lower cost and power consumption. However, in addition to supporting longer transmission distances, the 100G QSFP28 SR4 provides digital diagnostics and better receiver sensitivity.
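As an aside on the digital diagnostics point, the snippet below shows the kind of check DDM enables: converting a module's reported receive power from mW to dBm and comparing it against a sensitivity threshold. The numeric values are assumed for illustration, and reading the raw registers from a real transceiver is outside the scope of this sketch.

```python
import math

# Illustrative DDM-style check: is the received optical power comfortably
# above the receiver sensitivity? All values below are assumed examples.

def mw_to_dbm(power_mw: float) -> float:
    """Convert optical power from milliwatts to dBm."""
    return 10 * math.log10(power_mw)

rx_power_mw = 0.35          # assumed reading reported by the module
sensitivity_dbm = -10.3     # assumed receiver sensitivity for this example

rx_power_dbm = mw_to_dbm(rx_power_mw)
margin_db = rx_power_dbm - sensitivity_dbm
print(f"Rx power: {rx_power_dbm:.1f} dBm, margin: {margin_db:.1f} dB")
```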
The connection between the Spine Core and the Edge Core
For the connection between the spine core switch and the edge core switch, there are multiple types of 100G QSFP28 optical modules to meet different transmission distance requirements. In addition, some QSFP28 transceivers are based on PAM4 modulation, which doubles the data rate at a given symbol rate, making them an efficient and cost-effective option for 100G cloud data centers.
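The arithmetic behind the PAM4 remark is simple: NRZ carries 1 bit per symbol while PAM4 carries 2, so the same symbol rate yields twice the bit rate. The sketch below works through two common 100G lane arrangements, ignoring FEC and line-coding overhead, which pushes real lane baud rates slightly higher.

```python
# How PAM4 doubles the per-lane data rate compared with NRZ at the same baud.
BITS_PER_SYMBOL = {"NRZ": 1, "PAM4": 2}

def lane_rate_gbps(baud_gbd: float, modulation: str) -> float:
    return baud_gbd * BITS_PER_SYMBOL[modulation]

# Two common ways to reach roughly 100G (overhead ignored):
nrz_total = 4 * lane_rate_gbps(25, "NRZ")    # 4 lanes x 25 GBd x 1 bit/symbol
pam4_total = 2 * lane_rate_gbps(25, "PAM4")  # 2 lanes x 25 GBd x 2 bits/symbol
print(nrz_total, pam4_total)  # both about 100 Gb/s
```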
The connection between the Core Router and Spine Cores
This connection is a data center interconnect (DCI) with a long transmission distance. Recommended 100G DCI solutions include 100G coherent solutions and 100G PAM4 DWDM solutions. Coherent optical modules are well suited to long-haul transmission (80 km to 1000 km), while a 100G PAM4 DWDM QSFP28 open line system is the more economical and convenient choice for a DCI network within 80 km.
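For the 80 km case, a quick power-budget check like the one below is often the first sanity test: fiber attenuation plus connector losses and a safety margin must stay within the module's optical budget. All of the numbers here are assumed, round figures for illustration; real designs should use the actual specifications of the chosen modules and fiber plant.

```python
# Rough, illustrative power-budget check for an 80 km 100G DWDM DCI span.
# Every number below is an assumed example value, not a product spec.

def required_budget_db(length_km, atten_db_per_km, connector_loss_db, margin_db):
    return length_km * atten_db_per_km + connector_loss_db + margin_db

needed = required_budget_db(length_km=80,
                            atten_db_per_km=0.22,  # assumed C-band fiber loss
                            connector_loss_db=1.0,
                            margin_db=3.0)
module_budget_db = 23.0  # assumed optical budget (Tx power minus Rx sensitivity)

print(f"Required: {needed:.1f} dB, available: {module_budget_db:.1f} dB")
print("Link closes" if needed <= module_budget_db else "Needs amplification")
```

In practice, the open line system mentioned above usually supplies the amplification and dispersion compensation that such an 80 km PAM4 span needs on top of the raw power budget.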
This article has presented solutions for the different connectivity scenarios in 100G cloud data centers, each with its own advantages and disadvantages, so cloud computing providers can choose according to their own needs. Hopefully, these 100G cloud data center solutions will help.