
What are the options for data center interconnection?
- Categories:Industry News
- Time of issue:2022-07-26
(Summary description) To better meet the needs of cloud data centers, many data center network solutions have emerged. For example, Huawei data center switches (CloudEngine series), the Huawei data center controller (iMaster NCE-Fabric), and the intelligent network analysis platform (iMaster NCE-FabricInsight) support the following two recommended data center interconnection solutions.
End-to-end VXLAN solution
Data center interconnection based on an end-to-end VXLAN tunnel means the following: the compute and network resources of multiple DCs form unified resource pools that are centrally managed by one cloud platform and one iMaster NCE-Fabric. The DCs together form a single end-to-end VXLAN domain, so a user's Virtual Private Cloud (VPC) and subnets can be deployed across DCs and services can communicate directly. The deployment architecture is shown in the following figure.
Schematic diagram of the end-to-end VXLAN solution
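To make the notion of a VXLAN tunnel concrete, the minimal Python sketch below hand-builds the VXLAN encapsulation defined in RFC 7348: an 8-byte VXLAN header (I flag plus a 24-bit VNI) prepended to the tenant's Ethernet frame, carried over UDP port 4789 between VTEPs. The VNI value and frame contents are illustrative assumptions, not part of the Huawei solution described here.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags word with the I bit set, then the 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08000000            # only the I (valid-VNI) flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)   # VNI occupies the upper 24 bits

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """VXLAN payload = VXLAN header + original (inner) Ethernet frame.
    The outer Ethernet/IP/UDP headers are added by the sending VTEP's IP stack."""
    return vxlan_header(vni) + inner_frame

# Example: wrap a dummy tenant frame into VNI 5010 (illustrative value).
inner = bytes(64)                 # placeholder for a tenant Ethernet frame
payload = encapsulate(inner, vni=5010)
print(len(payload), payload[:8].hex())   # 72 bytes; first 8 bytes are flags + VNI
```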
In this solution, an end-to-end VXLAN tunnel must be established between the data centers. As shown in the following figure, the underlay routes of the data centers must first be interconnected. Second, at the overlay layer, EVPN is deployed between the leaf devices of the two data centers. The leaf devices at both ends discover each other through EVPN and exchange VXLAN encapsulation information in EVPN routes, which triggers the establishment of the end-to-end VXLAN tunnel.
The end-to-end VXLAN tunnel diagram
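The trigger logic described above can be sketched as a simple control-plane model: each leaf advertises its VTEP address and VNI membership in an EVPN route, and a leaf that learns a new remote VTEP brings up a VXLAN tunnel toward it. The Python below is only a conceptual illustration under assumed names and addresses; it is not the BGP EVPN wire format or Huawei's implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvpnIMETRoute:
    """Simplified EVPN Type-3 (Inclusive Multicast) route: 'VTEP x serves VNI y'."""
    originator_vtep: str   # VTEP/loopback IP of the advertising leaf
    vni: int               # Layer-2 VNI the VTEP participates in

@dataclass
class Leaf:
    local_vtep: str
    tunnels: set[str] = field(default_factory=set)   # remote VTEPs with a tunnel up

    def on_evpn_route(self, route: EvpnIMETRoute) -> None:
        """Learning a new remote VTEP from an EVPN route triggers tunnel setup."""
        if route.originator_vtep != self.local_vtep and route.originator_vtep not in self.tunnels:
            self.tunnels.add(route.originator_vtep)
            print(f"{self.local_vtep}: VXLAN tunnel established to "
                  f"{route.originator_vtep} for VNI {route.vni}")

# Two leaves in different DCs learn each other through EVPN (illustrative addresses).
leaf_dc1 = Leaf("10.1.1.1")
leaf_dc2 = Leaf("10.2.2.2")
leaf_dc1.on_evpn_route(EvpnIMETRoute("10.2.2.2", vni=5010))
leaf_dc2.on_evpn_route(EvpnIMETRoute("10.1.1.1", vni=5010))
```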
This solution mainly targets the Multi-PoD scenario. A Point of Delivery (PoD) is a group of relatively independent physical resources; Multi-PoD means that one iMaster NCE-Fabric manages multiple PoDs, and the PoDs together form an end-to-end VXLAN domain. This scenario suits the interconnection of small data centers that are close to each other, typically within the same city.
Segment VXLAN solution
Data center interconnection based on segment VXLAN tunnels means the following: in a multi-DC scenario, the compute and network resources of each DC are independent resource pools, managed by that DC's own cloud platform and iMaster NCE-Fabric. Each DC is an independent VXLAN domain, and a separate DCI VXLAN domain is required for communication between DCs. Users' VPCs and subnets are deployed within their own data centers, so service communication between data centers must be orchestrated by an upper-layer cloud management platform. The following figure shows the deployment architecture.
Segment VXLAN scheme architecture diagram
In this solution, VXLAN tunnels must be established both within and between data centers. As shown in the following figure, the underlay routes of the data centers must first be interconnected. Second, at the overlay layer, EVPN is deployed between the leaf devices and the DCI gateway inside each data center, as well as between the DCI gateways of different data centers. The devices discover each other through EVPN and exchange VXLAN encapsulation information in EVPN routes, which triggers the establishment of the segment VXLAN tunnels.
Segment VXLAN tunnel diagram
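The key difference from the end-to-end solution is that a DCI gateway terminates the intra-DC tunnel and re-encapsulates traffic into a separate tunnel in the DCI VXLAN domain. The Python sketch below models only that VNI mapping (stitching) step; the table contents, VNI numbers, and gateway address are assumptions for illustration, not Huawei configuration, and in practice they would be provisioned by iMaster NCE-Fabric and the cloud platform.

```python
# Conceptual model of segment-VXLAN stitching at a DCI gateway:
# traffic arrives on an intra-DC VNI and leaves on a DCI VNI toward the peer gateway.

INTRA_DC_TO_DCI_VNI = {5010: 9010}          # local segment VNI -> DCI segment VNI (illustrative)
DCI_PEER_GATEWAY = {9010: "203.0.113.2"}    # DCI VNI -> remote DCI gateway VTEP (illustrative)

def stitch(inner_frame: bytes, intra_dc_vni: int) -> tuple[bytes, int, str]:
    """A frame decapsulated from the intra-DC tunnel is handed back with the DCI VNI
    and the remote DCI gateway it should be re-encapsulated toward."""
    dci_vni = INTRA_DC_TO_DCI_VNI[intra_dc_vni]
    remote_gw = DCI_PEER_GATEWAY[dci_vni]
    return inner_frame, dci_vni, remote_gw

frame, vni, next_hop = stitch(bytes(64), intra_dc_vni=5010)
print(f"forward on DCI VNI {vni} to gateway {next_hop}")
```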
This solution mainly targets the Multi-Site scenario and applies to interconnecting data centers located in different regions, or data centers that are too far apart to be managed by a single iMaster NCE-Fabric.