To meet the ever-growing bandwidth requirements of service provider and data center networks, 100 Gigabit Ethernet was officially standardized in July 2010 as IEEE 802.3ba. In response, Gigalight's engineers deliver industry-leading, standards-compliant 100G pluggable optical transceivers.
Modern data center traffic is growing heavier, and east-west flows between servers now dominate, which is why so many operators have adopted the two-tier Ethernet switching architecture called leaf-spine. It delivers the highest-density interconnects between data center switches and to the outside world. Let's see which transceivers belong where. At the bottom we have the leaf switches and the servers.
The 100G QSFP downlinks on the leaf switches break out into 4x25G connections, one for each server; copper breakout cables are the lowest-cost solution at this distance, typically less than 5m.

For the uplinks to the spine switches, we have some choices, but they are tied to the type of fiber cable infrastructure you choose or have already installed. If you have multimode fiber, you can use SR4 up to 100m; remember that SR4 requires parallel fiber with MMF MPO connectors. For single mode fiber, you can use PSM4 or CWDM4. PSM4 goes up to 500m and CWDM4 goes up to 2km; don't forget that PSM4 is a parallel fiber format and uses SMF MPO connectors. If you only need 30m of reach, don't worry about installing fiber: active optical cables will do the trick.

For the spine uplinks to other data centers and data center layers, use LR4 on duplex single mode fiber, assuming you need up to 10km of reach; CWDM4 works here too if the distance is within its 2km limit.
To summarize: short downlinks to 25G server ports can use copper breakout cables; multimode fiber links between leaf and spine can use SR4; and single mode fiber links between leaf and spine can use either PSM4 or CWDM4.
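The media-and-reach rules above amount to a simple decision procedure. Here is a minimal sketch of that logic as a Python helper; the function name, media labels, and return strings are illustrative assumptions, not a vendor tool, and the thresholds simply restate the reaches given in this article.

```python
# Illustrative sketch of the transceiver-selection rules described above.
# Media labels and thresholds follow the article; this is not vendor guidance.

def pick_100g_option(media: str, distance_m: int) -> str:
    """Suggest a 100G interconnect for the given media type and reach in meters."""
    if media == "copper":          # DAC breakouts for 25G server downlinks
        return "4x25G copper breakout" if distance_m <= 5 else "no copper option"
    if media == "aoc":             # active optical cable, no installed fiber needed
        return "100G AOC" if distance_m <= 30 else "no AOC option"
    if media == "mmf":             # parallel multimode fiber, MPO connectors
        return "SR4" if distance_m <= 100 else "no MMF option"
    if media == "smf":             # single mode fiber
        if distance_m <= 500:
            return "PSM4 or CWDM4"  # PSM4 is parallel SMF with MPO connectors
        if distance_m <= 2000:
            return "CWDM4"          # duplex SMF
        if distance_m <= 10000:
            return "LR4"            # duplex SMF, up to 10 km
        return "no SMF option"
    return "unknown media"

print(pick_100g_option("mmf", 80))    # SR4
print(pick_100g_option("smf", 1500))  # CWDM4
```

Real deployments also weigh cost, power, and the connectors already terminated in the plant, so treat this as a starting point rather than a complete selection tool.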