Data Center Connectivity Blog Part 1: 3 Different Technologies to Interconnect High-Speed Networks

When you pick up your cell phone, laptop, or other edge networking device, you don't give a second thought to how your content arrives. You just expect it to be delivered fast. Cloud, big data, and high-performance computing (HPC) in data centers make this happen transparently at speeds ranging from 10G to 25G, 40G, 100G, and beyond. Network design engineers can choose from three technologies for high-speed network interconnectivity. Let's explore Direct Attach Copper cables (DACs), Active Optical Cables (AOCs), and optical transceivers to help you choose which to use in your network.

Distance Matters.

Direct Attach Copper cables (DACs) transmit data fast over short distances (a 100G DAC spans up to 5 m). Think of two side-by-side racks with a top-of-rack switch connected to a server at the bottom of the rack.

Active Optical Cables (AOCs) drive data farther: up to 100 meters. Think of connecting two server racks in one room or linking subsystems together in system racks.

Optical transceivers push data even longer distances: up to 10 kilometers. Use these modular devices where cabling is pre-installed. Think of interbuilding connections or routing cable connections across a campus.

(Em)Power Your Network & Cut Costs.

DACs, AOCs, and optical transceivers consume less power than many other devices on the market. QSFP+ DACs use less than 0.003 W, AOCs consume about 2.2 W, multimode fiber transceivers use about 3.5 W, and singlemode transceivers use about 4 W. While this may not seem like much, an average data center contains thousands of connections, so saving a few watts per connection quickly lowers the overall power load. Saving just one watt per interconnect can translate to a reduction of up to 250 watts per rack and a significant reduction in operating costs at the facility level.
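To see how these per-device figures scale, here is a minimal back-of-the-envelope sketch using the approximate power draws quoted above; the link-per-rack and rack counts are hypothetical examples, not data from any particular facility.

```python
# Back-of-the-envelope interconnect power totals.
# Per-device draws are the approximate figures quoted above;
# links_per_rack and racks are hypothetical example numbers.

POWER_W = {
    "dac": 0.003,            # QSFP+ DAC (passive copper)
    "aoc": 2.2,              # active optical cable
    "mm_transceiver": 3.5,   # multimode fiber transceiver
    "sm_transceiver": 4.0,   # singlemode fiber transceiver
}

def fleet_power_w(device: str, links_per_rack: int, racks: int) -> float:
    """Total interconnect power draw for a fleet, in watts."""
    return POWER_W[device] * links_per_rack * racks

# Example: 50 links per rack across 1,000 racks.
sm_total = fleet_power_w("sm_transceiver", 50, 1000)  # 200,000 W
dac_total = fleet_power_w("dac", 50, 1000)            # 150 W
print(f"Savings from DACs vs. singlemode: {sm_total - dac_total:,.0f} W")
```

Even with these rough example numbers, choosing the lower-power interconnect where distance allows saves on the order of hundreds of kilowatts across a large facility.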

Look at Latency.

Latency is the time it takes a data packet to travel from its origin to its destination. DACs and AOCs feature low latency, or minimal data transfer delay, because they are all-in-one cable assemblies with fewer connection points than modular transceivers, which exhibit somewhat higher latency. In most applications, though, as an end user you won't notice the difference in latency between DACs, AOCs, and transceivers.

Minimize Maintenance and Maximize Interoperability.

Cables, routers, servers, and adapters must be fully compatible. Because AOCs, DACs, and optical transceivers are all plug-and-play solutions, setup is simple. DACs and AOCs are hot pluggable, so you can connect them to a running system without downtime. Black Box optical transceivers are also hot pluggable, but not every transceiver on the market is; some require you to shut down certain devices before connecting them, so be careful about which transceivers you select.

When it comes to maintenance, DACs and AOCs can be more cost-efficient because they are hardwired assemblies, while transceivers are modular. The connectors on optical transceivers require regular manual cleaning, which makes them harder to maintain than DACs or AOCs.

Improve Airflow.

Cooling every device in your network can be a difficult challenge, especially when you install equipment in high-density racks, which can hold more than 50 cable connections linked to switches, servers, storage devices, and other network equipment. You might choose AOCs or transceivers over DACs because these optical options use thin, lightweight fiber cabling with a tight bend radius, allowing better airflow than bulkier copper. To protect your investment, keep in mind that DACs' thicker copper construction restricts airflow and increases heat in a rack, so your equipment could overheat, causing costly damage.

Futureproof Your Network.

Some engineers like to standardize on singlemode fiber cabling, installing it exclusively in their data centers. They route it neatly through cable managers where they want it, then swap out equipment as needed, keeping upgrades seamless. These engineers prefer the ease of pre-routed cabling over the initial cost savings of switching to AOCs or DACs.

Other network engineers value the lower upfront cost of AOCs and DACs. When they upgrade, however, they must remove the old cabling, buy new AOCs or DACs, and then re-route and organize the cabling after making the connections, which makes upgrades more labor-intensive.

Ultimately, it's a judgment call: weigh upgrade convenience against upfront cost and decide what is right for your network.

So, Which Should You Choose?

Refer to the table below for a quick comparison of the factors to consider when choosing among these three high-speed interconnect technologies. All three help your network deliver high-speed content to your users' edge devices.

DACs, AOCs, and Optical Transceivers Comparison Table
|                     | DACs               | AOCs               | Optical Transceivers             |
|---------------------|--------------------|--------------------|----------------------------------|
| Distance            | up to 5 m          | up to 100 m        | up to 10 km                      |
| Power (approximate) | < 0.003 W          | 2.2 W              | 3.5 W (MM); 4 W (SM)             |
| Latency             | low                | low                | medium                           |
| Maintenance         | hot pluggable      | hot pluggable      | hot pluggable (not all vendors)  |
| Interoperability    | hardwired          | hardwired          | modular                          |
| Cooling             | runs hotter        | runs cooler        | runs cooler                      |
| Futureproofing      | lower initial cost | lower initial cost | easier upgrades                  |

Now you understand the distance, power, latency, maintenance and interoperability, cooling, and futureproofing factors that you need to consider when choosing which of the three technologies to install in your data center. To look at two types of connectivity devices in more detail, check out the next blog in our series “Data Center Connectivity Blog Part 2: DACs vs. AOCs” to learn more or download our free white paper: Data Center Connectivity.
