Network engineers today face a common challenge: how to evolve existing 100G infrastructure to 400G without breaking the bank or disrupting operations. With a plethora of optical transceiver options—from QSFP28 100G LR4 for campus backbones to QSFP28 100G ZR4 for metro links, and from OSFP112-400G-VSR4 for intra-rack connectivity to QSFP56-DD-400G-DR4 for spine-leaf fabrics—the path forward can be confusing. This migration guide provides a practical, step-by-step framework for upgrading your network from 100G to 400G, covering every major transceiver type including QSFP28 100G ER4, QSFP28 100G 100KM, QSFP28 100G BIDI 40KM, QSFP28 100G BIDI 80KM, QDD (QSFP-DD), QSFP112, and QSFP DD DR4. We will also address hybrid scenarios where 100G and 400G must coexist.
Before planning any migration, you must inventory your existing optical links. Most enterprise and data center networks today use QSFP28 form factors. Common 100G modules include:
QSFP28 100G LR4 – 10km, duplex SMF, LC connector. Used for campus backbones and inter-building links.
QSFP28 100G ER4 – 40km, duplex SMF. Used for metro aggregation.
QSFP28 100G ZR4 – 80km, duplex SMF. Used for regional DCI.
QSFP28 100G BIDI 40KM and QSFP28 100G BIDI 80KM – Single-fiber, used where fiber pairs are limited.
QSFP28 100G 100KM – Coherent or amplified modules for long-haul.
You also need to document fiber types (SMF vs MMF), connector types (LC vs MPO), and distances. This assessment will determine which 400G transceivers can directly replace existing optics and which require fiber plant changes.
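To make the assessment concrete, the decision logic above can be sketched in a few lines of Python. The reach thresholds and module names follow the categories used in this guide; the `Link` fields and `suggest_400g` helper are illustrative assumptions, not part of any vendor tool.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    fiber: str       # "MMF" or "SMF"
    connector: str   # "LC" or "MPO"
    distance_m: int

def suggest_400g(link: Link) -> str:
    """Map an existing 100G link to a candidate 400G upgrade path."""
    if link.fiber == "MMF" and link.distance_m <= 100:
        # Existing SR4-style MPO plant can usually be reused
        return "OSFP112-400G-VSR4 / QSFP56-DD-400G-VSR4 (reuse MMF MPO)"
    if link.fiber == "SMF" and link.distance_m <= 500:
        if link.connector == "MPO":
            return "QSFP56-DD-400G-DR4 (reuse parallel SMF)"
        return "QSFP56-DD-400G-DR4 (requires new MPO trunk cabling)"
    return "Retain 100G for now (400G long-reach not yet cost-effective)"

links = [
    Link("rack-a1 to tor", "MMF", "MPO", 30),
    Link("leaf-7 to spine-2", "SMF", "LC", 300),
    Link("dc1 to dc2", "SMF", "LC", 40_000),
]
for link in links:
    print(f"{link.name}: {suggest_400g(link)}")
```

Running this over your inventory gives a first-pass upgrade plan that the remaining steps of this guide can refine.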
Modern 400G optics come in three main form factors: OSFP, QSFP-DD (QDD), and QSFP112. Each has multiple reach variants.
For links under 100 meters—typically inside data center racks or between adjacent racks—OSFP112-400G-VSR4 is the most power-efficient solution. It uses four parallel multimode fiber pairs (eight fibers over an MPO-12 connector) with 850nm VCSELs, consuming only 7-8W. The QSFP56-DD-400G-VSR4 variant offers similar performance in the QSFP-DD footprint, useful if your switch ecosystem is QSFP-based. When migrating from existing 100G SR4 (MMF) links, you can often reuse the same MMF cabling (OM3/OM4), but you will need to upgrade both the switches and the modules to OSFP112-400G-VSR4 or QSFP56-DD-400G-VSR4.
For distances from 100m to 500m, QSFP56-DD-400G-DR4 is the industry standard. It uses four pairs of parallel single-mode fibers (eight fibers of an MPO-12 connector) with 1310nm DFB lasers. If your existing 100G links use QSFP28 100G LR4 over duplex SMF, you cannot directly replace them with DR4 because the fiber count differs (duplex vs 8 fibers). You would need to either re-cable with MPO trunk cables or use a conversion panel. A more practical approach is to keep LR4 for longer 100G links and deploy DR4 only for new 400G spine-leaf fabrics.
Although not the focus of this guide, note that 400G long-reach modules exist (e.g., 400G LR4 for 10km). However, for most migrations, it is cost-effective to retain 100G for long-haul and only upgrade to 400G for short/medium intra-data-center links. 400G ZR (coherent) for 80km+ is still emerging and expensive.
If your existing 100G links use multimode fiber (e.g., QSFP28 100G SR4 over OM3/OM4 with MPO connectors), upgrading to 400G is straightforward. Replace the 100G SR4 modules with OSFP112-400G-VSR4 (or QSFP56-DD-400G-VSR4) on both ends, ensuring the switch supports the new form factor. The same MPO-12 cabling can often be reused, as VSR4 also uses 4 transmit and 4 receive fibers. Link distance remains ≤100m. This is a simple "rip and replace" migration with no fiber re-cabling.
The most difficult migration is from QSFP28 100G LR4 or ER4 (duplex LC) to QSFP56-DD-400G-DR4 (MPO-12 parallel). The fiber plant is incompatible. Options:
Option A: Keep the 100G LR4 links as they are, and build a new 400G parallel SMF overlay. This is common when adding new spine-leaf capacity without touching legacy links.
Option B: Use a passive conversion panel from duplex LC (100G LR4) to MPO-12. This is not possible: LR4 multiplexes four wavelengths onto a single fiber pair, while DR4 carries one wavelength on each of four parallel fibers, so no passive panel can translate between them. You would need an active media converter (e.g., a 400G-to-100G muxponder), which is expensive and rarely justified.
Option C: Replace the entire fiber run with MPO trunk cables. High cost but future-proof.
For most organizations, the pragmatic answer is: do not try to convert existing LR4 links to DR4. Instead, deploy 400G DR4 on new switch ports and new fiber runs, and let legacy 100G LR4 links slowly phase out.
One elegant way to introduce 400G while still using existing 100G optics is breakout. A single QSFP56-DD-400G-DR4 port can be broken out into four 100G DR1 signals using a passive MPO-12-to-4×duplex-LC breakout cable. Those 100G DR1 signals can then connect to four separate QSFP28 ports (if the remote side uses 100G DR1 modules, not LR4). However, most existing 100G modules are LR4, which are not compatible with DR1. To make this work, you would need to replace the remote 100G LR4 modules with 100G DR1 modules (which are less common) or use a converter. Alternatively, some switches support 100G LR4 on breakout ports via an internal gearbox, but this is vendor-specific. Always verify compatibility before planning.
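A simple pre-deployment check can encode this compatibility rule before any hardware is ordered. The module labels below are illustrative shorthand for the families discussed above, not vendor part numbers.

```python
# WDM modules carry 4 wavelengths on one duplex fiber pair; a DR4 breakout
# lane carries one single-wavelength 100G signal on parallel fibers.
WDM_MODULES = {"100G-LR4", "100G-ER4", "100G-ZR4"}
PARALLEL_MODULES = {"100G-DR1"}

def breakout_compatible(remote_module: str) -> bool:
    """Return True if the remote 100G module can terminate a
    400G DR4 breakout lane without active conversion."""
    return remote_module in PARALLEL_MODULES

print(breakout_compatible("100G-DR1"))   # parallel optics: works
print(breakout_compatible("100G-LR4"))   # WDM optics: needs a module swap
```

Extending this with your switch vendor's supported-breakout matrix catches the gearbox-dependent cases mentioned above.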
For metro and long-haul links currently using QSFP28 100G ZR4, QSFP28 100G BIDI 40KM, QSFP28 100G BIDI 80KM, or QSFP28 100G 100KM, migration to 400G is not yet cost-effective. 400G coherent solutions (400G ZR) exist but are significantly more expensive per port. A better strategy is to aggregate multiple 100G long-haul links using DWDM muxponders, and upgrade the core to 400G only for intra-DC switching. For fiber-saving BIDI links, note that there are emerging 400G BIDI standards (e.g., 400G BIDI LR4), but they are not widely deployed. For now, retain your QSFP28 100G BIDI 80KM or 100KM modules until 400G long-haul matures (expected 2026-2027).
Consider a typical data center leaf-spine fabric with 48 leaf switches, each requiring two 100G uplinks to two spine switches. That's 96 100G uplinks. If you upgrade to 400G spine ports with QSFP56-DD-400G-DR4, each spine port can replace four 100G leaf uplinks via breakout (if the leaf switches also support 400G or 100G DR1), so 24 spine ports cover all 96 uplinks. The cost per gigabit of 400G DR4 is approximately 30-40% lower than 100G LR4. However, you must factor in new switch costs. Typically, 400G becomes cost-effective when you need more than roughly 200 100G uplinks or when power and cooling savings justify the upgrade. For smaller deployments, staying with 100G QSFP28 may be more economical.
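The arithmetic above can be checked with a short script. The module prices are placeholder assumptions for illustration only; substitute your own quotes before drawing conclusions.

```python
leaf_switches = 48
uplinks_per_leaf = 2
uplinks_100g = leaf_switches * uplinks_per_leaf   # 96 uplinks

# One 400G DR4 spine port replaces four 100G uplinks via breakout
spine_ports_400g = -(-uplinks_100g // 4)          # ceiling division

price_100g_lr4 = 800.0    # USD per module, assumed for illustration
price_400g_dr4 = 2000.0   # USD per module, assumed for illustration

cost_per_gbit_100g = price_100g_lr4 / 100
cost_per_gbit_400g = price_400g_dr4 / 400
savings = 1 - cost_per_gbit_400g / cost_per_gbit_100g

print(f"400G spine ports needed: {spine_ports_400g}")
print(f"Per-gigabit cost: 100G=${cost_per_gbit_100g:.2f}, "
      f"400G=${cost_per_gbit_400g:.2f}")
print(f"Per-gigabit saving: {savings:.0%}")
```

With these assumed prices the per-gigabit saving lands in the 30-40% range cited above; repeat the calculation with switch and cabling costs included to find your own breakeven point.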
A mid-sized cloud provider had a data center with 100G spine-leaf using QSFP28 100G LR4 over duplex SMF (distances up to 300m). They wanted to upgrade to 400G without re-cabling. The solution: they replaced the spine switches with 400G-capable QSFP-DD switches, installed QSFP56-DD-400G-DR4 modules, and ran new MPO-12 trunk cables alongside the existing duplex fibers. The leaf switches initially remained on 100G LR4, connected to the new spine via breakout—but because LR4 and DR4 are incompatible, they used a small number of 400G-to-4x100G active breakout transponders (external gearboxes). Over time, they replaced leaf switches with native 400G leaf switches and QSFP56-DD-400G-VSR4 for short distances, eventually retiring the legacy 100G LR4 links. Total migration cost was 20% higher than a greenfield deployment but allowed phased investment.
Pitfall 1: Assuming QSFP112 and QSFP28 are freely interchangeable. They are not: a QSFP112 module will not operate in a QSFP28 port, and although QSFP112 cages are mechanically similar to QSFP28, backward compatibility with 100G modules is vendor-dependent and must be verified on your platform.
Pitfall 2: Mixing OSFP112-400G-VSR4 (MMF) with QSFP56-DD-400G-DR4 (SMF) on the same link. They are not interoperable.
Pitfall 3: Forgetting FEC requirements. 400G PAM4 links require RS-FEC; older 100G NRZ links may not. Ensure your switch configuration enables FEC on 400G ports.
Pitfall 4: Using QSFP28 100G BIDI 80KM on a fiber path with high reflections (e.g., dirty connectors). BIDI is sensitive to back-reflection; use APC connectors and clean thoroughly.
Pitfall 5: Assuming QSFP28 100G 100KM modules work in any QSFP28 slot. Many require special coherent-capable host platforms.
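Pitfall 3 in particular lends itself to automated checking. The sketch below encodes the rule that 400G PAM4 ports require RS-FEC (RS544, also called KP4 FEC); the port labels and the `check_fec` helper are illustrative, and the actual FEC configuration syntax is vendor-specific.

```python
# Required FEC per port type; None means FEC is optional on that link.
REQUIRED_FEC = {
    "400G-DR4": "RS-FEC (RS544/KP4)",   # mandatory for 400G PAM4
    "400G-VSR4": "RS-FEC (RS544/KP4)",
    "100G-LR4": None,                    # NRZ link; FEC often optional
}

def check_fec(port_type: str, fec_enabled: bool) -> str:
    """Flag ports whose mandatory FEC is disabled in the config."""
    required = REQUIRED_FEC.get(port_type)
    if required and not fec_enabled:
        return f"ERROR: {port_type} requires {required}"
    return "OK"

print(check_fec("400G-DR4", fec_enabled=False))
print(check_fec("100G-LR4", fec_enabled=False))
```

A check like this can run against exported switch configs during the migration to catch FEC mismatches before links flap.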
When selecting 400G switches and optics, consider whether they can support 800G in the future. OSFP and QSFP-DD both have 800G roadmaps (800G SR8, DR8), based on eight electrical lanes running at 112G each. OSFP112-400G-VSR4 already uses the 112G electrical lane standard that underpins 800G (8×112G). On the QSFP side, the QSFP-DD800 variant reaches 800G with eight 112G lanes, though its compact footprint makes thermal management more challenging than OSFP. Investing in 112G-per-lane infrastructure today (e.g., OSFP112 or QSFP112) will ease future upgrades to 800G.
Can I upgrade a QSFP28 100G LR4 link to QSFP56-DD-400G-DR4 simply by swapping the modules? No. The form factors are different (QSFP28 vs QSFP-DD), and the optical interface (duplex LC vs MPO-12) is incompatible. You need a new switch and new cabling.
Is there a 400G equivalent of QSFP28 100G BIDI 80KM? Not yet standardized. Some vendors offer 400G BIDI for short reach (e.g., 2km), but for 80km, 400G coherent ZR is the path forward, and it typically uses duplex fiber.
How can I take an existing 80km QSFP28 100G ZR4 link to 400G? You could install 400G ZR coherent modules if both ends support coherent optics. However, the cost is high. For most operators, keeping 100G ZR4 and adding a second 100G wavelength (via WDM) is more economical.
Which short-reach 400G module draws less power, OSFP112-400G-VSR4 or QSFP56-DD-400G-VSR4? OSFP112 typically runs 7-8W, while the QSFP56-DD variant runs 8-9W due to the gearbox overhead of mapping eight 50G electrical lanes onto four 100G optical lanes. OSFP112 is the more power-efficient choice.
Can I pair a QSFP28 100G BIDI 40KM module with a QSFP28 100G BIDI 80KM module on the same link? No. Both ends must use the identical reach and wavelength pair. Mixing 40km and 80km modules may cause link errors or receiver overload due to the transmit-power mismatch.
How do I verify that my existing single-mode fiber can carry 400G DR4? Measure the end-to-end loss at 1310nm. DR4 requires ≤3.5dB insertion loss including connectors. If your loss is higher (e.g., 4dB), the link may still come up with reduced margin, but operation is not guaranteed. Use an OTDR to locate high-loss events.
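As a worked example, the DR4 loss budget can be tallied per event. The 3.5dB limit is the figure quoted above; the per-connector and per-splice losses are typical planning assumptions, not measured values.

```python
MAX_LOSS_DB = 3.5                # DR4 insertion-loss budget quoted above

fiber_loss_db_per_km = 0.35      # SMF at 1310nm, typical planning value
connector_loss_db = 0.5          # per mated connector pair, assumed
splice_loss_db = 0.1             # per fusion splice, assumed

def link_loss(length_km: float, connectors: int, splices: int) -> float:
    """Estimate total link insertion loss in dB."""
    return (length_km * fiber_loss_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db)

# Example: 500m run with 4 mated connectors and 2 splices
loss = link_loss(length_km=0.5, connectors=4, splices=2)
verdict = "within" if loss <= MAX_LOSS_DB else "exceeds"
print(f"Estimated loss: {loss:.2f} dB ({verdict} {MAX_LOSS_DB} dB budget)")
```

An OTDR trace remains the authoritative check; this estimate only tells you whether the design is plausible before testing.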
What should I watch out for when buying QSFP28 100G 100KM coherent modules? Coherent 100G modules often have vendor-locked firmware. Buy from the same vendor as your switch, or choose a third-party module with broad, tested multi-vendor compatibility.
Migrating from 100G to 400G does not require a forklift upgrade. The most successful strategies are phased: first, deploy 400G in new spine-leaf fabrics using QSFP56-DD-400G-DR4 or OSFP112-400G-VSR4 where fiber and distance allow. Second, use breakout to interconnect with existing 100G leaf switches where possible, accepting some compatibility costs. Third, retain long-haul 100G links (ZR4, BIDI 40KM/80KM, 100KM) until 400G coherent becomes cost-competitive. Finally, retire legacy QSFP28 100G LR4/ER4 links as they reach end-of-life.
Our team specializes in end-to-end optical migration planning. We offer free site surveys, link budget calculations, and compatibility testing for all transceivers mentioned in this guide—from QSFP28 100G BIDI 40KM to QSFP56-DD-400G-DR4. Whether you are running a small enterprise campus or a hyperscale data center, we can design a migration roadmap that minimizes disruption and maximizes return on investment.
Contact us today for a no-obligation consultation. Let us help you move from 100G to 400G with confidence.
Headquarters address: Room 1603, Coolpad Building B, North District of Science and Technology Park, Nanshan District, Shenzhen, China, 518057.