By Peter Zanatta
For most of its working life, the data centre ran on a simple idea: move data as electricity through copper and don’t overthink it. And to be fair, that idea worked astonishingly well. For decades.
But that era is ending. Modern data centres, especially those built for AI, are no longer just places where computation happens. They are places where vast amounts of data are constantly being shuffled around at ridiculous speed. Chips aren’t quietly crunching numbers anymore. They are shouting at each other all the time.
Here’s the uncomfortable truth: in many AI systems, more energy is now spent moving data than doing the actual thinking. When the plumbing starts using as much energy as the engine, you have a design problem.
Copper is not broken. It’s just tired. Pushing ever more data through electrical connections comes at a price. Higher power draw. More heat. More clever tricks to keep signals intact over ever-shorter distances. The system still works, but only just. At some point, it stops feeling like good engineering and starts feeling like wishful thinking.
This is where photonics enters. Light has carried data across cities and oceans for decades. What’s changed is that optics is no longer stopping at the data centre door. It’s moving into the racks, into the switches, and increasingly right up next to the chips themselves. The goal is refreshingly simple: move more data, using less power, over longer distances.
Instead of electrons, you use photons. And that turns out to be a very good idea. Optical links can already handle 800 gigabits per second, with 1.6 terabits firmly in sight. Crucially, they don’t demand the same steep rise in power and complexity that electrical connections do. That matters enormously at hyperscale, where every watt is counted, argued over, and paid for many times over.
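To see why the power gap matters, a rough back-of-envelope helps. The energy-per-bit figures below are illustrative assumptions, not measured or vendor numbers: long-reach electrical signalling is often discussed in the tens of picojoules per bit, while optical links close to the chip target low single digits.

```python
# Back-of-envelope: power drawn by an 800 Gb/s link at a given energy per bit.
# The pJ/bit values are illustrative assumptions, not specifications.

def link_power_watts(bit_rate_gbps: float, energy_pj_per_bit: float) -> float:
    """Power (W) = bits per second x joules per bit."""
    bits_per_second = bit_rate_gbps * 1e9
    joules_per_bit = energy_pj_per_bit * 1e-12
    return bits_per_second * joules_per_bit

electrical = link_power_watts(800, 15)  # assumed 15 pJ/bit, electrical
optical = link_power_watts(800, 4)      # assumed 4 pJ/bit, optical

print(f"Electrical: {electrical:.0f} W per link")  # 12 W
print(f"Optical:    {optical:.1f} W per link")     # 3.2 W
```

A few watts per link sounds trivial until you multiply it by the tens of thousands of links in a single AI cluster, which is exactly where hyperscale accounting bites.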
There’s another benefit that matters just as much: distance stops being an enemy. High-speed electrical links are fragile and short-tempered. Push them too far and they sulk. Light, by contrast, behaves itself. Once distance is no longer a hard limit, data centres don’t have to be packed together like nervous commuters. Compute, memory and storage can be placed where they make sense, not where copper insists.
That unlocks architectural freedom. Resources can be pooled, shared and rearranged as needed. Large AI workloads, which are sprawling by nature, benefit immediately. This is why photonics isn’t just a faster cable. It’s a different way of designing the whole system.
In the short term, most of the action is around bringing optics closer to the switching silicon. Co-packaged optics does exactly what it says on the tin: optical engines sit right next to network chips, shrinking electrical paths and cutting power use. NVIDIA has been very public about this approach in its networking platforms, promising massive scale without a matching spike in energy consumption.
Manufacturers are putting real money behind this shift. GlobalFoundries’ move to strengthen its silicon photonics capability is a clear signal that light-based interconnects are no longer a niche experiment. Startups like Ayar Labs and Lightmatter are pushing optics even closer to the chip, aiming to move data faster and with far less power than traditional electrical signalling.
And it’s not just suppliers driving this. Hyperscalers are under growing pressure from their own energy bills. Google, Microsoft and Amazon are building enormous data centre campuses to support AI, and the power demands are becoming a board-level concern.
This is why we now hear serious talk about small modular nuclear reactors and other exotic power sources. These discussions aren’t science fiction. They exist because the scale of demand is very real.
But there’s a sensible order to things. Before reinventing the power grid or parking reactors next to server halls, it makes sense to reduce demand first. Cutting the energy needed to move data inside the data centre is a far more practical step today than conjuring up entirely new sources of electricity.
This is where optical interconnects earn their keep. They move the same data using far less power, generate less heat, and reduce the burden on cooling systems. That translates directly into lower operating costs and fewer headaches for operators and local grids alike.
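What those savings look like on a bill can be sketched with simple arithmetic. Every input here is an assumption chosen for illustration: link counts, watts saved per link, electricity price and cooling overhead (PUE) all vary widely by facility.

```python
# Rough annual cost of interconnect power saved at scale.
# All inputs are illustrative assumptions, not data from any real facility.

def annual_savings_usd(links: int, watts_saved_per_link: float,
                       price_per_kwh: float = 0.10, pue: float = 1.4) -> float:
    """Yearly savings: IT watts saved x PUE x hours per year x $/kWh."""
    kw_saved = links * watts_saved_per_link * pue / 1000
    return kw_saved * 8760 * price_per_kwh

# e.g. 100,000 links each saving an assumed ~9 W, at $0.10/kWh and PUE 1.4:
print(f"${annual_savings_usd(100_000, 9):,.0f} per year")  # roughly $1.1M
```

The point is not the exact figure but the shape of it: modest per-link savings compound through cooling overhead and hours of continuous operation into sums that operators and grids both notice.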
None of this happens overnight. Optical systems bring new suppliers, new testing regimes and new skills. Lasers don’t fail like wires, and engineers will have to adjust. The transition will be uneven, occasionally awkward, and sometimes expensive.
But the direction of travel is obvious. Copper is hitting physical and economic limits. Energy prices are rising. AI’s hunger for bandwidth is not going away. Faced with those pressures, data centres will keep choosing the option that scales further and wastes less power.
Not because it’s elegant, but because it works. And one day, we may look back and realise that the modern data centre stopped being an electrical machine with a bit of fibre attached and quietly became something else entirely: a system built on light, with copper only where it still makes sense.