Data transmission is not the same as data processing, and bandwidth from component A to component B is rarely the primary limiting factor in performance.
Optical/photonic computing is a fascinating topic, but thinking it's currently some sort of drop-in replacement for electrical circuits, or that an interconnect constitutes a processor, is a grievous fallacy.
A short article on the general topic... pay particular attention to the 'challenges' section:
https://www.findlight.net/blog/2019/02/01/optical-computing/
This is probably not that accurate, but
You've described the theoretical throughput of some sort of vague optical interconnect which does not remotely constitute a computer, let alone a product that Frontier could acquire.
A supercomputer does not equal super software.
Just because you have millions of LEDs and some fiber optic cable doesn't mean you could build a useful, much less super, computer with them.
The idea is to hook it up to a normal PC to drive something like a micro LED patch, use cheap materials to amplify, refract, and otherwise manipulate the light, and feed it back into the PC through something like a bunch of normal optical connections. The hope is to use static errors in the stream to auto-correct the data on its way to the video card or monitor. The PC could store the calibration data while the new optical components do the heavy lifting.
The optical components you haven't described aren't doing anything except acting as a loopback.
Anyone have a TL;DR for us Twitter users?
OP is proposing a 1 GHz, 16.7-million-bit optical interconnect that would provide vastly more bandwidth than the hardware it's connected to could use, while harming performance by requiring latency-introducing conversion at each end.
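For scale, the raw throughput implied by those numbers can be sketched in a couple of lines. The 1 GHz and 16.7-million-bit figures come from the proposal above; the PCIe baseline is an approximate figure added here purely for comparison:

```python
# Back-of-envelope throughput for the proposed optical link.
channels = 16_700_000      # parallel bits per cycle, per the proposal
clock_hz = 1_000_000_000   # 1 GHz, per the proposal
bits_per_sec = channels * clock_hz
bytes_per_sec = bits_per_sec / 8

pcie4_x16 = 32e9           # ~32 GB/s, PCIe 4.0 x16 (approximate baseline)
print(f"link: {bytes_per_sec / 1e15:.2f} PB/s")                    # link: 2.09 PB/s
print(f"ratio vs PCIe 4.0 x16: {bytes_per_sec / pcie4_x16:,.0f}x")
```

Roughly two petabytes per second, tens of thousands of times more than the host PC's bus could absorb, which is the skeptic's point: the bottleneck is everything except the link.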
Well, the means to easily change the data at nanosecond response times is just becoming common with MicroLED. It helps that it potentially has much faster response times.
Hell, you could even use the speed to double-layer data into a read device. If you can sacrifice some of the performance, you could make it double colors at the end of the stream to get end results. Drive it with a video card, propagate it from a MicroLED patch of light, and amplify it into a light stream. Double-layer it and use a read device to physically combine the results into a new source. Then the driver can change the data and compute with light flashes at a cost of half the bandwidth. Instant calculations. You just need a simple way to get it to send the correct data bursts and translate the data into the optical computer logically. Maybe even translate normal code.
The video card sends signals to MicroLED strip(s), which display the corresponding light. The light is amplified and sent to a read device. The MicroLED uses half of its response time sending two halves of a piece of data, and the read device combines them physically and derives the new data. Double Data Rate MicroLED.
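For what it's worth, the "combine two flashes per response window" idea is only well-defined if the two half-cycle flashes are distinguishable after summing; otherwise (1, 0) and (0, 1) produce the same combined intensity. A minimal sketch, assuming the read device integrates intensity and the second flash is driven at double amplitude so every 2-bit pair maps to a unique level:

```python
# Sketch of the proposed DDR-style scheme. Assumptions: the reader sums
# intensities, and the second flash is weighted 2x to avoid ambiguity.
def emit(bit_pair):
    first, second = bit_pair
    return first * 1 + second * 2  # summed intensity seen by the reader

def read(level):
    return (level & 1, (level >> 1) & 1)  # recover the two bits

pairs = [(0, 0), (1, 0), (0, 1), (1, 1)]
assert all(read(emit(p)) == p for p in pairs)  # each pair decodes uniquely
```

Note that this is just positional binary weighting: the "read device" is doing a 2-bit analog-to-digital conversion, not computation.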
Or you could just drive two separate ones and combine the streams at the end for the varied combined read data. Then you don't need to compute with odd optical devices, just a bunch of reads and writes combined to make different data and get the end result.
If you can have a device successfully read two light streams and get a desired result every time, you can compute with a video card. You just need some software to send the correct data at much less than the end result and have the optical side translate it for you into the greater data source, then use it. You could split-stream data to other devices to translate the parts needed and get it to the monitor, etc., as quickly as possible.
This is a bunch of treknobabble that features vague references to an optical computer (the 'read device') doing some sort of processing, but doesn't make any coherent mention of where this optical computer would come from.
The smallest LEDs are many orders of magnitude larger than modern transistors and you'd require far more than just an LED to make an optical transistor equivalent. You could use them as part of a set of logic gates that talked to each other with flashing light, but they'd be enormous, and enormously slow, compared to an equivalent integrated circuit.
Even real proposals for photonic transistors are likely to be limited to signal routing applications because they may never be competitive for processor logic. Faster theoretical switching speed, even if fully realized, likely wouldn't overcome the density penalty. If you can cram a hundred times as many electronic transistors in the same area, they are going to make a much faster processor, even if they cannot cycle as fast.
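The density-versus-clock tradeoff in that last point can be made concrete. The 100x density figure is from the post above; the clock-speed numbers are illustrative assumptions, deliberately generous to the photonic side:

```python
# Illustrative comparison: density beats raw switching speed.
# All numbers are relative and assumed for the sake of the argument.
electronic_density = 100   # transistors per unit area (100x, per the post)
photonic_density = 1
electronic_clock = 1.0     # relative clock rate
photonic_clock = 10.0      # granting photonics a generous 10x switching speed

# Crude proxy for aggregate compute: transistor count * clock rate.
electronic_throughput = electronic_density * electronic_clock
photonic_throughput = photonic_density * photonic_clock

print(electronic_throughput / photonic_throughput)  # 10.0
```

Even with a 10x switching-speed advantage fully realized, the electronic chip still comes out 10x ahead on this crude measure, which is why the density penalty dominates.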