Has FDev considered building an optical computer?

Perhaps a bit off topic, but I've always thought building a mechanical hydraulic computer, based on water pressure and the like instead of electricity, would be pretty cool in a weird sort of primitive tech hybrid way.

Probably the wrong sort of game for that though.
 
If we take what we've learned from Doctor Who and The Matrix, we could use half the human race to create a gigantic organic super-computer and the other half could be used to power it.
 
Ironically:



I think Frontier can do better!

If you could hypothetically increase color depth, you could beat that if you wanted to.

2^64 × 1,000,000,000 bits per second ≈ 1.86 brontobytes of throughput per second.

2^34 × 1,000,000,000 / 8 / 1024^6 = 1.86264514923095703125 (exbibytes per second)

Only 33-34 bits of color depth and a one-nanosecond response for exabit-level performance. 34 bits is not that hard, potentially. And every extra 10 bits multiplies the figure by 1024, bumping it up to the next prefix.
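For anyone who wants to sanity-check those numbers, here's a rough Python check. It only reproduces the assumption in this post (a color depth of N bits counted as 2^N bits per one-nanosecond flash, with binary 1024-based prefixes); it says nothing about whether any hardware could actually do this.

```python
# Reproduces the figures above under this post's own assumption that a
# color depth of N bits is counted as 2**N bits per flash, with one flash
# per nanosecond (1 GHz) and binary (1024-based) prefixes.

FLASHES_PER_SECOND = 1_000_000_000   # 1 ns response time
BITS_PER_BYTE = 8
EXBIBYTE = 1024 ** 6                 # 2**60 bytes
BRONTOBYTE = 1024 ** 9               # 2**90 bytes (the "binary" brontobyte)

def throughput(depth_bits, unit):
    """Bytes per second, expressed in the chosen unit."""
    bits_per_second = 2 ** depth_bits * FLASHES_PER_SECOND
    return bits_per_second / BITS_PER_BYTE / unit

print(throughput(64, BRONTOBYTE))   # ~1.8626 brontobytes/s
print(throughput(34, EXBIBYTE))     # ~1.8626 exbibytes/s
```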

If the errors in such a machine are also static, then it could be easy to calibrate even a sloppy machine to get the correct output. You use a normal PC to modify the stream on the way to a monitor and apply static corrections. This could mean you don't need perfect components to make it work, and you can use cheaper, more redundant methods to get the machine working fundamentally.
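As a toy illustration of that calibration idea (the "sloppy" device and its error map below are made up, not real optical hardware):

```python
# Toy illustration of correcting *static* (repeatable) errors with a stored
# calibration table. The "sloppy" device and its error map are hypothetical
# stand-ins, not real optical parts.

# Pretend the optical path always swaps certain 4-bit symbols the same way.
STATIC_ERROR_MAP = {0b0110: 0b0111, 0b0111: 0b0110,
                    0b1000: 0b1010, 0b1010: 0b1000}

def sloppy_optical_read(symbol):
    """Stand-in for a cheap, imperfect read that errs consistently."""
    return STATIC_ERROR_MAP.get(symbol, symbol)

def calibrate():
    """Send every symbol through once and record how to undo each error."""
    return {sloppy_optical_read(s): s for s in range(16)}

CORRECTION = calibrate()

def corrected_read(symbol):
    """The normal PC applies the stored correction on the way to the monitor."""
    return CORRECTION[sloppy_optical_read(symbol)]

assert all(corrected_read(s) == s for s in range(16))
```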

More ironically, it isn't an optical computer; maybe you should go to Oak Ridge and tell them they should use an optical computer instead. :ROFLMAO:
 

Not sure why you don't just split the light into as many light waves as needed and then read them with things you already have... I would think you want as little of the electronic system involved as possible. There should be lots of ways with varying materials to accomplish processing.

You can already absolutely read things faster than you could hope to change the source light. And you don't need very large read objects, although you could make them larger if desired for cost or other reasons.
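If the idea is just more parallel light channels, each read by detectors that already exist, the aggregate figure is simple multiplication. The numbers below are hypothetical and ignore everything a real link loses to coding, crosstalk, and the electronics at each end.

```python
# Back-of-envelope aggregate bandwidth for N independent light channels.
# All figures are hypothetical.

channels = 64                    # hypothetical number of wavelengths split out
symbol_rate_hz = 1_000_000_000   # 1 GHz per channel, as assumed above
bits_per_symbol = 2              # e.g. modest 4-level signalling per channel

aggregate_bits_per_s = channels * symbol_rate_hz * bits_per_symbol
print(f"{aggregate_bits_per_s / 1e9:.0f} Gbit/s aggregate")  # 128 Gbit/s
```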
 
I read all that.

And wow, just wow.

I'm beginning to think the Duchess Le Chardon has spiked my nightcap dram with a couple of methamphetamines, or was it Onionhead in my supper?

Does space really taste of burnt steak, or is it burnt bronto(byte) burgers?



But I don't want to get electrocuted finding out, so why don't we ask @Relpsi how she tastes thargoids, and what the void tastes like?

I think @Noobilite has missed a trick here. Instead of developing optical computing pretty much from scratch and having to port Elite to it, then, when everyone else goes quantum, migrating from optical to quantum, why don't FDev DEVELOP quantum computing and focus on quantum-entanglement-based networking? This "spooky action at a distance" negates ping, permitting much better networking and mitigating the undesirable effects of the game's network architecture. A couple of thousand qubits ~= most of the processing power on Earth just now, so....

 
Nice. I'll have to spend some time checking it out in detail.

The Channelwood world in Myst, along with the often-used water analogies for electricity, circuitry, and electronic components, made me think of making something like that.

Ah Myst, fond memories, I think that's where the OP is living.

If it were so cheap and easy to build optical computers so much faster than these hokey old electronic pieces of crap we are all using, why aren't all the HPCs in the world optical computers? In fact, why aren't all computers optical computers?
 
Well, the means to easily change the data at nanosecond response times is just becoming common with MicroLED. It helps that it potentially has much faster response times.

Hell, you could even use the speed to double-layer data into a read device. If you can sacrifice some of the performance, you could make it double colors at the end of the stream to get end results. Drive it with a video card, propagated from a MicroLED patch of light, and amplify it into a light stream. Double-layer it and use a read device to physically combine the results, or similar, into a new source. Then the driver can change the data and compute with light flashes at a cost of half the bandwidth. Instant calculations. You'd just need a simple way to get it to send the correct data bursts and translate the data into the optical computer logically. Maybe even translate normal code.

The video card sends signals to MicroLED strip(s). It displays the corresponding light. The light is amplified and sent to a read device. The MicroLED uses half of its response time sending each of the two halves of a piece of data. The read device combines them physically and derives the new data. Double Data Rate MicroLED.

Or you could just drive two separate ones and combine the streams at the end for the varied combined read data. Then you don't need to compute with odd optical devices, just a bunch of reads and writes combined to make different data and get the end result.

If you can have a device successfully read two light streams and get a desired result every time, you can compute with a video card. You'd just need some software to send the correct data at much less than the end result, have the optical side translate it for you into the greater data source, and then use it. You could split the stream data to other devices to translate the parts needed and get it to the monitor, etc., just as quickly.
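A toy model of that "two halves per response window" idea, with the read device reduced to a placeholder function that just reassembles the halves (no optics, no real computation):

```python
# Toy model of the "Double Data Rate MicroLED" idea: two half-words sent in
# the two halves of one response window, reassembled by a "read device" that
# is only a placeholder function here, not a real optical part.

WORD_BITS = 16
HALF_BITS = WORD_BITS // 2
HALF_MASK = (1 << HALF_BITS) - 1

def split_for_transmit(word):
    """What the video card would flash in the first and second half-cycle."""
    return word >> HALF_BITS, word & HALF_MASK

def read_device_combine(high, low):
    """What the read device would reconstruct at the end of the stream."""
    return (high << HALF_BITS) | low

word = 0xBEEF
assert read_device_combine(*split_for_transmit(word)) == word
```

Two half-width flashes per window carry the same number of bits as one full-width flash, which is the "half the bandwidth" cost mentioned above.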
 
Perhaps a bit off topic, but I've always thought building a mechanical hydraulic computer, based on water pressure and the like instead of electricity, would be pretty cool in a weird sort of primitive tech hybrid way.

Probably the wrong sort of game for that though.
Works well with the 1-ton Docking Computer :LOL:

Also, tl;dr, and I bet Elite wouldn't be the first application for something like that.
 
Data transmission is not the same as data processing and bandwidth from component A to component B is rarely the primary limiting factor in performance.

Optical/photonic computing is a fascinating topic, but thinking it's currently some sort of drop-in replacement for electrical circuits, or that an interconnect constitutes a processor, are grievous fallacies.

A short article on the general topic... pay particular attention to the 'Challenges' section: https://www.findlight.net/blog/2019/02/01/optical-computing/

This is probably not that accurate, but

You've described the theoretical throughput of some sort of vague optical interconnect which does not remotely constitute a computer, let alone a product that Frontier could acquire.

Super computer does not equal super software. 😐

Just because you have millions of LEDs and some fiber optic cable doesn't mean you could build a useful, much less super, computer with them.

The idea is to hook it up to a normal PC to drive something like a MicroLED patch, use cheap materials to amplify, refract, and do anything else to the light, and feed it back into the PC with something like a bunch of normal optical connections. This then relies on, hopefully, static errors in the stream so they can be auto-corrected on the way to the video card or monitor as data, etc. The PC could be used as a means to store the calibration data while the new optical components do the heavy lifting.
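A skeleton of that loop, with every stage reduced to a placeholder function and a single made-up static fault that the stored calibration undoes on the way back:

```python
# Skeleton of the loop described above: a normal PC drives an emitter patch,
# the light passes through some passive optics, detectors read it back, and
# the PC applies a stored calibration before using the result. Every stage
# is a placeholder, not real hardware.

def drive_microled(word):
    """PC -> MicroLED patch (placeholder: the 'light' is just the word)."""
    return word

def passive_optics(light):
    """Amplify / refract / split (placeholder: one fixed, repeatable fault)."""
    return light ^ 0b0100          # hypothetical static error

def detectors(light):
    """Optical -> electrical read-back (placeholder)."""
    return light

STORED_CALIBRATION = 0b0100        # measured once, kept on the PC

def round_trip(word):
    raw = detectors(passive_optics(drive_microled(word)))
    return raw ^ STORED_CALIBRATION   # undo the static error on the way back

assert round_trip(0b1011) == 0b1011
```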

The optical components you haven't described aren't doing anything except acting as a loopback.

Anyone have a TL;DR for us Twitter users?

OP is proposing a 1 GHz, 16.7-million-bit optical interconnect that would provide vastly more bandwidth than the hardware it's connected to could use, while harming performance by requiring latency-introducing conversion at each end.
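To put the latency point in rough numbers (all figures below are hypothetical, not measurements of any real device):

```python
# Hypothetical numbers only, to show why conversion latency rather than link
# bandwidth dominates small, latency-bound transfers.

link_bits_per_s = 2 ** 24 * 1_000_000_000   # the proposed interconnect
payload_bits = 512                          # a small, latency-sensitive transfer
conversion_latency_s = 50e-9                # assumed E/O + O/E overhead per end

serialisation_s = payload_bits / link_bits_per_s        # ~3e-14 s: negligible
total_s = serialisation_s + 2 * conversion_latency_s    # ~1e-7 s: conversion-bound

print(f"serialisation {serialisation_s * 1e9:.2e} ns, total {total_s * 1e9:.1f} ns")
```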

Well, the means to easily change the data at nanosecond response times is just becoming common with MicroLED. It helps that it potentially has much faster response times.

Hell, you could even use the speed to double-layer data into a read device. If you can sacrifice some of the performance, you could make it double colors at the end of the stream to get end results. Drive it with a video card, propagated from a MicroLED patch of light, and amplify it into a light stream. Double-layer it and use a read device to physically combine the results, or similar, into a new source. Then the driver can change the data and compute with light flashes at a cost of half the bandwidth. Instant calculations. You'd just need a simple way to get it to send the correct data bursts and translate the data into the optical computer logically. Maybe even translate normal code.

The video card sends signals to MicroLED strip(s). It displays the corresponding light. The light is amplified and sent to a read device. The MicroLED uses half of its response time sending each of the two halves of a piece of data. The read device combines them physically and derives the new data. Double Data Rate MicroLED.

Or you could just drive two separate ones and combine the streams at the end for the varied combined read data. Then you don't need to compute with odd optical devices, just a bunch of reads and writes combined to make different data and get the end result.

If you can have a device successfully read two light streams and get a desired result every time, you can compute with a video card. You'd just need some software to send the correct data at much less than the end result, have the optical side translate it for you into the greater data source, and then use it. You could split the stream data to other devices to translate the parts needed and get it to the monitor, etc., just as quickly.

This is a bunch of treknobabble that features vague references to an optical computer (the 'read device') doing some sort of processing, but doesn't make any coherent mention of where this optical computer would come from.

The smallest LEDs are many orders of magnitude larger than modern transistors and you'd require far more than just an LED to make an optical transistor equivalent. You could use them as part of a set of logic gates that talked to each other with flashing light, but they'd be enormous, and enormously slow, compared to an equivalent integrated circuit.

Even real proposals for photonic transistors are likely to be limited to signal routing applications because they may never be competitive for processor logic. Faster theoretical switching speed, even if fully realized, likely wouldn't overcome the density penalty. If you can cram a hundred times as many electronic transistors in the same area, they are going to make a much faster processor, even if they cannot cycle as fast.
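The density-versus-clock trade-off is just multiplication (illustrative relative numbers, not real device specs):

```python
# Illustrative relative numbers only: throughput scales roughly with
# (transistor count x switching rate), so a 100x density advantage beats
# even a generous 10x clock advantage.

electronic_density, electronic_clock = 100, 1   # relative, hypothetical
photonic_density, photonic_clock = 1, 10        # relative, hypothetical

print(electronic_density * electronic_clock)    # 100
print(photonic_density * photonic_clock)        # 10
```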
 
It could be relatively inexpensive. I'm sure components exist to do most or all of it. And if you can get near supercomputer speeds for something near the cost of a normal PC it would probably be worth it if it worked.
I'd say definitely not, probably. :ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO:

In fact why stop there?
Why the hell aren't FDev running the game on a quantum supercomputer with room temperature superconductors?
And if we all upgraded our PCs and consoles to use cheap quantum-comms entangled to FDev's servers there wouldn't even be any network lag.

Get it done FDev!!! I expect it'll be part of the New Era. I've got a LEP so I assume that I'll get my quantum entanglement for free.
 