While you're here - how did you go about licking Thargoids without electrocuting yourself? Just something that came to mind earlier in this thread...
More likely to be Guardian technology for optical computing....
Maybe we need some Thargoid technology to make these optic computers.
The actual science of it, no, this is not silly.
I don't think anyone is being unreasonable.
But will it run Crysis?
Now please don't be silly. On to a more serious question on the subject of optical computing:
If a bonfire on top of a hill 10 km away is relaying optical data with a maximal theoretical throughput of 250 Gigaembers per second in the white/yellow spectrum, how long would it take to download the entirety of Red Dead Redemption 2 through a kaleidoscope with a 50 mm aperture? And how long would the kaleidoscope tube have to be to store the entire game?
Asking relevant questions here, for science.
Because you could simplify the data down to image data and possibly simplify the method of manipulating it quite a bit.
I haven't heard you or anyone else go into a single specific about the software side of this.
I’m still laughing at exaflops.
I know, but it still sounds ridiculous.
Oh, that's actually a real word.
FLOP/s == floating-point operations per second
'Exa-' is a number prefix -> Mega-, Giga-, Tera-, Peta-, Exa-, etc.
'Exascale' is the current target and buzzword in high performance computing; as a single multi-socket server can now do well over a TeraFLOP/s and most Clusters have a peak performance somewhere in the PetaFLOP range, exceeding one ExaFLOP/s is obviously the next step.
E.g.: The current leading system of the HPC top 500 is the 'Summit' cluster at Oak Ridge, with ~2.5 million CPU cores and a theoretical peak performance of ~200 PetaFLOP/s.
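To put those prefixes and the Summit figures from the post in one place, here's a quick back-of-the-envelope check in Python (the ~200 PetaFLOP/s peak and ~2.5 million cores are the numbers quoted above; everything else is just prefix arithmetic):

```python
# SI number prefixes as powers of ten.
PREFIXES = {"Mega": 1e6, "Giga": 1e9, "Tera": 1e12, "Peta": 1e15, "Exa": 1e18}

summit_peak_flops = 200 * PREFIXES["Peta"]  # ~200 PetaFLOP/s theoretical peak
summit_cores = 2.5e6                        # ~2.5 million CPU cores

# Average theoretical peak per core: 2e17 / 2.5e6 = 8e10 FLOP/s.
per_core = summit_peak_flops / summit_cores
print(f"{per_core / PREFIXES['Giga']:.0f} GigaFLOP/s per core")  # 80

# And how far that is from the 'Exascale' target: 1e18 / 2e17 = 5.
print(f"{PREFIXES['Exa'] / summit_peak_flops:.0f}x Summit peak for 1 ExaFLOP/s")
```

So "exascale" really just means roughly five Summits' worth of peak throughput in one machine.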
Wouldn't these sorts of things be useful for any software where you need a lot of computation and the end result is a relatively simple answer? Or where you can tolerate the lag because you don't need the result live? Not sure what that would be, though. Would it aid in graphics processing for very large projects, say if you had separate bulk storage for something like a movie? Or medical research?
I thought there were ways to, in essence, get complex results by simply sending them to a monitor as displayed output instead of to storage or as a data value. If you don't need to do certain things with the end result, does it matter?
Puff Puff Give!
Puff, puff, pass.
Emphasis on the pass.