Has Fdev considered building an optical computer?

Okay. I know this forum is a weird place.
But how did this thread go from "optical computers" to "licking thargoids"?
:oops::ROFLMAO::ROFLMAO::ROFLMAO:
Are you really sure you want to know?

Are you Really Really sure?

Well tough... You may want the truth but:

JackNicholson.jpg


OK!

Against my better judgement I'll take you on a guided tour down that particular rabbit hole, but to quote physics YouTuber Isaac Arthur, you might want to get a snack and a drink for this...

Very early in this thread @Nasc commented:
Optical things are boring. I want a computer that will let me taste space without electrocuting me or getting me kicked out of a store.

And that's how sticking your tongue into inappropriate places came to this thread... so keep that in the back of your mind.

This was something @WR3ND picked up and ran with:
Spoiler: It tastes a little bit like burnt steak, apparently.

Cheers.

By the time I got my first reply into this thread, it had already picked up a bronto theme:
2^64x1,000,000,000= 1.86 Brontobytes throughput.

exaflopping brontobytes, right.

Soo.... since this thread was already well and truly derailed by the time I got involved with it, people were already talking about sticking their tongues into next-generation monitors to taste the void (or what the void is currently hypothesized as tasting like) and about brontos, so you'd have expected me just to pick up on the bronto/burnt steak part of this. However, fresh in the back of my mind was something @RELSPI posted way back in June last year, when she started what is possibly the most random thread title ever:

"I want to run my tongue along the Thargoid Hydra's spines"
The Medusa is sleek and undeniably one of the hottest Thargoid ships out there. But the Hydra has a kind of... niche appeal. The rotating parts, the spiked protrusions, they all come together to make something at once desirable and fascinating.
I haven't met one yet, but I'm almost certain that Hydras are piloted by even hotter guys than the Medusae (they're better ships, after all).

Thoughts?

So for my first post to this thread, I just took all the above silliness and poured it into the rabbit hole, stirred in some salt [plenty of that in this forum :p ], and made this:

Does space really taste of burnt steak, or is it burnt bronto(byte) burgers?



But I don't want to get electrocuted finding out, so why don't we ask @Relpsi how she tastes thargoids, and what the void tastes like?

And that's how I added licking thargoids into the mix of nonsense contained herein. IIRC licking thargoids then became a recurring theme in the ensuing pages of nonsense this thread became, but yeah, that one's mostly on me.

So, in essence, the reason licking thargoids came into this thread could be summarised as being because of something Nasc said:
Optical things are boring. I want a computer that will let me taste space without electrocuting me or getting me kicked out of a store.
That, plus me remembering RELPSI's "I want to run my tongue along the Thargoid Hydra's spines" thread and just running with the general silliness of this thread, made it seem like an appropriate reference to make.

Now, unless your choice of snack was a bronto(byte) burger like Fred is tucking into in the above picture, you should be good to go :cool:
 
What if you have the driver do something simple and have a bunch of different objects then be driven with different results. The read side does the same and you make it so very simple software can run a very complex load. The read side can then use encryption logic to get stuff from multiple read data in a simplified manner with the options for many outcomes. It might make a lot of overhead, but can you get a target throughput with it? Then you can make a bunch of optical connections to redundantly run for a bunch of results, allowing the logical saving of the electronic software to gain from the optical hardware. Or does that come out even? You can simplify 24 bit down to multiple encryption numbers. If you use the CPU cache to run and software on the other side to use it predictably, can you get over the normal throughput? The more optical nodes, the more it should drive a bunch of simultaneous results, making the read side then just have to pick the correct path, and get in essence a multiple of the write side effectively. Isn't that solvable on the other side potentially with less compute if the software is designed correctly?
 

"Congratulations sir, you are in the running for the annual prize for the most continuous stream of nonsense, other high rated contenders are, Donald Trump and Deepak Chopra, good luck!"
 

You know, I'm starting to think you are one of those software 'conversation programs' that's just had a major 'technical jargon' update.

Do you even read back what you have written before you post it? Have a go, go on, read back that last. Ask yourself why the term semantically null springs to mind.

I beg you.
 
I meant use one bit of logic on the starting end and have multiple versions of the same thing shifted in each stream. Use multiple shifted streams and sort them for the correct data. Might be able to get effectively higher throughput on the other end with the correct logic applied.

Have the driver send data for one bit. Run multiple streams with shifted versions of the data automatically, without needing to compute. This gives multiple data points to retrieve on the other end.
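
For what it's worth, the most charitable reading I could squeeze out of that one is "fan one piece of data out into several shifted copies over parallel streams, then have the read side undo the shifts and pick the value the streams agree on". If that is what was meant, here's a rough software sketch of it; the stream count, the shift amounts and the majority vote are all my own guesses, not anything that was actually spelled out in the thread:

```python
# Toy model of "multiple shifted streams": the sender fans one payload out
# into several cyclically shifted copies, and the receiver undoes each shift
# and majority-votes to recover the original. Payload width, shift amounts
# and the voting step are all assumptions made for illustration.

PAYLOAD_BITS = 8
SHIFTS = [0, 1, 3, 5]  # arbitrary per-stream shifts, known to both ends


def rotl(value: int, n: int, width: int = PAYLOAD_BITS) -> int:
    """Cyclic left rotation of an integer within a fixed bit width."""
    n %= width
    return ((value << n) | (value >> (width - n))) & ((1 << width) - 1)


def rotr(value: int, n: int, width: int = PAYLOAD_BITS) -> int:
    """Cyclic right rotation (inverse of rotl)."""
    return rotl(value, width - (n % width), width)


def send(payload: int) -> list[int]:
    """'Write side': emit one shifted copy of the payload per stream."""
    return [rotl(payload, s) for s in SHIFTS]


def receive(streams: list[int]) -> int:
    """'Read side': undo each shift and take the most common candidate,
    so one corrupted stream can be outvoted by the clean ones."""
    candidates = [rotr(word, s) for word, s in zip(streams, SHIFTS)]
    return max(set(candidates), key=candidates.count)


if __name__ == "__main__":
    original = 0b10110010
    streams = send(original)
    streams[2] ^= 0b00000100          # corrupt one stream in transit
    assert receive(streams) == original
    print(f"recovered {receive(streams):#010b} from {len(streams)} streams")
```

Which also shows the catch: every stream carries the same information, so the extra streams buy you redundancy, not the "effectively higher throughput" being claimed.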
 

See, what I read there was a whole bunch of words. What my mind did with them came out a bit like this: "I meant, blah, blah, blah-de-blah blah, other end".

Please stop. I mean, I can't seem to tear myself away from this thread yet you could as easily just post the word 'Chicken' in place of each of your seemingly demented ramblings and yet still impart the same level of information. Which is to say none at all.

I can see the possibility that you have what you might think is a pretty cool idea but the fact is that even if you do, you are so spectacularly incapable of conveying that idea in any meaningful form that all I am getting is just 'noise'.

Please, please, please:

JUST STOP.

Thanks, from the bottom of my heart.

Mike.
 
Anyone with more brains than a Piceous Cobble would have done one of the following:
  • stepped back from this thread and lurked sheepishly for a little bit
  • risen to my "go and make at least part of this optical computer (photon storage) work, come back when you have got recognition for making it happen, and gloat to your heart's content" challenge, and started building their vision
  • taken the feedback constructively and gracefully admitted "I never thought of that - back to the drawing board"
But no, they just kept on theory crafting, making it up as they went along in treknobabble:

 
Optical hard drive. Drive simpler image data into a stream and use it in a way to get data out of it fast. This can be done over as many streams as needed. And it can be stored, based on a simple hard drive, in an image form for light data storage. All the work is done live over a stream that can be split as much as needed and manipulated to get the maximum needed stream data.

BTW, the easy way to do this is to just make new hardware that uses optical outputs instead of using current systems to output video and stuff. If you use custom outputs it should grossly simplify the system. Then you are just driving an optical system and storing data permanently while using basically volatile optical processing.

You should even be able to derive what you need from a base, non-changing image with enough optical manipulation. It could only change for things like software upgrades. Outside of potential security problems. I wonder if that is one of the reasons it's not being used. There may be massive security problems inherent to the way you would have to deal with the tech on the software side.

You could technically use custom-designed hardware to get as many outputs as possible. Why haven't they done a combined optical/traditional supercomputer? They might be able to add enough traditional outputs to deal with the massive data and have a more energy-efficient supercomputer. If you can instantly get out 1/1000th of the data you could just get 1000 output devices.

I was just wondering if there was a way to get it to use input and output from a normal PC to enhance it significantly.

Is there a way to design redundant data that only needs to be run to do multiple things, to lower the electronic side's work?

Could you use heavy GPU compute and a bunch of video cards with an optical input or something? Maybe you could achieve the desired throughput with condensed software. Can't you achieve effective throughput by getting an equivalent output, and do as much processing/manipulation as possible on the optical bus?

Source: https://www.youtube.com/watch?v=UWMEKex6nYA

I don't get the need for RAM though, unless it's for a specific existing situation or something. You should be able to use a single basic process and use permanent storage to handle RAM, HDD/transmission and other functions at the same time. I'm pretty sure I'm right in being able to use simple static images as the basis for data. You can either leave a single stream of data as the basis and just manipulate it on the fly. Unless the RAM represents a bunch of small points for this. At which point you don't need the optical CPU... Unless it's good for not having to rewrite software or something.

It mentions photonic wiring. Can you get a light-based electrical system and in any form convert it into enough energy to power a fridge?! ><

We could add direct light to our solar panels or something odd. Just need night-time lighting/battery storage. The existing power grid could be adjusted to night-time use. Hopefully not skyrocketing in price. Light to batteries (light chargers) would be a sweet way to charge stuff. It's literally free electricity! Obviously you can use solar, but can you get more from directed light with optics being applied to increase it?

Ok, maybe what I'm describing would be a hard drive. If you have multiple streams each could represent a data storage point, or the entire storage device depending on size. The software logic I'm thinking of is a storage data retrieval method to find data fast via pre-set-up logic. And you could use a load time and simplified data for smaller storage, like encryption storing as a pair of values sent through another. The CPU could run 64x64 data and hypothetically send it to the GPU to start a light stream. Specialized beams would then split the data into different versions of itself to logically reduce seek time or make the desired data hit the end node correctly. Say you can look for RGB and have 1/3rd less chance to find wrong data as you could process 3 at once. Maybe fast seek methods to get it faster and make the environment work on an infrastructure to speed it up. HDD process light or something in a form that is as fast as possible. Maybe not an optical computer in the fastest sense, but maybe a hybrid electronic-to-optical system to get info to the GPU for it to use? You would not use heavy write but heavy read and manipulation to extract data. Writes occur from updates to normal software. No idea on the speed of that though. It could/would be similar to RAM at the same time. Basically, infinite RAM by modern PC standards. If it's slower maybe it could be system RAM that is like a slower live system for drawing data to current RAM or the CPU. At which point it could do processing via instantly doing the calculation and leaving the correct data in the correct place for the CPU to grab, using the massive bandwidth as processing. All data images would be static and basically be the software. Smaller and in a form that can be unencrypted and stored. Maybe using many layers of encryption and unpackaging for security. Potentially infinite encryption.

Didn't read much of this but it might be relevant: https://www.embedded.com/design-patterns-for-high-availability/

Maybe if it makes a good hard drive the people making SC could use it to ship their game on!! 8D
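
Again, the nearest thing to a concrete idea I can dig out of that wall of text is "keep everything in one big static image, serve every read by slicing into that image, and only ever rewrite it when the software updates". Stripped of the optics, that's just a build-once, read-only blob with an index. A minimal sketch, with the record names and layout invented purely for illustration:

```python
# Toy "static image as storage" model: pack some records into one immutable
# byte blob up front (the 'image'), then serve every read by slicing into it.
# The record names and layout are assumptions made up for this example.

import json

RECORDS = {
    "ship": b"Anaconda",
    "module": b"Frame Shift Drive",
    "note": b"do not lick the Thargoid",
}


def build_image(records: dict[str, bytes]) -> tuple[bytes, dict[str, tuple[int, int]]]:
    """'Write once': concatenate all records and remember (offset, length) per key."""
    index, blob, offset = {}, bytearray(), 0
    for key, value in records.items():
        index[key] = (offset, len(value))
        blob += value
        offset += len(value)
    return bytes(blob), index


def read(image: bytes, index: dict[str, tuple[int, int]], key: str) -> bytes:
    """'Read side': pure lookups into the immutable image, no rewriting."""
    offset, length = index[key]
    return image[offset:offset + length]


if __name__ == "__main__":
    image, index = build_image(RECORDS)
    assert read(image, index, "note") == b"do not lick the Thargoid"
    print(json.dumps({k: read(image, index, k).decode() for k in index}, indent=2))
```

Which is a perfectly ordinary read-only data store, and everything the post hand-waves past (how the optical medium gets addressed, how "manipulating the stream on the fly" replaces writes) is exactly the part that never gets specified.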
 

Leaves quietly, never to return...
 