Source: https://www.youtube.com/watch?v=ZOPgXRZSvzQ
Sure, but from not having ray tracing hardware (well, the 1080 Ti does it, but isn't considered capable)
Pascal (the 10xx series) doesn't have any ray tracing hardware. There are fallbacks in RT APIs that will run ray tracing effects on conventional FP32 shaders, but actual hardware built to accelerate DXR, Vulkan ray tracing, or most proprietary ray tracing is absent on anything older than Ampere (and conditionally Turing), RDNA2, and Xe-HPG.
See, my brain says I'd rather conserve my hardware's longevity instead of overclocking to achieve 190 fps over 160 fps. I still don't have monitors that do over 60; the TV only does 60. It all feels so redundant. Tweaking is always fun, but there has to be a point to even bother spending all that experimentation time on it. Elite takes 2.5 minutes (I think) to jump in and out of. Think about how long you spent on Odyssey to work out all the stuff you did? To make something that I love(d) playable, it was worth it... I don't think I'd conclude that to increase the fps from 150 to 250 like you were doing.
Longevity and overclocking aren't mutually exclusive. Many parts are only capable of meaningful OCs in conjunction with reduced voltage and/or significantly improved cooling, while many parts that do end up pulling more power were incredibly overbuilt from the get-go.
For example, my RTX 4090 is the cheapest model I could find on launch and has the second worst PCB of all twenty-odd RTX 4090 PCB variants I'm aware of. If I raise its power limit to ~33% above stock, the components likely to fail first (the power stages) still have another 20% or so of headroom, if they were allowed to run at 125C. They run at least 40C below this, which translates into at least an order of magnitude greater longevity, assuming no overt defects and all other things being equal. Of course, it's possible to push things further to see increasingly marginal gains, but for the majority of samples out there, diminishing returns are hit at settings that are notably milder than stock. My CPUs are outright undervolted because they have performance caps that cannot be held at stock voltage.
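The "40C cooler means an order of magnitude more life" arithmetic follows from the common rule of thumb that component lifetime roughly doubles per 10C drop (an Arrhenius-style approximation). A quick sketch; the 10C doubling step is the usual rule of thumb, not a figure from any specific datasheet:

```python
# Rough Arrhenius-style rule of thumb: component lifetime roughly
# doubles for every 10 C drop in operating temperature.
# The doubling step is a generic rule of thumb, not a datasheet value.

def lifetime_multiplier(temp_drop_c: float, doubling_step_c: float = 10.0) -> float:
    """Approximate lifetime gain from running temp_drop_c below a reference temp."""
    return 2.0 ** (temp_drop_c / doubling_step_c)

# Power stages rated for 125 C but actually running at least 40 C cooler:
print(lifetime_multiplier(40))  # 2^4 = 16x, i.e. over an order of magnitude
```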
The only reason stock parameters are the way they are is because they have to account for a minority of weak samples, combined with users who will feed them dirty power, run them in dangerously high humidity environments, and/or never perform even basic maintenance. I don't retain weak samples; I put my parts through their paces, turning them inside out and upside down. Any flaw that exists that I can correct, I correct. When I find signs of a legitimate problem that I cannot economically correct, it gets exchanged for a good sample. Sure, I kill a lot of parts, but they were either defective out of the gate (waiting to be some less knowledgeable user's more serious problem), or they were ones that lasted long enough that I felt I could deliberately sacrifice them to see what they could do after longevity became a moot point.
As for performance, every little bit helps. It's not always the difference between some overkill figure and some even more overkill figure. Even in Odyssey, my combination of GPU overclock and graphics settings is calibrated for 60-70 fps in worst-case GPU-limited scenes. And I am absolutely someone who can tell the difference between 60 and 100, or 100 and 200 fps. Though I can tolerate considerably lower, I'm not fully comfortable with most games until 80-90 fps, and that's where I usually stop spending performance on more eye candy. If anything, I run Odyssey a little slower than ideal, mostly because the most demanding areas tend to be limited by other things, and I may as well spend the GPU performance on more eye candy than I normally would, if I can't get higher frame rates and more consistent frame times anyway. Odyssey also has atrocious aliasing issues, and running a resolution high enough to not make this infuriating is enough to push the best GPUs out there. My target is also 'full settings', but I don't feel constrained by a game's default presets in that regard, meaning the sky is the limit, at least for some areas.
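One way to see why the 60-to-100 fps difference is more obvious than 100-to-200: frame rate is the reciprocal of frame time, so equal fps gains buy progressively smaller frame-time reductions. A quick illustration:

```python
# Frame time in milliseconds for a given frame rate: 1000 / fps.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (60, 100, 200):
    print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")

# 60 -> 100 fps shaves ~6.7 ms off every frame; 100 -> 200 fps shaves
# only 5 ms despite doubling the frame rate, so gains at the low end
# are disproportionately noticeable.
```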
Of course, tuning hardware (and the software I run on it) are major hobbies of mine too. I probably enjoy trying to get the mess that is Odyssey to work to my satisfaction more than I like actually playing Odyssey.
The only thing I know about ambient occlusion is that I very much appreciate the effect; it's critical, and I always have mine as full as I can... but anything greater than or equal to HBAO+ is enough to suspend disbelief in a moving game when you're not looking at screenshots. I stop and take note when the AO is actually better... like when it's colored depending on the surroundings and lights...
Good global illumination is immediately apparent to me because things often look outright surrealistic without it. Anywhere you'd expect indirect lighting to be visible, that's a scene that will look night and day different with decent GI vs. bad or nonexistent GI.
Games like Odyssey and Starfield are good examples of poor/missing GI. You can turn on a flashlight and, instead of lighting up the scene in a natural way, you have a cone of illumination surrounded by pure blackness, or some badly tone-mapped glow that may as well be pure blackness. What indirect lighting exists in most games has fairly extreme constraints on it. And when game light doesn't bounce like real light does, things look weird.
Overclocking these days is mainly out of the box (OC version of gfx cards) or via some clever ai/algo on CPU, or XMP on ram.
I mean you can go old-school all manual, but honestly personally I don't see the point unless you're a proper minmaxer enthusiast and want to squeeze every single FPS drop out of your box.
An OC version of a graphics card typically has a slightly elevated power limit and a very small positive offset applied to the stock frequency curve.
Most OCing these days is manipulating the algorithms already present, optionally supplementing them with parameters they cannot manipulate or take into account.
For example, the CPU overclock on my current (5800X3D-equipped) main gaming system consists of running a negative Curve Optimizer offset so the automatic Precision Boost Overdrive feature will reach boost clocks it would otherwise need ~100 mV more to reach. I then run the Fabric clock that connects the CCD to the IOD as fast as I can make unconditionally stable (increasing memory bandwidth and reducing latency) and utilize an undocumented feature (a bug) where disabling the LCLK (link clock, the IOD's internal connection to the PCI-E controllers and other assorted I/O) power management states also increases the PBO temperature margins. Lastly, I always manually tune all memory parameters, because I can, and it's significantly faster than any XMP/EXPO profile that could be applied (as those need to work out of the box across a range of systems and whole memory bins, while I'm tuning for this specific memory controller sample, on this specific motherboard, with these specific DIMMs).
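To make the "manual tuning beats XMP" point concrete: first-word DRAM latency in nanoseconds is the CAS latency in cycles divided by the memory clock, and DDR's transfer rate is twice the clock. A sketch with illustrative numbers (the specific profiles below are hypothetical, not the author's actual timings):

```python
# First-word DRAM latency in nanoseconds: tCL cycles at the memory clock.
# DDR transfers twice per clock, so clock_MHz = transfer_rate_MT/s / 2.
def cas_latency_ns(transfer_rate_mt_s: float, tcl_cycles: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2
    return tcl_cycles / clock_mhz * 1000

# Hypothetical XMP-style profile vs a manually tightened one:
print(cas_latency_ns(3600, 18))  # XMP-ish DDR4-3600 CL18: 10.0 ns
print(cas_latency_ns(3800, 16))  # tuned DDR4-3800 CL16:  ~8.4 ns
```

Secondary and tertiary timings matter too, which is where hand-tuning for one specific memory controller and set of DIMMs pulls further ahead of a one-size-fits-all profile.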
In my case, peak instantaneous clocks are exactly the same as stock (the 5800X3D is capped to a relatively low multiplier), but because that peak boost state can be retained through almost any load (not just impossibly light ones at impossibly low temps), performance (vs. stock CPU and XMP settings on memory) ranges from marginal in very lightly-threaded loads that are indifferent to memory subsystem performance, to a 20%+ improvement in demanding memory-sensitive multithreaded tasks. If I had better cooling, I could add a BCLK OC into the mix and increase performance nearly linearly; though that would peak at a very low value, unless I also had a much more expensive board that had an external clock generator that could bypass the limitations implied by all the other stuff that normally gets thrown out of spec.
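The near-linear BCLK scaling mentioned above follows from how AM4 derives its clocks: core frequency is the multiplier times the base clock, so nudging BCLK scales everything tied to it. A sketch (the 45x multiplier matches the 5800X3D's stock peak boost; the BCLK figure is just an example):

```python
# On AM4, core clock = multiplier x BCLK (base clock, nominally 100 MHz).
# Raising BCLK scales cores, Fabric, and memory together, which is also
# why it quickly pushes other buses out of spec without an external
# clock generator to decouple them.
def core_clock_mhz(multiplier: float, bclk_mhz: float = 100.0) -> float:
    return multiplier * bclk_mhz

print(core_clock_mhz(45.0))         # stock peak: 4500 MHz
print(core_clock_mhz(45.0, 102.0))  # +2% BCLK:  4590 MHz, ~2% across the board
```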
At the end of the day, my system is noticeably faster than stock, while running cooler, with less voltage, and less noise, in the majority of tasks (but able to stretch a bit further when it's beneficial as well). And this is all on an officially 'locked part'.
Fully manual OCs, where they are even possible, are usually less than ideal because they leave a lot of lightly threaded performance on the table. In parts that aren't capped at relatively low multipliers, there is pretty much no practical way to get all cores to run as fast as fewer cores could easily boost to.
If anyone wants some more detailed examples of approaches to OCing modern CPU platforms, I highly recommend Skatterbencher's stuff. Some examples:
"We overclock the Intel Core i9-13900KS up to 6300 MHz with the ASUS ROG Maximus Z790 Hero motherboard and EK-Quantum water cooling" (skatterbencher.com)
"We overclock the AMD Ryzen 7 7800X3D up to 5400 MHz with the ASUS ROG Crosshair X670E Hero motherboard and EK-Quantum water cooling." (skatterbencher.com)
He's using fairly high-end support hardware and not striving for perfect stability, but the basic methodology (barring the segments that only apply to proprietary motherboard tricks), and most of the gains, can be applied to most similar setups.
As to whether it's worth it or not, that's pretty subjective. It's true that in the past gains were often greater and came at less effort, but that doesn't mean it's senseless now. Give me access to a platform I'm familiar with, and I'll get a usable, stable OC on it in relatively short order. Validating it may take a bit of time, but only a minuscule amount relative to how long it will be used.