I can understand all that you're saying, but I see people with their high-end PCs complaining of stutters, fps lower than 20-30, or any number of other problems. This (to me at least) seems most likely due to running the game at higher resolutions and higher requested frame rates (to match high refresh rate monitors, for example), because normally their hardware can handle them. It's just that the game engine can't. On the other hand, I see people like me with older hardware running the game with no (or very few) problems at all. I know I'm only running it at 1080p 60Hz, but my machine is proportionally lower spec, and I get 50-60 fps, for example.

Many people with high-end hardware have high-end hardware because they are picky about performance.
I've been extremely vocal about Odyssey's poor performance. That doesn't mean I'm not seeing better performance than 99% of people, or that my newer/faster systems aren't performing much better than my older systems. It means the performance I'm seeing doesn't meet my standards.
Nothing you've said here refutes my statement. You are not going to see a performance decrease if you put a newer, faster CPU or GPU in your system. Faster hardware (assuming it's applicable to some actual bottleneck) is still going to result in a performance uplift, all other things being equal. Seeing a lower frame rate at higher resolution and quality settings on newer hardware doesn't mean the older hardware is somehow better.
Such scenarios should be phenomenally rare, barring some serious misconfiguration, extreme optimization targeting very specific hardware, or the need to emulate a legacy ISA. Even in the latter two cases, the performance of newer hardware often quickly outpaces the old to the point that ancient software is still faster on the newer hardware, even after accounting for overhead and inefficiencies. The fastest processor I have for old DOS applications isn't something contemporaneous to those applications, and hasn't been for twenty-plus years; it's my Ryzen 7 5800X. Likewise, the fastest-performing Glide accelerator I have isn't my Voodoo 4 4500; it's my RTX 3080 with a wrapper.
Regardless, Odyssey performs best on the fastest modern hardware that exists, and this is readily demonstrable. I have some current-gen hardware that I'm confident will beat any system of any prior generation, probably by a wide margin. Give me exact settings, including any configuration files used, and a repeatable in-game scenario, and I'll record it and/or include detailed performance logs.
Like someone said above (sorry, can't find the post or remember who it was), I'm sure if those high-spec machines tried running it at 1080p and 60Hz like me, then they too would probably have little to no problems. But then, what would be the point of having a high-end machine only to run a game at lower settings?
It just seems to me that the game engine struggles with the higher res/refresh rates more than it does with lower ones.

As a side note, regarding my experience with a faster machine giving lower performance, I have this little story (OK, it's more a case of 'faster broke things' than just a performance drop, but it illustrates my point)...
We had clients on Unix servers based on Intel 486 CPUs (yes, this was about 20 or so years ago). Our software was written in-house and ran on these servers. The 486s were old and we wanted to sell upgrades to our clients. So we got in some nice new IBM Pentium III based machines and proceeded to test our software on them. Well, we were getting corrupted databases, weird system hangs, and a bunch of other weirdness. We thought there was a problem with the hardware, so we tried other P3 machines, but got the same results.
It turned out, after a few days of testing this and that without success, that it was our software that couldn't handle the faster CPUs in the new servers. Apparently, our programmers had used certain tricks to make things work as they needed, but when run on the faster hardware, things like variables were being written to the database before they had been populated with the correct data. I'm not a Unix programmer (or whatever language they used) so I couldn't tell you exactly what they had been doing, but after they figured it out and rewrote our software to compensate, everything worked fine again.
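For anyone curious how a faster CPU can break previously working software, here's a minimal sketch of the general class of bug, as I understand it. The language (C with POSIX threads), all the names, and the "database write" are stand-ins of mine; I never saw the actual code, so treat this as an illustration of a timing race, not what our programmers actually wrote:

```c
/* Minimal sketch of an unsynchronized timing race. Everything here is a
 * placeholder: the real code's language and mechanism are unknown to me. */
#include <pthread.h>
#include <stdio.h>

struct record {
    int populated;  /* set by the main thread when the data is ready */
    int value;      /* the data we actually want to persist */
};

static struct record rec;  /* shared with no locking -- the bug */

/* Stands in for "write the record to the database". */
static void *writer(void *arg)
{
    (void)arg;
    /* No synchronization: on slow CPUs the main thread always happened to
     * finish populating first; on faster CPUs this can run too early and
     * persist an empty record. */
    printf("persisted value=%d (populated=%d)\n", rec.value, rec.populated);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);

    /* Populate the record. Whether this happens before or after the write
     * above is pure scheduling luck, which changes with CPU speed. */
    rec.value = 42;
    rec.populated = 1;

    pthread_join(t, NULL);
    return 0;
}
```

The point is just that code relying on accidental timing can pass for years on slow hardware, then fail the moment the hardware gets faster.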
It was the fact that the original software wasn't written with the newer, faster hardware in mind that was causing all our problems.