If anyone was running a high-end GPU and CPU before, power consumption is not likely to be a deal breaker with new stuff. Price might be, but not power. The real concern is if you have an older or lower quality PSU, or a model that has low OCP limits preventing it from handling reasonable power excursions...but that would likely have been an issue with an RTX 3080/3090 or RX 6800/69xx part as well.
Even the RTX 4090 shouldn't need a new PSU if one's current unit is halfway decent. Transients are more tightly controlled than in the past generation, and the issues with the 12VHPWR cables relate to early implementations of the 12-pin connector (one should stress test these to ensure they don't get hotter than they should under peak load, new PSU or not), not to the 8-pin side of adapters or to the PSUs themselves. I was fully intent on running an RTX 4090 on a 650w SFX-L PSU (and would have had power to spare) until I decided to build in better margins to keep the PSU from overheating, since I may need to block off some of its intake air flow with the ducting I'm planning to use to keep the CPU heatsink from ingesting heated GPU exhaust.
At stock limits, performance per watt is going to increase significantly with Ada and RDNA3. Stock limits are invariably well past the point of peak efficiency, so a modest undervolt and/or underclock will reduce power far faster than it reduces performance. You can set whatever limits you like and be essentially guaranteed 50-100% more performance at the same arbitrary power limit than a prior generation part would deliver.
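A rough sketch of why undervolting cuts power faster than performance: dynamic power scales roughly with clock times voltage squared, while performance scales roughly with clock alone. The specific -5% clock / -10% voltage figures below are hypothetical illustrations, not measured values for any real part:

```python
# Illustrative only: assumes dynamic power ~ f * V^2 and performance ~ f.
# The clock/voltage offsets here are hypothetical, not measured numbers.

def relative_power(f_scale: float, v_scale: float) -> float:
    """Dynamic power relative to stock for scaled clock and voltage."""
    return f_scale * v_scale ** 2

# A modest undervolt/underclock: -5% clock, -10% voltage.
perf = 0.95                          # ~5% performance lost (clock-bound)
power = relative_power(0.95, 0.90)   # ~0.77, i.e. ~23% power saved

print(f"performance: {perf:.0%}, power: {power:.0%}")
```

The quadratic voltage term is why the first chunk of power savings is nearly free, and why stock limits sit so far past the efficiency knee.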
The CPU side is only slightly different. For example, the Ryzen 7000 series parts are allowed to pull well in excess of 200w to ensure they are all-round competitive with Intel's 12th gen and AMD's own 5800X3D, but the new manufacturing process and modest architectural changes still result in a fundamentally more efficient chip. This is clearly evident when capping power: at 65w (about 30% of normal limits) the 7000 series CPUs reach 70-85% of full performance, and their advantage over 65w Zen 3 parts (relative to uncapped ones) increases significantly. However, I still wouldn't recommend Ryzen 7000 series parts for a pure gaming build until the V-Cache models show up (and I'd see how those fare against Raptor Lake); they don't offer enough here relative to either the 5800X3D or Alder Lake.
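A back-of-the-envelope check of that efficiency claim, assuming a ~230w stock package power limit (the "well in excess of 200w" above; 230w is my assumption for the arithmetic, not a figure from this post):

```python
# Illustrative arithmetic only; 230w stock power is an assumed figure.
stock_power, capped_power = 230, 65

for perf_retained in (0.70, 0.85):
    perf_per_watt_gain = perf_retained / (capped_power / stock_power)
    print(f"{perf_retained:.0%} performance at {capped_power}w "
          f"-> ~{perf_per_watt_gain:.1f}x stock perf/watt")
```

Retaining 70-85% of performance on ~28% of the power works out to roughly 2.5-3x the perf/watt of stock limits, which is why the capped comparison flatters Zen 4 so much.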
Between AMD and NVIDIA on the GPU side, I'm leaning toward NVIDIA, mostly because I'll be able to get one sooner. At the top end I expect the 4090 to be slightly faster in rasterization and still significantly faster in RT (though the exact degree is still highly speculative) than the top RX 7000 series launch part. NVIDIA also has a few new features that are tempting. On the other hand, the lack of DisplayPort 2.0 is a glaring omission for NVIDIA. Yes, DSC is 'visually lossless', but NVIDIA GPUs have numerous caveats when using it. I'm not sure if these will still apply to Lovelace, but if you're using DSC, you can only attach two monitors to current NVIDIA cards, and using DSC knocks out all forms of DSR. Since I expect to soon have displays (240Hz+ 4k) that cannot do native resolution and refresh rate via DP 1.4a or HDMI 2.1 without DSC, the lack of these limitations on AMD's upcoming GPUs is making RDNA3 very tempting.
With regard to AMD's drivers, I'm not concerned about their quality in general. If anything, I've found AMD easier to work with in this regard for the past few years (the third-party tuning tools are actually better and the parts are less locked down). However, Elite: Dangerous 4.x is an outlier application with major problems on recent AMD GPUs and drivers (even before the Orange Sidewinder problems, the line AA issue was, and is, annoying), and I am far from certain an AMD-side fix for these is coming.
Nvidia's 4000 series is, in the words of Steve from Gamers Nexus, currently a "clown car" of a GPU.
This was more in reference to the absurd marketing and naming conventions of NVIDIA and their AIBs than technical aspects of the hardware.
Even the 3090 is not worth it for the power usage. But some 'gamers' just need 300 fps
as long as you don't need or want the latest and greatest
An RTX 3090 can't consistently deliver 60+ fps (in purely GPU-limited scenarios) at some surface settlements in Odyssey with the settings I currently use...settings that still don't adequately address the aliasing issues the game has. I'm not even sure an RTX 4090 will be fast enough to do what I want to do with Odyssey. That said, I'm not buying a GPU for Odyssey specifically.
I have several other games, most of which are not new, that do not run to my satisfaction (and I'm targeting ~70 fps P1% frame rates) on any extant GPU.