12th gen/13th gen vs Ryzen 7000 series, etc.

ASUS being ASUS doesn't have much to do with this round of hardware; they've been playing fast and loose with default settings, overvolting the crap out of "auto", and maintaining predatory support policies for more than twenty years. Every few generations I give them another shot, because they do have one of the best firmware teams when it comes to writing firmware for overclocking, but I'm usually hit with some critical flaw that bricks the board, usually early and often enough that sending it back rivals the cost of a new board...then I go back to modestly less frustrating boards for a couple of years.

The main difference is that when you kill older CPUs by throwing a few hundred extra mV at them with non-functional safety mechanisms, they die in a manner that shorts them to ground before they get hot enough to physically destroy their packaging.

The new GN video does point out something I didn't realize...EXPO doesn't even contain a vSOC table, so even that is entirely down to the board manufacturer.
 
Source: https://www.youtube.com/watch?v=l8r4LVV_jsQ


ASUS points out that voltage measurement point matters, so what seems like an overvolt measured at the VRM or even socket may be fine at the actual IOD. Which is all very true.
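To put rough numbers on that, here's a minimal sketch of a two-point parasitic-resistance model; both resistance values and the load currents are assumptions for illustration, not measurements from any board:

```python
# Sketch of why measurement point matters: the same output voltage reads
# differently at the VRM, the socket, and the die, because load current
# drops voltage across the parasitic resistance between them. Both
# resistance values here are invented for illustration.

R_VRM_TO_SOCKET = 0.0005  # ohms, hypothetical board parasitics
R_SOCKET_TO_DIE = 0.0005  # ohms, hypothetical socket/package parasitics

def readings(v_vrm: float, i_load: float) -> tuple[float, float]:
    """Voltages seen at the socket and at the die for a given load current."""
    v_socket = v_vrm - i_load * R_VRM_TO_SOCKET
    v_die = v_socket - i_load * R_SOCKET_TO_DIE
    return v_socket, v_die

for i_load in (1.0, 25.0):  # idle vs heavy SoC load (A), illustrative
    v_socket, v_die = readings(1.35, i_load)
    print(f"{i_load:>4} A: VRM 1.350 V, socket {v_socket:.3f} V, die {v_die:.3f} V")
```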

However, they're evidently still using overly aggressive load-line calibration to counter voltage drop. You can see this a bit over five minutes in, where they load the processor: SoC die voltage stays the same, despite the VRM outputting more. This can be desirable for some kinds of OCing, but it should not be the default behavior. There is supposed to be voltage droop when applying a load, to compensate for the corresponding and inevitable spike when that load ceases (switching power supplies aren't instantaneous, and the SoC VRMs on most boards are only one or two phases). These oscillations aren't visible on a multimeter, but they exist, and would be clearly visible on a sufficiently fast oscilloscope.
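To make the droop argument concrete, a toy resistive load-line model; the load-line resistance, set voltage, and currents are all made-up illustrative values, not anything from a real board or AMD spec:

```python
# Toy resistive load-line model of droop vs flat LLC. All numbers are
# illustrative assumptions, not measured or spec values.

def die_voltage(v_set: float, i_load: float, r_ll: float) -> float:
    """Steady-state die voltage under a resistive load line:
    V_die = V_set - I_load * R_LL."""
    return v_set - i_load * r_ll

V_SET = 1.25        # requested SoC voltage (V)
R_LL_DROOP = 0.002  # hypothetical ~2 mOhm load line: droop under load
R_LL_FLAT = 0.0     # aggressive LLC: voltage held flat regardless of load

for i_load in (1.0, 10.0, 20.0):
    v_droop = die_voltage(V_SET, i_load, R_LL_DROOP)
    v_flat = die_voltage(V_SET, i_load, R_LL_FLAT)
    print(f"{i_load:>4} A -> with droop: {v_droop:.3f} V, flat LLC: {v_flat:.3f} V")

# With droop, the die sits below V_SET under load, so the transient
# overshoot on load release starts from a lower voltage; with a flat
# load line there's no such margin and the spike lands above V_SET.
```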

Manufacturers are going to push the limits of whatever specs they are handed in the name of compatibility and competition. Shouldn't be that big of a deal as long as the specification is actually sensible. The new 1.3v limit should be fine, even if board makers take that to mean 1.3v at the actual IO die during peak loads. It's more than I'd personally run, but if it kills a part in normal use, that part was probably such a weak sample that it could reasonably be considered defective, and given the crapstorm over asinine warranty technicalities, I can't imagine anyone not honoring the warranty in such an eventuality.
 

Interesting take

The only part that I think is outright in error is the idea that one is less likely to be CPU limited with a higher-end GPU. It's true that most people buying RTX 4090s aren't playing at 1080p, but an RTX 4090 is much more likely to be CPU limited at 1440p or 4k than a mainstream GPU is at lower resolutions. The 7800X3D is definitely faster than the 5800X3D, or anything short of a 13900K, in some fairly common gaming scenarios, just not the ones tested here, and not where one is maxing out the eye candy to fully load an RTX 4090 at all times. Elite Dangerous: Odyssey is a good example of a CPU/platform limited title. In high CZs I'm CPU/memory limited as often as not with my very well tuned 5800X3D + DDR4-3800 CL15 + RTX 4090 setup at 5k, well-past-ultra settings. A 7800X3D + DDR5-6200 CL30 setup with a similarly clocked RTX 4090 can finally make those scenarios almost completely GPU limited (I still have a more detailed comparison on my to-do list). The same goes for plenty of less-well-optimized, less-than-AAA titles.
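To illustrate the bottleneck logic, a minimal sketch: per frame, you run at the pace of whichever processor takes longer, so cutting GPU frame time can flip a scene from GPU limited to CPU limited. The frame times below are hypothetical:

```python
# Minimal bottleneck model: each frame runs at the pace of whichever
# processor takes longer. Frame times are hypothetical, chosen only to
# show how a faster GPU can expose a CPU limit.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Effective frame rate when CPU and GPU frame work overlap."""
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_MS = 8.0  # a heavily CPU/platform-bound scene (~125 fps ceiling)

for label, gpu_ms in (("mainstream GPU, lower res", 12.0),
                      ("RTX 4090 at 4k", 6.0)):
    limiter = "CPU" if CPU_MS >= gpu_ms else "GPU"
    print(f"{label}: {fps(CPU_MS, gpu_ms):.0f} fps ({limiter} limited)")
```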

The rest of the points are contextual. I ordered a 7800X3D for my brother's system on launch day, and the platform price differential was essentially non-existent. If anything, with equivalent hardware, the AM5 setup was cheaper, except for the CPU itself. This guy in the video is using a budget AM4 board and budget DDR4, which isn't a bad idea for a 5800X3D, but he's also using comparatively higher-end hardware for the AM5 setup. CL30 DDR5-6000 using Hynix M or A-die on AM5 (which can be had for $120 currently) is the equivalent of fairly well binned Samsung B-die on AM4...and a good B-die kit is still $150+. You can just as easily cheap out on memory for AM5 and the 7800X3D will tolerate that fairly well, not that I'd recommend it for an otherwise high-end system.

What's neglected is the potential upgrade path...there is none for a 5800X3D system. That is the fastest gaming processor that is ever likely to exist for AM4. AM5 will support at least another full generation of CPUs, possibly more.

He does have a point about ITX motherboards, which is one of the reasons why I hesitate to recommend them for AM5. They are overpriced and under-featured, or have poor layouts. The AM4 selection is much more mature. However, if you move up to mATX--and if you're putting an air-cooled RTX 4090 in your system, you don't need to increase total case volume that much to accommodate an mATX board--the prices and feature sets are much more reasonable. If one isn't going to overclock, there are competent $125 AM5 mATX boards, and even if one is going to overclock (via curve optimizer, which does help even the 7800X3D), one is looking at a $160 entry point. Personally, at the time I built the system I've mentioned, the best all-round option was a $180 Gigabyte B650M Aorus Elite AX.

Some of the complaints on cost also seem bizarre in relation to other component choices (though these were probably sponsored). That SilverStone PSU, for example, is total overkill. An entire 7800X3D system, sans GPU, will only consume ~100w, maybe 150w if you deliberately push it. The default power limit for an RTX 4090 is 450w, and its efficiency sweet spot is down around 350w. I would personally be entirely comfortable running a 7800X3D + RTX 4090 on a quality 650w PSU, though I'd probably recommend more for upgrade and overclocking headroom.
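For reference, the back-of-the-envelope budget behind that recommendation, using the figures above (the PSU sizes are just examples):

```python
# Back-of-the-envelope PSU budget using the figures from the post above;
# the PSU sizes are just examples.

CPU_PLATFORM_W = 150  # 7800X3D system sans GPU, deliberately pushed
GPU_W = 450           # RTX 4090 default power limit

load_w = CPU_PLATFORM_W + GPU_W  # ~600 W worst-case sustained draw

for psu_w in (650, 750, 850):
    print(f"{psu_w} W PSU -> {psu_w - load_w} W headroom "
          f"({load_w / psu_w:.0%} of capacity at full load)")
```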

Overall, it's more of a critique of the sorry state of AM5 on mITX than anything else. That the cheapest of these boards are $280+ is pretty inexcusable.

Anyway, for almost any brand new build, I'd recommend AM5 over AM4 at this point. DDR4 is no longer less expensive than DDR5. Boards are also not terribly different in price, except for mITX, and even then I'm not sure the savings are worth tying oneself to the old platform.
 
...

Anyway, for almost any brand new build, I'd recommend AM5 over AM4 at this point. DDR4 is no longer less expensive than DDR5. Boards are also not terribly different in price, except for mITX, and even then I'm not sure the savings are worth tying oneself to the old platform.

Agreed

My take on the issue is based more on whether there's a case to be made for upgrading from a solid AM4/3090 build to AM5/4090.

At this point I can't see the move as good value, considering the cost/benefit and risk/reward equations.

Side note - If I were to upgrade to a 4090 I'd definitely look at this:

https://www.thermal-grizzly.com/en/products/601-wireview-en
 
My take on the issue is based more on whether there's a case to be made for upgrading from a solid AM4/3090 build to AM5/4090.

At this point I can't see the move as good value, considering the cost/benefit and risk/reward equations.

Depends on the games one plays, but a 3090 to 4090 is a much larger upgrade than a 5800X3D to a 7800X3D. Comparing the two CPUs, there are very few scenarios where the difference justifies an entire platform upgrade. However, the GPU is a drop-in replacement--if the system can handle a 3090, it will handle a 4090, as long as the latter physically fits--and the difference between these GPU generations is night and day in some games at some settings.

Side note - If I were to upgrade to a 4090 I'd definitely look at this:

https://www.thermal-grizzly.com/en/products/601-wireview-en

It's a neat gadget, but I've tested the software readings against a clamp ammeter on my parts and it's close enough that just using HWiNFO is fine for most scenarios. In testbench situations where Windows isn't running or I can't afford to have another program up, I can just break out the ammeter.
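For anyone wanting to do the same sanity check, a trivial sketch of the comparison; the wattages and the 5% threshold are placeholders, not my actual numbers:

```python
# The sort of cross-check described above: compare a software power reading
# (e.g. from HWiNFO) against a clamp-meter measurement. Sample wattages are
# placeholders, not my actual readings; the 5% threshold is an arbitrary
# rule of thumb.

def pct_error(software_w: float, meter_w: float) -> float:
    """Relative error of the software reading versus the meter."""
    return abs(software_w - meter_w) / meter_w * 100.0

software_w, meter_w = 430.0, 442.0  # hypothetical GPU board-power readings
err = pct_error(software_w, meter_w)
print(f"software {software_w:.0f} W vs meter {meter_w:.0f} W: {err:.1f}% off")
print("close enough" if err < 5.0 else "recheck the sensor scaling")
```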

I'm also wary of introducing extra potential points of failure. No matter how high quality an adapter is, it's still more stuff between the power source and the component using it.
 
Some vaguely controlled testing between my main 5800X3D system and the 7800X3D system I built and tuned for my brother, in the same large botanical settlement high CZ, at 1080p Ultra (never even close to GPU limited on the overclocked RTX 4090s in either system), with identical power profile, driver, and game configurations:

5800X3D, -30 all-core CO (4450MHz game clocks), 1900FCLK, DDR4-3800 CL15:
[screenshot: in-game frame-rate readout]


7800X3D, -28 to -35 per-core CO (5050MHz game clocks), 2133FCLK, DDR5-6000 CL28:
[screenshot: in-game frame-rate readout]


Despite using the same high CZ, the time of day was slightly different and the conflicts played out differently, making an exact comparison difficult. However, these figures are largely representative...with similar effort put into tuning each platform, the 7800X3D setup is proving to be between 20% and 50% faster in non-GPU limited scenarios in Odyssey. There are still some serious stutters, even on the 7800X3D, when dropships spawn in, but the frame rates are significantly higher across the board, even in those dips.
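For reference, the arithmetic behind uplift figures like these, with placeholder frame rates rather than the actual values from the screenshots:

```python
# The arithmetic behind uplift figures like those above; the paired frame
# rates here are placeholders, not the values from the screenshots.

def uplift(old_fps: float, new_fps: float) -> float:
    """Percentage gain of the new setup over the old one."""
    return (new_fps / old_fps - 1.0) * 100.0

for old, new in ((60.0, 78.0), (48.0, 72.0)):  # hypothetical scene averages
    print(f"{old:.0f} -> {new:.0f} fps: +{uplift(old, new):.0f}%")
```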
 