We're all now expecting/hoping that this issue will be retired on Wednesday. That advance warning finally prompted me to pull the finger out and do what I've been thinking about for a few weeks: run Process Monitor to see wth is going on during a stuttering episode. I managed to capture one, and I reckon I learned a little bit more about what is happening. Basically, it's consistent with a lot of what I've seen proposed here and elsewhere.
NB: I think that what I'm sharing here is on the "safe" side of the line with regard to breaching the EULA (reverse-engineering and so on), but if any moderators disagree then please remove this post.
In short: the game process makes a series of TCP connections to an AWS server and downloads modest-sized chunks of data each time. For the stuttering episode I captured, the total duration was around 15 seconds, and it first became noticeable to me about 2 or 3 seconds in. During that 15-second episode, the game made 61(!) connections to the same AWS server, which is around the same as the total number of TCP connections it created in the previous 10 minutes (which were made to a few different AWS servers). Each time, it only fetched a few tens of kB (typically 20 or 30 packets), and only a few kB were sent to the server.
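If anyone wants to do the same sort of counting on their own trace, Process Monitor can save the filtered events to CSV, and something like the sketch below will tally the connects per remote host. The executable name, the CSV filename, and the exact column/operation labels ("Process Name", "TCP Connect", and so on) are just what I'd expect from a default export, so adjust them to whatever your version actually writes out:

```python
# Rough sketch: count "TCP Connect" events per remote endpoint in a
# Process Monitor CSV export. Column headings and the operation name are
# assumptions based on a default export and may differ on your setup.
import csv
from collections import Counter

GAME_PROCESS = "TheGame.exe"   # placeholder: substitute the game's executable name

def tally_connects(csv_path):
    """Return a Counter of TCP Connect events per remote endpoint for the game process."""
    per_remote = Counter()
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row.get("Process Name") != GAME_PROCESS:
                continue
            if row.get("Operation") != "TCP Connect":
                continue
            # Path looks like "local:port -> remote:port"; keep the remote half.
            path = row.get("Path", "")
            remote = path.split("->")[-1].strip() if "->" in path else path
            per_remote[remote] += 1
    return per_remote

if __name__ == "__main__":
    for remote, count in tally_connects("Logfile.CSV").most_common(10):
        print(f"{count:4d}  {remote}")
```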
I can't see anything in the few tenths of a second preceding the start of the problem that would suggest it was triggered from the server side, though that's not impossible. Certainly, none of the already-open TCP connections were used to transfer the data. It's all consistent with the game client deciding it wants some updated data, fetching it, and then (crucially) processing it in a way that needlessly stalls the game engine. (I say "needlessly" because I simply can't conceive of any data which the game only needs to fetch periodically - minutes apart - yet which somehow must be processed between displayed frames.)
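To illustrate what I mean by "needlessly" - and this is a toy sketch in Python, emphatically not the game's actual code - here's the difference between a client that applies a periodic download inside the frame loop and one that hands the work to a background thread and applies the result on whatever frame it happens to be ready:

```python
# Purely illustrative: contrast a fetch-and-apply done synchronously inside the
# frame loop (every frame stalls until the round-trip completes) with one done
# on a background thread (frames carry on; the result is applied when ready).
import queue
import threading
import time

def fetch_periodic_data():
    """Stand-in for the suspected download + processing (a network round-trip)."""
    time.sleep(0.25)          # simulate the fetch/parse cost
    return {"payload": "updated data"}

# Pattern consistent with the stutter: the frame can't finish until this returns.
def frame_blocking(state):
    state.update(fetch_periodic_data())   # frame stalls here for the full round-trip
    # ... render ...

# Pattern that wouldn't stutter: kick the fetch off-thread, apply whenever it's done.
_results: "queue.Queue[dict]" = queue.Queue()

def frame_nonblocking(state, start_fetch=False):
    if start_fetch:
        threading.Thread(
            target=lambda: _results.put(fetch_periodic_data()),
            daemon=True,
        ).start()
    try:
        state.update(_results.get_nowait())   # near-free when nothing is pending
    except queue.Empty:
        pass
    # ... render ...
```

The second pattern costs essentially nothing on frames where no data has arrived yet, which is why I struggle to see a good reason for the first.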