Zip ties to the rescue...refurbishing my backup system

Just finished reassembling my secondary (formerly primary) PC. Needed the CPU that was in it (an i7-5820K) for my HTPC, as its CPU (an i7-6800K) was starting to fail...that's the third Broadwell-E I've been through, they seem pretty fragile. Grabbed a Xeon E5-1660 v3 (essentially identical to a 5960X, unlocked multiplier and all) off of eBay for about $150, which was a drop-in replacement that seems to be working fine so far (still need to see what sort of OC I can get out of it).

Anyway, the problems started a while back when one of the fans on my Gigabyte Aorus 1080 Ti started failing (the sleeve bearing was wearing out and the fan was starting to seize), causing temperatures to rise. As the fan would fail completely in short order, I started looking at options for a replacement. The fans on the card's non-reference cooler are proprietary; there was no easy drop-in replacement and I didn't feel like replacing all of the fans with different ones for marginal gains. So, I started looking around at aftermarket air coolers, only to discover that they all had compatibility issues with the unusual layout of this non-reference board (I could have gotten around these by desoldering one of the stacked HDMI ports, but that seemed fairly extreme just to get a replacement air cooler running). I had already ruled out a custom watercooling loop for this card as I intend to replace it soon, and the cost of a full-cover block for this specific PCB was the most extreme of all the options. It then dawned on me that I already had all the parts I needed to mount one of my CLCs (closed-loop coolers; I normally have a few around for use on test benches, but typically use air coolers or custom loops on anything permanent) to this 1080 Ti...an NZXT Kraken G12 that had been sitting in a box for years; piles of small VRM and memory sinks I had been accumulating for a quarter century; and a perfectly serviceable Corsair H55 that wasn't currently being used.

So, I get everything assembled--VRM and VRAM sinks epoxied on, block mounted, fans picked out, etc--annnnnnddddd the hoses don't reach! The only fan placement the radiator could actually reach was a bottom one, which is a no-go because the block/pump cannot be the highest point in the loop, or that's where air will pool and either disrupt flow or damage the pump.

Since it's the middle of the pandemic and most of NYS is locked down pretty hard, I can't just go to the hardware store and buy longer screws, and I'm getting impatient with a perfectly good 1080 Ti sitting here that I can't use...so it was plan B time.

Enter the zip-ties!

byxD3FC.jpg


tfaIUOV.jpg


z9UVqVt.jpg


It's a bit of a ghetto rig, but there are some side benefits to suspending the radiator like this. Namely, I can have more total air intake than I was originally planning on, which is very good for a pure positive-pressure setup. Cooling performance improved as a result, with no real downsides. In my case, it should also mean less corrosion in the H55 loop, which matters because at some point in the past I refilled it with deionized water rather than a coolant with corrosion inhibitors suited to mixed metals, diluting the corrosion protection it came with (all of these CLCs are copper blocks on aluminum radiators, and without inhibitors in the coolant the two metals act as an anode-cathode pair and eat themselves in short order).

Right now, in a 23C room with the case closed and all filters back in place, the tower is quieter than my previous arrangement. This 1080 Ti, overclocked to 2050MHz core and 5940MT/s on the GDDR5X, only reaches 45C in Furmark while right up against the edge of its board power limit.

IkRhBQC.jpg


This is a solid 25C colder than the massive three-slot triple-fan cooler it came with, with less noise, at a 25MHz higher core clock. It's also about 10-15C better than I was expecting out of this H55, which is really not that good even for a CLC.

Overall, very happy with how things are working so far. Will get Elite: Dangerous back up and running later this week after I dial in a CPU OC.
 
Not so sure this Xeon is going to work out for my main system. It's already pulling ~220W during stress testing at a mild 3.7GHz. Probably not going to be able to cool it on air, even with a pair of 2500rpm fans on the D15. Might have to save this one for the HTPC and swap my 5820K back in.
 

Robert Maynard

Volunteer Moderator
There are 140mm fans available that have a 120mm mounting interface - might something like that help, a bit at least?
 
There are 140mm fans available that have a 120mm mounting interface - might something like that help, a bit at least?

All the fan mounts on this case will take 120 or 140mm fans, so the size of the fan/radiator wasn't an issue.

The problem is that the only two viable locations are at the front of the case, and there's no way for the radiator to reach them without putting undue strain on the hoses, unless I run a duct/shroud to allow the radiator to be mounted further back. That wasn't practical with what I had on hand, so I had to suspend the radiator inside the case.
 
Well, this Xeon sample turned out to not OC particularly well, so I'll save it for my HTPC.

Put my old 5820K back in and used the opportunity to test another one of my old CLCs that I refurbed. Fortunately, this Cooler Master Nepton 280L, which is over five years old at this point, just needed a good scrubbing and the coolant level topped off with distilled water. Despite its age, it's still one of the best CLCs out there, mostly because it's got a killer pump (for a CLC). Combined with wide tubing and a 280mm radiator with a high fin density, it's pretty capable. The stock fans were busted, but that's OK because these Arctic P14s are way better.

Anyway, compared to the NH-D15S, this revamped Nepton keeps my 5820K at 4.2GHz a full 10C cooler with fans spinning 900rpm slower, and is way quieter under heavy load. Idle is a bit of a wash as a small amount of pump noise is present. Probably have another 100MHz or so of CPU headroom now, if I want to push it.

Was a tight squeeze, but at least I didn't need any additional cable ties this time...

mJUuVtJ.jpg


Also got around to running some GPU benchmarks. This is Unigine Superposition on the CLC cooled 1080 Ti:

ANGJ8fm.png


sSYXpre.png


Never got above 43C in the 4k test or 41C in the 1080p test, in a 25C room.

Not much power limit left, but I might be able to squeeze another 25MHz or so out of the core and another multiplier out of the GDDR5X.
 
Should have put this card under real water sooner...the thing does 2075MHz core at 1.05V. Couldn't get any more out of the memory, however.

A real full-cover block, shunt mod, and a vmem mod could probably take this thing to 2.1GHz+ on the core and 6100+ on the memory. Oh well, it's too close to the next generation to fuss over this one; my cheap-o CLC will have to do.

Superposition with a stable 2075/5940:
5PHvvBT.png
 

Robert Maynard

Volunteer Moderator
I've been folding for a while and initially had some issues with CPU folding (it occasionally crashed, frustratingly). I traced it to a poor memory overclock - then used 1usmus' DRAM calculator and applied the fast preset - all four sticks have been rock solid at 3,600MHz / CL16 since.
 
I've been folding for a while and initially had some issues with CPU folding (it occasionally crashed, frustratingly). I traced it to a poor memory overclock - then used 1usmus' DRAM calculator and applied the fast preset - all four sticks have been rock solid at 3,600MHz / CL16 since.

The Xeon I was testing seems to be all-round weaker than my 5820K sample, as far as clocks go. The same memory requires a lower uncore clock and looser timings to be stable. The main issue was the CPU clock itself...everything was fine at 3.9GHz, but 4GHz would crap out immediately in stress tests, irrespective of core and input voltage.

I did try to use 1usmus' DRAM calculator with my Ryzen 3900X...but the newer 16Gb density Samsung M-die ICs on my 32GiB OEM DIMMs aren't in the calculator so I had to profile them from scratch. Took about three weeks to dial in the timings.

For the memory on my graphics card, I run eight instances of MemtestG80 (a CUDA port of Memtest86) while the card is as hot as I can get it.
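
In case anyone wants to replicate that, launching the parallel instances is trivial to script; here's a rough Python sketch (the binary name and arguments are placeholders for whatever memtest build you're using, not my exact invocation):

Code:
import subprocess

# NOTE: binary name and arguments are placeholders; point them at whichever
# GPU memtest build you actually have and whatever options it takes.
CMD = ["./memtestG80"]
INSTANCES = 8

# Launch every instance at once so the card stays pinned, then collect results.
procs = [subprocess.Popen(CMD) for _ in range(INSTANCES)]
for i, proc in enumerate(procs):
    if proc.wait() != 0:
        print(f"instance {i} exited with code {proc.returncode}")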
 
Ran into a few minor issues with the new cooling setup:

- The memory no longer has as much airflow as it previously did, due to losing the draft from the tower cooler, and I was already running the memory at the edge of stability, so I started seeing intermittent errors during combined stress testing. Resolving this was as easy as increasing the tRTP timing from 3 to 4 and reducing vDIMM by 20mV, which simply reverted the final performance tweak I'd made to the memory when fine-tuning timings a while back. The performance loss from this reversion is utterly imperceptible and amounts to maybe a tenth of a percent in some memory benchmarks, so it was no issue at all, other than actually identifying the source of the problem.

- Because the CLC on the CPU cannot intake cool outside air (I have neither the filters to put in that area, nor fans with enough pressure to handle the added restriction of filters on top of the radiator without sounding like a tornado), CPU temperatures are influenced to a fair degree by GPU load. I was hoping to get enough headroom for another 100MHz out of the CPU, but stress testing the CPU and GPU simultaneously (causing the latter to dump ~300W of heat inside the case, into air the CPU cooler ingests) gets hot enough at my previous 4.2GHz OC to negate most of the additional headroom.

- Due to the temperature-dependent nature of NVIDIA's frequency/voltage curves, I had to make a small adjustment to the GPU OC to prevent the card from going over 2088MHz core when very cold, or 2075MHz when over 43C, as these proved unstable in 3DMark Time Spy (probably one of the best stress tests for identifying GPU, rather than video memory, instability) at 1.05V.
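
If anyone wants to check the same behaviour on their own card, polling nvidia-smi during a stress test makes the temperature/clock relationship easy to see. A rough Python sketch of that kind of logging (just illustrative; the query fields are the standard nvidia-smi ones):

Code:
import subprocess
import time

# Poll core temperature and SM clock once a second; lets you watch where the
# temperature-dependent V/F curve starts shaving the boost clock.
QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,clocks.sm",
         "--format=csv,noheader,nounits"]

while True:
    line = subprocess.check_output(QUERY, text=True).splitlines()[0]
    temp_c, clock_mhz = line.split(", ")
    print(f"{temp_c} C  {clock_mhz} MHz")
    time.sleep(1)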

Anyway, system now seems to be able to handle any combination of loads I can throw at it without any signs of instability. Overall performance is only slightly better than before, due to an extra 50-63MHz on the GPU, but the system is both quieter (though still far from silent) and sees less dust bypass the intake filters, relative to the airflow arrangement I had before replacing the CPU and GPU heatsinks with old CLCs.

Time to get back to procrastinating gaming I suppose.
 
Couldn't help tinkering a bit more...

Having already established that the limiting factor with CPU temps was removing heat from the water, I decided to see if I could eliminate the annoying pump noise without harming temperatures. This Nepton has a silly strong pump for a CLC; it's specified to run at ~2300rpm at full PWM duty (or straight 12V), but since I diluted the coolant (and thus lowered its viscosity) when I topped off the water level, mine runs at about 2500rpm. At that speed, it produces a subtle but annoying high-pitched whine. So, I ended up turning the pump's PWM duty cycle down in my motherboard's firmware until I could no longer make out the pump noise above any other system noises. That turned out to be around 60% PWM, which gives me about 1800 actual rpm on the pump.

I wasn't expecting any significant change in temperature, as flow rate was almost certainly not a limiting factor, but to my surprise temperatures actually improved slightly...a 1-2C improvement, with a slightly smaller gap between the hottest and coldest cores (down from 9C to 7-8C at peak load) than before.

I suspect that this pump was intended for a 50/50 water/glycol mix, and that the lower viscosity of the current coolant mix (probably closer to 60/40 or 70/30 now), plus the higher resulting rpm of the full-speed pump, was causing some degree of cavitation at the impeller itself and/or at the waterblock fins, disrupting effective flow rate or thermal transfer.

In the end, I've got the system working well with the intake fans at 7 volts and the CPU pump 600rpm lower than before. These settings will handle heavy combined CPU and GPU loads while also being quiet enough to listen to music or watch a film on my speakers without having to jack up the volume to drown out system noise.
 
Tested the CPU block pump again after dislodging some air bubbles (by picking up the entire case while the system was running), and it now seems to scale in performance with pump rpm...the difference is maybe 2-3C from 1800rpm to 2500rpm, but there is an improvement.

Originally I had the CPU pump plugged into one of the system fan headers; the temperature it was monitoring was less than useful for controlling pump rpm. I ended up moving a few cables to different headers so I could get the CPU pump back on CPU_OPT, where it can be controlled by CPU temperature. Set it to stay at 50% PWM until 50C, then slowly ramp to 73% at 80C, after which it jumps to 100%. This lets me keep the pump inaudible in any real-world use, while still being able to get full performance out of it in stress tests.
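
For reference, the curve is just a piecewise function of CPU temperature; a quick Python sketch of the same shape (the real curve lives in the motherboard firmware, this is only to illustrate the breakpoints):

Code:
def pump_duty(temp_c: float) -> float:
    """PWM duty (%) for the CPU pump as a function of CPU temperature (C)."""
    if temp_c <= 50.0:
        return 50.0                                  # hold at 50% up to 50 C
    if temp_c <= 80.0:
        # linear ramp: 50% at 50 C up to 73% at 80 C
        return 50.0 + (temp_c - 50.0) * (73.0 - 50.0) / 30.0
    return 100.0                                     # jump to 100% above 80 C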

I also turned down the fans cooling the GPU. One Noctua 92mm 2k rpm fan is over the GPU VRM and another Noctua 120mm 2k rpm fan is on the GPU radiator...2k rpm is not loud on these fans (they are really very good fans), but 1800rpm is silent (in a closed case) and still cools more than enough to maintain stability at the clocks I'm using, so I used PWM slopes to keep them around 1750rpm at low loads while allowing them to ramp up to full speed if ambients get unusually warm. I couldn't use the same slope for each as they are tied to different temperature sensors on the motherboard, but I think I have the general temperature trends figured out.

One final issue with the cooling setup is the fans I have on the CPU radiator. These are non-PWM 3-pin fans with FDB bearings that I had connected to a 4-pin Molex-to-PWM splitter. I was hoping that the fan headers on this motherboard could also control fan speed on 3-pin fans by manipulating fan voltage rather than requiring PWM, but after connecting the fans directly to the motherboard, this seems not to be the case. I'm also not convinced of the bearing longevity in a horizontal orientation...an FDB is essentially just an improved sleeve bearing and, despite the advertising material, I've never had them last as long horizontally as vertically.

I noticed that Arctic has a 4-pin PWM version of these fans that also comes with higher-end ball bearings: https://www.arctic.ac/us_en/p14-pwm-pst-co.html

These could be just what I need, but even good ball bearings tend to have unpleasant acoustic profiles at lower fan speeds that dBA or sone measurements won't reveal. Was trying to avoid buying anything new for this older setup, but I think I'll give these fans a go.
 
Just found my new P14 PWM PST COs (say that five times fast) on my porch. Spun them up with my blower just to check bearing noise, and to my surprise there was almost none, certainly not any more than the standard P14s with their FDBs. These may have the quietest ball bearings I've ever heard.

I haven't installed them yet, but my initial impression is very positive...quality is astounding for 12 dollar fans.

Will swap them in later tonight and see how the PWM functionality works.
 
The new fans are working well...was hoping for slightly more RPM out of them as they are taking power directly from the PSU rather than the motherboard's fan header, but it seems these samples max out a little below the FDB ones I had on there...possibly due to using PWM, or just manufacturing variance. Regardless, it's a tiny difference and not thermally relevant.

As expected from the initial tests, the bearings on these fans are completely inaudible in actual use. The sound profile of the entire system is much improved from where I started, which is surprising since these old AIOs aren't at all competitive with a full custom loop.

Got Elite: Dangerous back up and running without issue...though I may have to adjust the order in which my controllers are attached to my USB hub, which is always annoying as it mandates a re-calibration of the controls. New drivers, a new version of OBS, reverting to pre-mitigation microcode, and getting that 50-75MHz more out of the GPU add up to a small, but noticeable, performance increase. It helps keep me out of the sub-60 fps range in icy RESes and on planet surfaces at 4K with well-beyond-ultra settings.

fOP1C1A.jpg


Only four cables going into the back of the system now. This, plus new felt feet on the case, makes it easy to slide out and work on.

xs13CgM.png


One system down, five or six more to go.
 
The Corsair H55 that was on my 1080 Ti just died in a rather dramatic fashion.

The pump evidently failed, allowing the water in the block to boil (the GPU is fully loaded 24/7 as I mine on it in the fall/winter when not gaming), which ruptured one of the hoses...I heard a violent hiss, smelled hot coolant, and immediately knew it was one of the AIOs I had refurbished. Fortunately I was able to flip off the power supply within a few seconds, and I suspect all my parts are fine. Just finished pulling the card out, and am seeing if I can adapt one of my other coolers to it now.

Good thing I was in the middle of arguing about combat logging (ironically enough), or I might not have missed it and lost my GPU and/or PSU.

Edit: the fins on the cold plate weren't clogged, so I know the coolant mix I filled it with wasn't the source of the issue. Hard to tell if the pump actually stopped (the bearing felt a little sketchy, but it still spun), or if it was simply a combination of stress on the tubing and protracted high temperatures that caused the rupture.
 
The card looks fine. I dismantled and washed the whole thing, then took some time to underfill the GDDR5X ICs and the GPU package. Going to need two full days to set (I'm using some budget stuff made for ghetto reflow work and it cannot be quick-cured by elevating temperature...I've tried), but I figure I may as well get that out of the way before mounting one of my other Asetek coolers to it.

The problem with my other CLCs is that the tubes are shorter, which complicates placing the radiator. I may have to build a stand/brace out of a coat hanger or something instead of trying to suspend it further forward in the case with zip ties.

Thought about putting a proper GPU block on it and connecting it to a real loop, but as I'm about to retire this system, I'd rather save the parts for the 6800XT I hope to get soon.
 
The problem with my other CLCs is that the tubes are shorter, which complicates placing the radiator. I may have to build a stand/brace out of a coat hanger or something instead of trying to suspend it further forward in the case with zip ties.

If you need to make a solid part, might I suggest pool putty? It's like 2 part clay.

It dries waterproof, is quite rugged, and you can squish it into any shape. Compatible with plastic and metal. Sticks fairly firmly onto matt surfaces.

Been using it for respirator flanges, faceplate seals, etc for months now. Not sure about high temperature stability though.
 
If you need to make a solid part, might I suggest pool putty? It's like 2 part clay.

It dries waterproof, is quite rugged, and you can squish it into any shape. Compatible with plastic and metal. Sticks fairly firmly onto matt surfaces.

Been using it for respirator flanges, faceplate seals, etc for months now. Not sure about high temperature stability though.

Oh, this is just to elevate a cheap aluminum closed-loop cooler's radiator above the waterblock (so the pump doesn't ingest air that is trapped in the loop) when its tubing doesn't let me mount it to any of the fan emplacements in the case it's in.

It won't get very hot externally (unless something goes really wrong), but it also isn't really worth any time-consuming fabrication, and it needs to be easily reversible (so the card can be removed and the cooler swapped).

Pool putty, JB Weld, or something similar would certainly work, but it's a bit too permanent for this application. Might not be a bad idea for securing my coat hanger stand to the bottom of the case though...
 
System is back up and running.

Ended up just using some copper wire to suspend the fan and radiator from the same spot where I have the zip ties holding the PCI-E power cables. Due to the shorter hoses on this CLC, the radiator is about 5cm lower and 8-10cm further back than before. Tight fit, but it works fine.

I also have the block oriented over the GPU better, so that none of the die area is beyond the microfins on the cold plate. Because of this, and better TIM this time around, temps are about 3C cooler than before.

Hopefully the hoses on this one don't rupture!
 