
Thread: New PC for ED - Expert Advice Required!

  1. #46
    I am in the USA. In 2015 I was able to build and purchase (on eBay) a custom gaming PC with a quad-core i5 CPU, 16 GB RAM and a 500 GB HDD for $600 USD. I already had a Windows 7 Ultimate license, keyboard, mouse and 1080p monitor. So far, the only upgrade I have made is an NVIDIA GTX 1080 GPU for $695 USD.

    You can go on eBay and search for "custom gaming PC". Filter your search by "highest rated sellers". Most of these sellers will allow you to customize the advertised PC build (scroll down to the bottom of the ad) and have it created as a new eBay item you can purchase right away if desired. Here in the USA I was able to avoid both sales tax and shipping fees, further reducing the purchase price. Mine came assembled with driver disks included, no OS, no software installed and no support, but after a day spent installing the OS, drivers and various software, mine has been absolutely trouble-free.

  2. #47
    Originally Posted by Morbad View Post (Source)
    Nothing else needs any favors. CPU and GPU temperatures are essentially always the limiting factors when pushing performance. Everything else, in most reasonably well balanced and ventilated setups, is going to have at least 20C of headroom.

    You'll want positive pressure if your intakes are filtered.

    Several of my systems have no exhaust fans at all. I just open them up in the area I want air to leave, and intake air through what needs to be coldest.

    Reverse also applies. I want the coldest air possible cooling the hottest running components.

    Optimal temperature for a mechanical HDD is 35-45C. Below 30C, failure rates increase rapidly. This likely has to do with the viscosity of the lubricants used... too cold, viscosity climbs, which means more wear.

    https://static.googleusercontent.com...k_failures.pdf -- section 3.4.

    Negative pressure cools better, positive pressure reduces dust. No PC fan filter is going to block very small particles, so you're always going to have to clean out the system eventually; where you're looking at high-performance systems, I'd say negative pressure systems win. Positive pressure systems also suffer more from static air pockets.

    Have you read anything about IC and other common PC component failure rates vs temperature? It has been a while since I have, so I am not about to start misquoting from memory.

    The point I was making re the duct experiment is that if you're drawing cooler air into the case, for example by directing the hot air straight out, you're reducing your system temps.

    No mechanical drive is going to be running below 30 degrees unless your machine is in the fridge or the datacentre has properly messed up the AC control systems. You'll see failure rates start increasing at about 45 degrees, I think, and increase massively above 55, which isn't that uncommon in systems where hot air is blowing on the drives or the case has poor airflow.

  3. #48
    Originally Posted by Dural View Post (Source)
    Negative pressure cools better, positive pressure reduces dust. No PC fan filter is going to block very small particles, so you're always going to have to clean out the system eventually; where you're looking at high-performance systems, I'd say negative pressure systems win. Positive pressure systems also suffer more from static air pockets.
    Filters, and making sure that as much intake air as possible has to go through them, are huge boons to maintenance. It's not practical to keep out all dust, as you note, but you can keep out most of it; filters will significantly extend the time between full system cleanings. Well-implemented filters can also be cleaned without moving or shutting anything down.

    Originally Posted by Dural View Post (Source)
    Have you read anything about IC and other common PC component failure rates vs temperature? It has been a while since I have, so I am not about to start misquoting from memory.
    The rough rule of thumb for both electromigration-based failures (derived from Black's equation), which will eventually kill essentially every IC that doesn't physically break or corrode first, and the speed of chemical reactions, which is important for capacitors and anything exposed to oxygen, is that for every 10C increase in temperature, you cut lifespan roughly in half.

    However, with components rated for several years of operation at temps ranging from 85-125C, going from 35C to 40 or 45C is meaningless. Those 125C MOSFETs, rated to deliver three times the current even my peak overclock can pull, and the 105C-rated capacitors next to them, are likely to last a very long time at 50-60C. Same goes for chipsets (all rated for over 80C) and DRAM (85C).
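
    To put that rule of thumb in rough numbers, here's a quick sketch (the rated lifetime and temperatures below are purely illustrative assumptions, not datasheet figures):

    Code:
    def estimated_life(rated_hours, rated_temp_c, actual_temp_c):
        # Halve the expected life for every 10C above the rated temperature,
        # and double it for every 10C below (the simple rule-of-thumb scaling).
        return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

    # e.g. a capacitor rated for 5,000 hours at 105C, running at 55C:
    print(estimated_life(5000, 105, 55))  # ~160,000 hours, roughly 18 years of 24/7 use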

    In realistic use, only very hot and heavily overclocked and/or overvolted parts are likely to fail from electromigration, oxidation, or other non-mechanical stresses. Thermal cycling will probably spell the end of a critical solder joint or BGA ball before then. This is one of the reasons I scoff at some people's avoidance of used mining hardware, unless there is evidence of poorly cooled VRMs. Most mining GPUs will have a few dozen thermal cycles on them over the span of their profitable mining lifetimes, rather than the thousands that would be on a gaming part of similar age. Anywhere electricity isn't free, miners are also likely to run anything but cutting-edge hardware underclocked and undervolted, because peak performance-per-watt is always found well below stock clocks.

    Anyway, most of my CPUs are rated for junction temperatures of 90-100C at half the current I can put through them at peak (I tend to overclock the snot out of them and run them hard), and I can come very close to these limits, so every single degree counts.

    My modern GPUs tend to be cooler, but they also have to be to achieve the clocks I want. GPUs in general tend to have realistic peak clocks more limited by temperature than CPUs (though that may only seem to be the case because of their lower thermal density, which makes them more responsive to cooling), and my NVIDIA GPUs will automatically adjust their voltage/frequency curves as temperatures change. This 1080 Ti, which has its original (though far better than reference) air cooler on it, cannot be allowed past 68C if I want it to maintain the clock speeds I have set, nor will it remain stable if I adjust the frequency curves further to compensate. Since power consumption actually goes up with temperature, I start hitting power limits in some tasks if it gets any warmer as well. So, again, every degree counts here.
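
    If you want to watch that behaviour on your own card, a rough sketch like this will log temperature, SM clock and power draw so you can see the boost clock step down as the GPU warms up (it assumes nvidia-smi is installed and on the PATH, and only reads the first GPU):

    Code:
    import subprocess, time

    # Poll nvidia-smi once a second and print temperature, SM clock and power draw.
    QUERY = ["nvidia-smi",
             "--query-gpu=temperature.gpu,clocks.sm,power.draw",
             "--format=csv,noheader,nounits"]

    while True:
        temp, clock, power = subprocess.check_output(QUERY, text=True).splitlines()[0].split(", ")
        print(f"{temp} C   {clock} MHz   {power} W")
        time.sleep(1)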

    I'd trade 20C hotter on everything else inside the case for another 5C less on my CPU and GPU, if it came to it... though if the situation were that dire, I'd probably just be watercooling everything and mounting the radiators externally. As it stands, on my systems that have used radiators as intakes, the trade-off was much less severe.

    Originally Posted by Dural View Post (Source)
    No mechanical drive is going to be running below 30 degrees unless your machine is in the fridge or the datacentre has properly messed up the AC control systems. You'll see failure rates start increasing at about 45 degrees, I think, and increase massively above 55, which isn't that uncommon in systems where hot air is blowing on the drives or the case has poor airflow.
    Most mechanical drives, when given any airflow, will be within about 5-10C of ambient. They don't produce much heat and don't need much cooling.

    It's 30C in the room with my oldest system (the only one that still has any 3.5" drives), and the three mechanical HDDs in the lower front cage (which never sleep or spin down) are all in the 35C ballpark. In my main system, which has an array of 2.5" WD Black laptop drives in an IcyDock cage that I removed the fan from, ambients are similar, and the drives, despite being in active use and almost touching each other with zero airflow, are 41-42C.

  4. #49
    Originally Posted by Morbad View Post (Source)
    Filters, and making sure that as much intake air as possible has to go through them, are huge boons to maintenance. [...] Most mechanical drives, when given any airflow, will be within about 5-10C of ambient.
    OK, so the only reason to use a positive pressure setup is if you're worried about dusting more frequently, and although one may be operating the components within their design parameters, one may still be accelerating the demise of the system through other factors if the temperature variation isn't minimised. I'm sure you personally have well designed and well set up systems, but in principle I favour negative pressure setups because they cool more effectively.

    Drives run quite a lot hotter than that in a lot of systems. It has been a while since I did hands-on sysadmin work, and I don't think the infrastructure guys will thank me for asking them to pull the drive temps from a load of workstations, but it really isn't that uncommon to see 45 or 50 degrees in mass-produced systems. I've seen a few home builds, especially media servers, where drives have failed very prematurely because of temperature-related issues, so again in principle, platter drives hate getting too hot, although I'm sure yours are quite happy by the sounds of things.
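
    If anyone does fancy pulling those numbers themselves, a minimal sketch like this would do it (assuming smartmontools is installed and it's run with enough privileges; the device paths are only examples and will vary per machine):

    Code:
    import subprocess

    # Print the SMART temperature (attribute 194 Temperature_Celsius, or 190) for a few drives.
    for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):
        out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows are: ID, name, flag, value, worst, thresh, type, updated, when-failed, raw
            if len(fields) >= 10 and fields[1] in ("Temperature_Celsius", "Airflow_Temperature_Cel"):
                print(f"{dev}: {fields[9]} C")
                break
        else:
            print(f"{dev}: no temperature attribute reported")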

  5. #50
    Originally Posted by Para Handy View Post (Source)
    I suggest that if you can't see the difference you really need to make an appointment with an optician - and I am not being rude, just honest.
    Sadly, even the best optician on the planet cannot help, thanks to Father Time. My eyes have natural AA on them too; I believe they're now at the equivalent of FXAA.
