Hardware & Technical: Hard drives vs. flash drives

From IBM

https://www-03.ibm.com/systems/uk/storage/flash/flash-array/blog/time-to-ditch-spinning-disks/index.html?cm_mmc=Display_Outbrain-_-Systems_Systems%20-%20All%20Flash%20Storage-_-EP_GB-_-21721465_NativeAd%20Outbrain%20for%20IBM%20Systems&cm_mmca1=000016CS&cm_mmca2=10002817&cm_mmca4=21721465&cm_mmca5=46057643&cm_mmca6=08d04613-6a5c-4cf5-b2b7-899718acff99&cvosrc=display.Outbrain.NativeAd%20Spinning%20disks%20obsolete%20Blog%20Tab%201_Other_DesktopMobileTablet_1x1&cvo_campaign=000016CS&cvo_pid=21721465

As technology continues to evolve almost daily, businesses are looking for ways to become faster, more efficient and ultimately more profitable. Swapping outdated spinning disk storage for flash is a way of ticking all three boxes. More than 2.5 exabytes of data are generated every day, and that figure is growing. Faced with this deluge of information, businesses are turning to more efficient and cost-effective ways to store, manage and access their data – and the smart ones are using flash. Below are three reasons why flash storage trumps more traditional spinning disk drives.
1. Flash is safer physical storage
With no moving parts, a simpler architecture and a smaller physical footprint, flash already has a head start over the more complicated and clunky spinning disk drives. These physical factors mean there is less operational burden as there is less that can go wrong.
Free from the need to mechanically seek in order to read and write data, it’s little wonder flash performs better than its ageing counterpart. And while it might cost more upfront, businesses will save money when they factor in the lower total cost of ownership brought about by physical benefits such as lower power usage, and the higher return on investment from increased performance and greater versatility.
2. Flash delivers in crucial storage environments
Enterprises were traditionally built on spinning disk storage. But that is changing. This year, enterprise spend on flash storage will surpass spend on spinning disks – a tipping point in the preferred storage option for businesses.
That’s because flash delivers more in crucial storage environments. Whether it’s business-critical storage, grid-scale cloud storage or big data storage, there are flash storage arrays that offer better performance, greater acceleration and more flexibility than spinning disks.
3. Flash makes easy what hard disks find hard
Businesses expect the highest possible level of service from their storage providers – and rightly so. Most providers run quality of service (QoS) – the management, monitoring and continual improvement of every aspect of their solution – as a core feature of their offering. And flash enables them to meet any QoS objectives that might be part of their contracts with customers.
It’s these factors, coupled with flash’s low latency and scalability, that are attracting a new wave of startups to the technology. On top of this, flash offers greater interoperability with software-defined storage (SDS) solutions than ageing spinning disks – making it the storage format of choice for many modern businesses, today and in the future.
 
NAND flash is still ten times the price per GB of magnetic storage, and until that gap narrows considerably further, mechanical hard drives will have a place.
 
Sure it does. They are trying to sell flash-based storage solutions, and despite flash's numerous advantages, it's still facing competition from magnetic media because of that cost differential.

Some segments are ready to ditch spinning disks, but the "all flash revolution" is still quite premature in many others.
 
I recall IBM pushing their AIX platform over Sun boxes; the head of procurement received many golf trip invites.
 
It's not only cost, but also ageing. AFAIK, in a high-I/O environment with large quantities of writes, SSDs and flash age faster and can suffer data loss. Right now I'm buying a dedicated server for MS SQL Server 2016, and my current I/O volume is about 2,000 IOPS. The logical way to get more speed is to buy an all-SSD server like the DELL 7400, for example, but over time the continuous writes can make any of those disks fail, whereas a 15k SAS environment is more reliable.

https://danielmiessler.com/blog/the-difference-between-ssd-and-flash-hard-drives/
 
It's not only cost, but also ageing. AFAIK, in a high-I/O environment with large quantities of writes, SSDs and flash age faster and can suffer data loss.

Yeah, limited number of P/E cycles can be problematic for NAND storage. There is a reason I use arrays of mechanical drives for raw video capture/surveillance uses...in some cases I can fill them three or four times a day and that would burn through the P/E cycles on MLC or TLC NAND very quickly.
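
The surveillance numbers above make for a quick back-of-the-envelope calculation. The P/E rating and write amplification figures below are illustrative assumptions (TLC NAND is often quoted around 1,000 cycles), not from any particular drive's datasheet:

```python
# Rough lifetime estimate for NAND under a full-drive-fills-per-day
# workload like raw video capture. All figures are assumptions.

pe_cycles = 1000           # assumed rated P/E cycles for TLC NAND
fills_per_day = 4          # full-drive writes per day (video capture)
write_amplification = 1.1  # large sequential writes keep WA low

# Each full-drive fill consumes roughly one P/E cycle per cell,
# scaled up by write amplification.
days = pe_cycles / (fills_per_day * write_amplification)
print(f"Estimated NAND life: {days:.0f} days (~{days / 365:.2f} years)")
```

Under those assumptions the drive's NAND is exhausted in well under a year, which is why mechanical drives still make sense for that workload.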

Still, even for most enterprise use, durability isn't that big an issue. Huge amounts of sequential writes are rare outside of backups or archival, which isn't done to the same drives frequently, and random accesses are so much faster on flash that the cost of replacing the drives frequently is probably worth it.

Right now I'm buying a dedicated server for MS SQL Server 2016, and my current I/O volume is about 2,000 IOPS. The logical way to get more speed is to buy an all-SSD server like the DELL 7400, for example, but over time the continuous writes can make any of those disks fail, whereas a 15k SAS environment is more reliable.

A single SSD will likely handle the I/O volume of a modest SQL server, and the number of 15k RPM drives you might need for comparable performance could well make them less cost-effective, even if the SSDs need replacing five times as often.
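
To put rough numbers on that, here is a drive-count comparison for the ~2,000 IOPS workload mentioned above. The per-drive IOPS figures are typical ballpark values, not measurements of any specific product:

```python
import math

# Back-of-the-envelope drive counts for a random-I/O target.
# Per-drive figures are rough industry ballparks, not datasheet values.
target_iops = 2000
hdd_15k_iops = 180        # typical random IOPS for one 15k RPM SAS drive
ssd_iops = 50_000         # conservative figure for one SATA/SAS SSD

hdds_needed = math.ceil(target_iops / hdd_15k_iops)
ssds_needed = math.ceil(target_iops / ssd_iops)
print(f"15k HDDs needed: {hdds_needed}, SSDs needed: {ssds_needed}")
```

Roughly a dozen 15k spindles versus a single SSD, before you even count controllers, bays and power.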
 
Yeah, limited number of P/E cycles can be problematic for NAND storage. There is a reason I use arrays of mechanical drives for raw video capture/surveillance uses...in some cases I can fill them three or four times a day and that would burn through the P/E cycles on MLC or TLC NAND very quickly.

Still, even for most enterprise use, durability isn't that big an issue. Huge amounts of sequential writes are rare outside of backups or archival, which isn't done to the same drives frequently, and random accesses are so much faster on flash that the cost of replacing the drives frequently is probably worth it.



A single SSD will likely handle the I/O volume of a modest SQL server, and the number of 15k RPM drives you might need for comparable performance could well make them less cost-effective, even if the SSDs need replacing five times as often.

Agreed, but in a corporate environment with Dynamics AX 2012 R3, using SSRS and SSAS data cubes, the writes really add up!

Another proposal is to set up the server with six 960 GB read-intensive 12 Gbps SAS SSDs (512n, 2.5in hot-plug, PX05SR, 1 DWPD, 1,752 TBW). These are expensive, but more reliable than standard SSDs.
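
For what it's worth, those endurance figures are internally consistent if you assume the usual 5-year warranty term (the warranty length is my assumption; the capacity, DWPD and TBW numbers are from the drive spec quoted above):

```python
# Sanity check: 1 DWPD on a 960 GB drive over a 5-year warranty
# should match the quoted 1,752 TBW endurance rating.

capacity_gb = 960
dwpd = 1                 # drive writes per day (quoted spec)
warranty_years = 5       # assumed warranty term

tbw = capacity_gb * dwpd * warranty_years * 365 / 1000  # terabytes written
print(f"Implied endurance: {tbw:.0f} TBW")
```

So "read intensive" here means the firmware budget is one full overwrite per day for five years.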
 
I spent over 5 years at Micron doing development and testing of SSDs.

Data integrity on SSDs (or the lack thereof), no direct overwrite, and the whole block/frame conundrum leave a lot to be desired IMO. I will admit to having a small one as a boot drive for my Windows gaming box, but I have a large spinner that I put everything other than the OS on.

I'm not ready to trust them completely just yet.
 
I use SSDs on my desktops, but they automagically do incremental backups every night to a server running a redundant RAID array that can survive two simultaneous drive failures and still recover.

I have too much historical data to risk loss of my primary desktop.
 
Does anyone still back up to tape?

Personal or business? The companies I have worked for typically do tape backup. I have not had tape backup since the days of my SCSI 4mm DAT drive. Now I have a small NAS that is used to back up the home computers (3) and the iPads and iPhones.
 
I've got an M.2 SSD that I adore. Significantly sped up my OS and the few choice games I have on it.

Only complaint I have is the incentive to keep the drive empty...
 
I've got an M.2 SSD that I adore. Significantly sped up my OS and the few choice games I have on it.

Only complaint I have is the incentive to keep the drive empty...

Leave some empty space on there, because the data already written has to be moved around: read/write disturbs, plus the fact that data will eventually degrade just from sitting in one spot too long. NAND is funny like that.
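
That moving-around is the firmware's wear leveling at work. As a toy illustration (a deliberately simplified hypothetical model, nothing like a production flash translation layer):

```python
# Toy sketch of dynamic wear leveling: writes always land on the
# least-erased free block, so wear spreads across the whole drive
# instead of burning out one spot. Real FTLs also relocate cold data
# and refresh pages to counter read disturb and charge leakage.

class ToyFTL:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks
        self.data = [None] * n_blocks      # None marks a free block

    def write(self, payload):
        # Choose the least-worn free block.
        free = [i for i, d in enumerate(self.data) if d is None]
        target = min(free, key=lambda i: self.erase_count[i])
        self.data[target] = payload
        return target

    def erase(self, block):
        self.data[block] = None
        self.erase_count[block] += 1

ftl = ToyFTL(3)
a = ftl.write("log A")   # lands in block 0 (all wear counts equal)
ftl.write("log B")       # lands in block 1
ftl.erase(a)             # block 0 now has one erase on it
c = ftl.write("log C")   # picks fresh block 2, not the worn block 0
```

This is also why free space helps: the more empty blocks the allocator can choose from, the more evenly the erases spread.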
 
Leave some empty space on there, because the data already written has to be moved around: read/write disturbs, plus the fact that data will eventually degrade just from sitting in one spot too long. NAND is funny like that.

That's exactly why I leave the drive half empty! 256 GB was probably too small.
 
Leave some empty space on there, because the data already written has to be moved around: read/write disturbs, plus the fact that data will eventually degrade just from sitting in one spot too long. NAND is funny like that.

Isn't that why drives come with over-provisioning built in? Not sure there is much value to be gained by a user imposing much more on top of that.

Or dodge the issue to a large degree by buying Optane. Just watch out for sticker shock.
 
Isn't that why drives come with over-provisioning built in? Not sure there is much value to be gained by a user imposing much more on top of that.

Or dodge the issue to a large degree by buying Optane. Just watch out for sticker shock.

Over-provisioning is required because NAND dies rather quickly compared to other storage media. It is not the most stable data recording medium, but we managed to make it work. The firmware is a constant joke with all the twists and turns it has to go through to keep your data safe.

And if you want to put your trust in a manufacturer who uses the most cost-effective (read cheap) way to meet product definitions, knock yourself out. Do you even know how much storage is dedicated to over-provisioning in your drive?
 
Leave some empty space on there, because the data already written has to be moved around: read/write disturbs, plus the fact that data will eventually degrade just from sitting in one spot too long. NAND is funny like that.

Most modern SSDs don't perform much differently when full versus when empty. They tend to ship with significant spare area already provisioned, and controllers/firmware have come a long way.
 
Most modern SSDs don't perform much differently when full versus when empty. They tend to ship with significant spare area already provisioned, and controllers/firmware have come a long way.

I did firmware for over 30 years in the storage industry (disk, tape, optical, and NAND storage), so I've been up close and personal with the firmware of SSDs. Well, except for the last year or so. Watching these new kids try stuffing C++ into a deterministic hard real-time system because they can't use C/asm is a riot. And watching them attempt to debug is hilarious as well. :D
 