Software vs Hardware RAID

wallacetan

Member

etherealism

Senior Member
If you value your data and are using four or more 3TB-and-above drives for RAID storage, you should avoid RAID 5/6 or any other level that relies on parity calculations.

Why RAID 5 stops working in 2009

When No Redundancy Is More Reliable – The Myth of Redundancy

RAID 1 (mirroring) is the only safe option if you are using 3TB and above hard drives.

I think you missed the point of the articles, which is the increasing risk of unrecoverable read errors as capacities rise.
There's no hard and fast rule that 3TB is the limit for sensible RAID protection.

Anyway, in reply: nowadays there are alternative schemes like ZFS where you can use higher orders of parity than strictly required to guard against such errors during rebuilding.
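
For instance, here is a minimal, hedged sketch of creating a double-parity ZFS pool (the pool name 'tank' and the /dev/sdX device names are placeholders; raidz3 would give triple parity):
Code:
# minimal sketch, assuming six whole disks are dedicated to the pool;
# 'tank' and the /dev/sdX names below are placeholders for your own setup
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool status tank   # shows the raidz2 vdev, plus scrub/resilver progress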
 

wallacetan

Member
I think you missed the point of the articles, which is the increasing risk of unrecoverable read errors as capacities rise.
There's no hard and fast rule that 3TB is the limit for sensible RAID protection.

Anyway, in reply: nowadays there are alternative schemes like ZFS where you can use higher orders of parity than strictly required to guard against such errors during rebuilding.

Why RAID 5 stops working in 2009
Reads fail: SATA drives are commonly specified with an unrecoverable read error rate (URE) of 10^14. Which means that once every 100,000,000,000,000 bits, the disk will very politely tell you that, so sorry, but I really, truly can't read that sector back to you.

WD Red Hard Drive for NAS Drive Specification Sheet (PDF)


When No Redundancy Is More Reliable – The Myth of Redundancy
What happens that scares us during a RAID 5 resilver operation is that an unrecoverable read error (URE) can occur. When it does the resilver operation halts and the array is left in a useless state – all data on the array is lost. On common SATA drives the rate of URE is 10^14, or once every twelve terabytes of read operations. That means that a six terabyte array being resilvered has a roughly fifty percent chance of hitting a URE and failing. Fifty percent chance of failure is insanely high. Imagine if your car had a fifty percent chance of the wheels falling off every time that you drove it. So with a small (by today’s standards) six terabyte RAID 5 array using 10^14 URE SATA drives, if we were to lose a single drive, we have only a fifty percent chance that the array will recover assuming the drive is replaced immediately.

Correct me if I am wrong, but from what I have read of the statistics and calculations, if you use 4x 3TB drives in a RAID 5/6 array, replace 1 drive, and run a resilvering/rebuilding operation, you will most likely encounter an unrecoverable read error (URE).

Because when you resilver/rebuild a RAID 5/6 array, or any redundant disk system that uses parity calculations (ZFS included), you will have to read on the order of 4x3TB=12TB (more than 100,000,000,000,000 bits). And as specified by the hard disk manufacturer, you will most likely encounter an unrecoverable read error (URE).

When you resilver/rebuild a 2x 3TB RAID 1 (mirror) array, it only needs to read 3TB of data.

In addition, with a RAID 1 (mirror) array, even if the resilver operation fails, you can still read most of the data directly off the surviving drive.
With a RAID 5/6 array, you lose the entire array if the resilvering/rebuilding operation fails.
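
As a rough back-of-envelope check of that claim (a hedged estimate only: it takes the 1-in-10^14 URE spec at face value and assumes the full 12TB is read during the rebuild; real drives and rebuild logic behave less simply):
Code:
# P(at least one URE) = 1 - (1 - 1e-14)^bits_read
awk 'BEGIN {
  bits = 12e12 * 8                       # ~12TB read during the rebuild
  p_clean = exp(bits * log(1 - 1e-14))   # chance every bit reads back cleanly
  printf "P(at least one URE) ~ %.0f%%\n", (1 - p_clean) * 100
}'
# prints roughly 62% under these assumptions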
 

wallacetan

Member
Joined
May 21, 2000
Messages
129
Reaction score
0
What are the pros and cons of hardware raid vs software raid???

Which would give me the best choice of redundancy and performance, hardware or software raid?
I am thinking about using RAID 0, as it seems to give me the best speed. As I understand things, if 1 drive fails the RAID setup will lose all my files until a new HDD is added.
Will either SW or HW RAID give me the option to move the RAID to a new PC with data intact if something like the mobo fails???

Some sources state that home users mostly use SW RAID, while companies that run servers use HW RAID.

But for me as a home user which one should I go for???

Oh ya, I forgot to add that I am planning to use the RAID 0 setup as a boot drive.

Performance
In the past, when CPUs were slow and single core/thread, it made sense to use hardware RAID. Spinning (non-SSD) hard disks' random access times and read/write speeds have remained roughly constant, while CPU speeds, core counts and thread counts have increased tremendously.

The "dedicated processing power" to calculate the XOR parity for RAID 5/6 on the dedicated processor of a hardware RAID card is minuscule compared to just one of the many cores/threads of a modern CPU.

Also, because the hard disks are slow relative to a single CPU core, the bottleneck is NOT the
software RAID 5 XOR parity calculation, but the read/write operations on the hard disks.

Furthermore, most desktop CPUs start with 4 cores; dedicating 1 core just to the software RAID 5 XOR parity calculation will not affect the desktop's performance in any way. And I do not expect to see 100% CPU utilisation on that dedicated core.
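
One hedged way to sanity-check this on a Linux software RAID box is to look at the parity/XOR benchmarks the md modules print when they load (the exact log wording varies by kernel version) and compare them against what the disks can actually deliver:
Code:
# kernel's own XOR / RAID6 parity routine benchmarks (wording varies by kernel)
dmesg | grep -iE 'xor|raid6'
# the reported speeds are typically several GB/s per core, i.e. orders of
# magnitude faster than the sequential throughput of spinning disks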

Reliability
Enterprise hardware RAID card's "Battery Backed Cache" FUD and Myth.
Why not just use a UPS for a system running software RAID?
Also remember that all batteries need to be replaced after 1-2 years.
Will that hardware RAID card model, its parts and its manufacturer still exist and be available after 3-5 years?

Recovery and Portability
With software RAID, you can always rebuild the array on another machine.
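
For example, a minimal sketch of moving a Linux md array to a replacement machine; the array metadata stored on the disks themselves is what makes this work (no device names need to match the old box):
Code:
# on the new machine, after plugging in the old disks
mdadm --examine --scan      # lists the arrays described by the disks' metadata
mdadm --assemble --scan     # assembles those arrays from that metadata
cat /proc/mdstat            # confirm the arrays are up and (re)syncing if needed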

With hardware RAID, however, you will need to connect your hard disks to the same model of hardware RAID card.
In 3-5 years, what if your hardware RAID card fails?
And that model is no longer produced?
Or the manufacturer no longer exists?

Best case scenario, you can find the same hardware RAID card second-hand on eBay.
But can you wait 1-2 weeks for the shipment?
No hardware RAID array to restore your backup for 1-2 weeks?
How useful is hardware RAID when it cannot restore the data for 1-2 weeks?

Conclusion
Software RAID: +Performance +Reliability +Recovery and Portability
Hardware RAID: -Performance -Reliability -Recovery and Portability
 

cscs3

Arch-Supremacy Member
Your comment is right on the spot for hardware RAID end users.
This is especially true as storage sizes get bigger. A few days to weeks of outage is mostly not an option.
 

davidktw

Arch-Supremacy Member
Thank you for sharing.
I'm using Synology, and I know it works with SWR. That's why I said SWR can be more flexible: you can just migrate to a new model, the RAID can be upgraded without reformatting the drives, and you can create multiple volumes.

HWR is usually direct-attached storage, and the outside of its enclosure should have a button to configure the RAID type. If not, that means it's SWR, or it can only do 1 type of RAID.

Well, you have only used Synology. I bet you have not used a H/W RAID card before? I suppose here we need to understand what H/W RAID and S/W RAID are. S/W RAID has much of the RAID capability managed by either the kernel or driver modules running in an OS. That is your understanding, I hope.

But H/W RAID is not without S/W either. It's uncommon for a design of such complexity to run without S/W; the firmware itself is software. Is anyone aware that there is a VM running on some of the smart cards used widely in this world? Of course, you need electricity for it to function, which is when you plug the smart card into a slot and it gets booted up and running. My point here is that just because it's H/W RAID, which in terminology means taking a large part of the RAID functionality onto the daughter board itself, doesn't mean it's fixed or inflexible.

To configure a H/W RAID card, the most common approach is during the bootup process, where the card is recognised in the bootup sequence and has the opportunity to present its own menu during that split moment of time, for the user to intervene by pressing a keyboard button to activate the menu after the POST sequence. Another way is for the H/W RAID card to offer interactivity via system ports or memory ports, with drivers written specially for each operating system. Using these drivers, software can be written to communicate with the daughter board to change settings, configure arrays, take hard disks online or offline, perform RAID migration and many more functions. If you have not managed one before, you are greatly mistaken about its capability.

I have no idea what button you are expecting, but if you ask me, I can build whatever button you want, whether it's H/W RAID or S/W RAID. What physical interface the NAS enclosure offers has TOTALLY NOTHING to do with whether it's S/W or H/W RAID. It's an independent decision made by the manufacturer.

With an enterprise-level H/W RAID, I am given hot-plug capability, which most SATA interfaces on motherboards don't offer (please don't assume all SATA interfaces are hot-pluggable). With this feature, I can slot a hard disk into a new slot, it will be electrically recognised by the H/W RAID and presented on the management console. Then you have the option to initialise it, add it as a spare to an existing array, create a new array out of new hard disks, or perform an online RAID migration, from perhaps RAID 5 to RAID 6, RAID 1 to RAID 5, etc.

Please don't use what you see in the consumer space as an indication of what H/W RAID offerings are. They are a lot more capable in the enterprise industry, because that is not what normal consumers are willing to pay for, and most consumers are not at the level of demanding such features. Most can't even understand what RAID is.
 

davidktw

Arch-Supremacy Member

There is a lot to tackle in this piece of writeup, given that you made a negative conclusion against H/W RAID. May I ask what your experience with enterprise H/W RAID options is, which specific models you have managed, and how you concluded that H/W manufacturers in the enterprise industry do not offer replacement hardware for more than 5 years?

Next, just because you don't see 100% CPU utilisation, does that mean S/W RAID is a good enough option? Add on: I didn't say S/W RAID is not a good option, but how does that make H/W RAID not good?

How exactly do you fix RAID operations onto just one core or one processor, can you advise? I have yet to come across CPU affinity for RAID operations or disk-related operations in any kernel available; perhaps you can advise?

About the battery, what's wrong with a battery that needs to be changed every 1-2 years? Did anyone promise that the battery will last forever? Is a downtime of 30 minutes each year to replace the battery a hassle for an enterprise industry server, let alone a consumer? Next, please figure out where the battery sits in the sequence of I/O write operations from the application down to the storage device platters, versus the UPS battery, and how it is different. Have a deep thought about the different scenarios of system failures and how they fail a storage system. You will then understand the value of each kind of electrical contingency in the whole operation. A UPS and a battery-backed cache protect against different forms of failure. Electrical is not the only reason a system fails; even a flawless operating system fails because of ions in the air (have you heard of it? I know about it from experts managing data centres).

I somehow have the feeling people in some parts of the world assume things are designed to run forever, without considering migration plans. After you factor in migration plans, it seems strange to expect support for some stuff after 5 or 10 years, because a migration would have refreshed the whole suite of H/W involved. Just because it's not happening in the consumer world doesn't mean it's not happening in the enterprise world. True enough, things don't move fast in some parts of the enterprise world, but it doesn't mean they refresh slowly. When a sweeping statement is made about S/W RAID versus H/W RAID, perhaps it would be wise to indicate which portion of the industry one is referring to.
 

wallacetan

Member
May I ask what your experience with enterprise H/W RAID options is,

I have managed servers running Windows NT4, 2000 and currently 2003, 2008R2.

I have used hardware RAID in Windows 2000 and 2003 servers.
All I can remember about the NT4 servers setup, was 2 beige HP Proliant tower servers.

If I remember correctly, more than 10 years ago.
Windows 2000 server A and B:
Adaptec hardware RAID card (4 IDE ports) with additional 128MB SDRAM
- RAID5 (4x IDE hard disks)

Windows 2000 server C:
Adaptec hardware RAID card (2 IDE ports)
- Hardware RAID1 (2x IDE hard drives) for system disk
Promise hardware RAID card (4 IDE ports)
- Hardware RAID5 (4x IDE hard drives)

Windows 2000 server D:
Promise hardware RAID card (8 IDE ports)
- Hardware RAID5 (8x IDE hard drives)

Windows 2003 server A:
Highpoint RocketRaid with 8 port running 2 arrays:
- Hardware RAID1 (2x 320GB SATA HDD) for system disk
- Hardware RAID5 (6x 500GB SATA HDD)

Windows 2003 server B:
- Software RAID1 (2x 320GB SATA HDD) on-board SATA for system disk
- 3Ware hardware RAID5 (8x 750GB SATA HDD)

All my hardware RAID cards do not have Battery Backup Unit (BBU) or Battery Backed Cache, as I have UPS for the servers in the office and the servers in the datacenter have redundant power supply (UPS+diesel power generators).

For the new servers running Windows 2008R2, I am using only software RAID 1 (mirror) with 6x 3TB drives on a Linux KVM virtualization host.
There are 4x software RAID 1 arrays (md0, md1, md2, md3), using LVM to allocate storage blocks to the Windows guest OSes.
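
For context, a minimal sketch of that kind of layout, one mirror handed to LVM as a physical volume (the device, volume group and guest names here are illustrative placeholders, not the actual setup):
Code:
# one RAID 1 pair becomes an LVM physical volume
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg_guests /dev/md0
# carve out a block device for one Windows guest (name and size are examples)
lvcreate -L 200G -n win2008r2 vg_guests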

There are also other Linux/BSD (Fedora, Slackware, CentOS, pfSense) servers I have managed, all using software RAID 1 and RAID 10.

I hope you are satisfied with my resume. :)

which specific models you have managed, and how you concluded that H/W manufacturers in the enterprise industry do not offer replacement hardware for more than 5 years.

Highpoint RocketRaid is no longer distributed in Singapore.
If I want to get a replacement hardware RAID card, I have to get it from Amazon or eBay.
Similarly, the more expensive 3Ware hardware RAID card is no longer available.

What do you suggest I do with the 2 servers running Highpoint RocketRaid and 3Ware?

Next, just because you don't see 100% CPU utilisation, does that mean S/W RAID is a good enough option? Add on: I didn't say S/W RAID is not a good option, but how does that make H/W RAID not good?

How exactly do you fix RAID operations onto just one core or one processor, can you advise? I have yet to come across CPU affinity for RAID operations or disk-related operations in any kernel available; perhaps you can advise?

What I am trying to illustrate is:
Any modern CPU with more than 2 cores will not have any performance issues running software RAID 5 or ZFS RAID-Z.
The MIPS of a single core of a multi-core CPU is much, much greater than the minuscule MIPS of the dedicated XOR processor on a hardware RAID card.

I don't see the need to set CPU affinity for RAID operations.
Other non-RAID processes, like Apache, can always use the other CPU cores; let the OS give a core to whichever process needs it, as there are many cores to go around. My current cheap Xeon E3-1230 already has 4 cores and 8 threads. I would think the bottleneck is the read/write speed of the (non-SSD) SATA hard disks.
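
If anyone wants to check this on a live box rather than argue it, one rough way (mpstat comes from the sysstat package; md0 is a placeholder) is to kick off a read-only array check and watch per-core CPU usage while it runs:
Code:
echo check > /sys/block/md0/md/sync_action   # start a read-only scrub of md0
mpstat -P ALL 5                              # per-core CPU utilisation every 5s
cat /proc/mdstat                             # scrub progress and current speed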

About the battery, what's wrong with a battery that needs to be changed every 1-2 years? Did anyone promise that the battery will last forever? Is a downtime of 30 minutes each year to replace the battery a hassle for an enterprise industry server, let alone a consumer?

My point is:
Can you get that battery for that model of hardware RAID card after 1-2 years?
I sure don't have a crystal ball.
A company might be good or reputable now, but might not be around in 3-5 years.
Remember Compaq, Sun, Palm? All were good, reputable companies, but they are not around now.
And who can predict if Nokia or Blackberry will be around in the next 3-5 years?

Also, you cannot stock up on batteries, as a battery's lifespan starts the moment it leaves the production line.

About the battery, what's wrong with a battery that needs to be changed every 1-2 years? Did anyone promise that the battery will last forever? Is a downtime of 30 minutes each year to replace the battery a hassle for an enterprise industry server, let alone a consumer? Next, please figure out where the battery sits in the sequence of I/O write operations from the application down to the storage device platters, versus the UPS battery, and how it is different. Have a deep thought about the different scenarios of system failures and how they fail a storage system. You will then understand the value of each kind of electrical contingency in the whole operation. A UPS and a battery-backed cache protect against different forms of failure. Electrical is not the only reason a system fails; even a flawless operating system fails because of ions in the air (have you heard of it? I know about it from experts managing data centres).

This is the first time I have heard this (failures because of ions in the air); all I have read about battery-backed cache is that it protects against loss of power.

If you believe this (failure because of ions in the air) scenario is more likely than a plain power failure, then yes, you should invest in battery-backed cache.

However, I believe otherwise; hardware RAID product marketing materials only highlight that battery-backed cache protects against normal power failure.

RAID - Wikipedia, the free encyclopedia
For data safety, the write-back cache of an operating system or individual drive might need to be turned off in order to ensure that as much data as possible is actually written to secondary storage before some failure (such as a loss of power); unfortunately, turning off the write-back cache has a performance penalty that can be significant depending on the workload and command queuing support. In contrast, a hardware RAID controller may carry a dedicated battery-powered write-back cache of its own, thereby allowing for efficient operation that is also relatively safe. Fortunately, it is possible to avoid such problems with a software controller by constructing a RAID with safer components; for instance, each drive could have its own battery or capacitor on its own write-back cache, and the drive could implement atomicity in various ways, and the entire RAID or computing system could be powered by a UPS, etc.

I somehow have the feeling people in some parts of the world assume things are designed to run forever, without considering migration plans. After you factor in migration plans, it seems strange to expect support for some stuff after 5 or 10 years, because a migration would have refreshed the whole suite of H/W involved. Just because it's not happening in the consumer world doesn't mean it's not happening in the enterprise world. True enough, things don't move fast in some parts of the enterprise world, but it doesn't mean they refresh slowly. When a sweeping statement is made about S/W RAID versus H/W RAID, perhaps it would be wise to indicate which portion of the industry one is referring to.

No one can predict a hardware RAID card failure, or whether a replacement hardware RAID card will still be available in 3-5 years.
What if the hardware RAID card fails before the planned migration in 3-5 years?

Even if you plan a migration in 3 years, you cannot predict that the hardware RAID card will still be available within those 3 years.

I don't assume hardware runs forever; I have replaced countless hard disks, PSUs, etc. I have migrated server hardware across NT4, 2000, 2003 and 2008R2 over the past >10 years.

BTW, the worst type of hardware RAID card is the kind with a tiny fan on the card: if the fan fails you will not know about it, and the card may then fail from overheating, because it was designed to be used with a fan. Also, it is impossible to get a replacement fan.

I don't subscribe to 'enterprise' level hardware. To me, it is just another way hardware vendors (EMC, Sun) can sell you overpriced hardware.

I believe we can think out of the box like these guys: Petabytes on a Budget v2.0: Revealing More Secrets
NOTE:
Backblaze designed an 'enterprise' level (135TB) storage solution cheaper than Dell or EMC. It is no surprise that Backblaze's 135TB storage pod uses software RAID.

From my resume, you can see that I started with hardware RAID, and after much thinking and debating with myself, I have come to the conclusion that software RAID 1 using Linux mdadm will be used for my new and future server setups.
 

Piezoq

Senior Member

Software RAID is a viable and very robust option on Linux, Solaris, etc. On Windows, it's more of a challenge because it doesn't port well hardware-wise.

Bottom line: if you know what you're doing, one or the other will be better / fit for purpose. But the big question is, does the individual have a good enough grasp of the nuances?

RAID can be a nightmare for the uninitiated when it comes to troubleshooting. There are pros and cons to every RAID solution: hardware, software or fake RAID.
 

CyberTron

Senior Member
On HP RAID controllers, the battery on the controller supposedly helps improve write performance. Those controllers usually have cache memory on board. On the newer generation controllers, the battery is attached to the memory module itself, so the memory can be moved to another RAID controller in the event the controller itself fails.

Supposedly, without the battery, write operations need to be physically written to the disks before an acknowledgement is sent back to the system. With the battery on the controller, the acknowledgement is sent back the moment the data is in the RAID controller's cache memory. So in the event of a power failure, the unwritten operations in the cache memory will be preserved until power is restored.
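
As an illustration of what that policy looks like from the OS side, here is a hedged sketch using LSI's MegaCli tool; the exact commands and output differ across vendors and tool versions (HP ships its own hpacucli/ssacli utilities), so treat these as indicative only:
Code:
# query-only: logical drive info includes the current WriteBack/WriteThrough cache policy
MegaCli -LDInfo -Lall -aAll
# battery / BBU state for the adapter
MegaCli -AdpBbuCmd -aAll
# switch a logical drive to write-back caching (normally only advisable with a healthy BBU)
MegaCli -LDSetProp WB -Lall -aAll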
 

wallacetan

Member

Thank you for pointing out the performance gain using Battery backed write-back cache.

I did miss it when reading up on battery-backed cache.

Further research shows:
Battery backed cache for Linux software raid (md / mdadm)?

Linux: Why software/hardware RAID?
Why prefer Linux hardware RAID?
  • Battery-backed write-back cache may improve write throughput.

Yes I do see your point on storage performance.

However, from my experience, I find that I can improve performance by more than 100x by optimising the application software, e.g. SQL queries, database indexes, caching.

Compared to that, the gains from optimising hardware are somewhat lacking:
e.g. going from 7.2k rpm to 15k rpm drives gives only about a 2x increase.

Another example:
Compare Windows 7 to Vista on the same hardware.
WinVista's performance is terrible compared to Win7's.
 

davidktw

Arch-Supremacy Member
I have managed servers running Windows NT4, 2000 and currently 2003, 2008R2.

I have used hardware RAID in Windows 2000 and 2003 servers.
All I can remember about the NT4 servers setup, was 2 beige HP Proliant tower servers.

If I remember correctly, more than 10 years ago.
Windows 2000 server A and B:
Adaptec hardware RAID card (4 IDE ports) with additional 128MB SDRAM
- RAID5 (4x IDE hard disks)

Windows 2000 server C:
Adaptec hardware RAID card (2 IDE ports)
- Hardware RAID1 (2x IDE hard drives) for system disk
Promise hardware RAID card (4 IDE ports)
- Hardware RAID5 (4x IDE hard drives)

Windows 2000 server D:
Promise hardware RAID card (8 IDE ports)
- Hardware RAID5 (8x IDE hard drives)

Windows 2003 server A:
Highpoint RocketRaid with 8 port running 2 arrays:
- Hardware RAID1 (2x 320GB SATA HDD) for system disk
- Hardware RAID5 (6x 500GB SATA HDD)

Windows 2003 server B:
- Software RAID1 (2x 320GB SATA HDD) on-board SATA for system disk
- 3Ware hardware RAID5 (8x 750GB SATA HDD)

All my hardware RAID cards do not have Battery Backup Unit (BBU) or Battery Backed Cache, as I have UPS for the servers in the office and the servers in the datacenter have redundant power supply (UPS+diesel power generators).

For the new servers running Windows 2008R2, I am using only software RAID 1 (mirror) with 6x 3TB drives on a Linux KVM virtualization host.
There are 4x software RAID 1 arrays (md0, md1, md2, md3), using LVM to allocate storage blocks to the Windows guest OSes.

There are also other Linux/BSD (Fedora, Slackware, CentOS, pfSense) servers I have managed, all using software RAID 1 and RAID 10.

I hope you are satisfied with my resume. :)

Good resume indeed :) Hence we can move on to more realistic talk rather than just talk on paper :)

Before we proceed, let's keep this in mind: how much are you paid monthly, and against one year's worth of your pay, how much do you think your impressive software RAID knowledge is worth? Suppose I get a "just knows how to follow instructions strictly, knows how to walk properly in a data centre, knows how to open up the rack and replace the hard disk without switching off power to the server, and knows how to check whether the RAID rebuild has completed" engineer; how much do you think he/she is worth versus what it takes to handle a software RAID system, which can range from Linux MD to Solaris ZFS?

Now that's not technology, that is realism. Since you have >10 years of experience handling enterprise environments, you should have a good idea of the pay grade of a good engineer versus one that just follows the working schedule. Downtime can happen at any time, any moment, 24 by 7; simple is always the winner.

Highpoint RocketRaid is no longer distributed in Singapore.
If I want to get a replacement hardware RAID card, I have to get it from Amazon or eBay.
Similarly, the more expensive 3Ware hardware RAID card is no longer available.

What do you suggest I do with the 2 servers running Highpoint RocketRaid and 3Ware?

I'm not sure how you ended up managing such subpar systems, but when we spec H/W, we look at the EOL of the hardware. When I spec for projects, I do ask how old the H/W is since release, and we consider how long the project is going to run or is expected to run. For maintenance cost, as a vendor recommendation to the customer, if they ask about it, we factor in the need to upgrade the system when it reaches EOL.

If you ask for my suggestion, I suggest you upgrade them. If you don't have the budget for it, that's another story altogether, isn't it? I think in the enterprise world we can't be too shabby at times; especially when a company is earning big bucks out of those services, it's their obligation to keep such things in place so that they can continue to earn more.

What I am trying to illustrate is:
Any modern CPU with more than 2 cores will not have any performance issues running software RAID 5 or ZFS RAID-Z.
The MIPS of a single core of a multi-core CPU is much, much greater than the minuscule MIPS of the dedicated XOR processor on a hardware RAID card.

I don't see the need to set CPU affinity for RAID operations.
Other non-RAID processes, like Apache, can always use the other CPU cores; let the OS give a core to whichever process needs it, as there are many cores to go around. My current cheap Xeon E3-1230 already has 4 cores and 8 threads. I would think the bottleneck is the read/write speed of the (non-SSD) SATA hard disks.

Fantastic; technology-wise, that's true, and I totally agree with it. Today is not like 10 years ago. Things are so fast, and so is inflation. We know the CPU, as a general purpose system, is a powerful beast; today cores are cheap, but wait a moment, H/W RAID cards are cheaper... So given your resume, I have to ask: how do Oracle, IBM and all those big companies charge for their fantastic application licenses? If I spend X dollars on an application licensed per CPU, am I silly enough to use even 1 CPU cycle of it for storage management?

I myself have designed and deployed a whole suite of antivirus + antispam engines running on a Postfix mail server serving 20+ domains. Each day we see no less than 100+ mails per minute, and it can go much higher during peak hours, like the start of the working day and towards the end of the day; somehow people like to rush emails before they knock off. I have seen a software RAID 1 system, just RAID 1, going into super high load with really lousy performance on a pair of 10K SAS hard disks. Now obviously the question is, how do I conclude that it must be the software RAID? Perhaps it would suffer just the same under H/W RAID. A higher-level question would be: perhaps we under-specced the H/W during sizing.

Luckily enough, we had another similar system that was on H/W RAID. For migration reasons we had it ready for production use, and we transferred the system over. Immediately we observed a significant decrease in the I/O pressure. I think the scenario is pretty informative: until you are really running systems that eat your I/O like a daily meal, you will not observe the value in H/W RAID.

But this brings up another question: is the whole problem just about that fair bit of performance you can get out of H/W RAID? In the latter part of your post, you discussed how better design and more efficient systems are the better option. When I discuss infra, I don't come from this background solely; my job scope is solution architecture overall, including the need to size systems and develop systems from scratch. Therefore having a well-thought-out solution, from the highest level in business to the lowest level in infra, is part of my job scope.

Unfortunately, while the solution sounds simple, it is extremely hard to achieve in real-life scenarios. For starters, not everyone holds a PhD. Which means that where we can settle the performance in machines, we at times have to use that as a substitute for higher-level algorithms. I have mentioned in another thread that today the industry has no lack of developers; what we lack is solution architects and a good abundance of superior software engineers. As such, the idea of better design is not necessarily a solution to the problem: the problem of having the resources to come up with a good design is already a problem in itself. While it may not be the best solution, it's a solution that money can buy, and it's pretty consistent if you ask me. At least it won't ask for a pay rise after 2 years.

I'm certainly not pessimistic, but it's a fact we need to look into and resolve. So how does H/W RAID come in? Back to the very early notion of how much you think you are worth.

My point is:
Can you get that battery for that model of hardware RAID card after 1-2 years?
I sure don't have a crystal ball.
A company might be good or reputable now, but might not be around in 3-5 years.
Remember Compaq, Sun, Palm? All were good, reputable companies, but they are not around now.
And who can predict if Nokia or Blackberry will be around in the next 3-5 years?

Also, you cannot stock up on batteries, as a battery's lifespan starts the moment it leaves the production line.

My answer to your question is: know the EOL of the H/W you use. We don't have crystal balls, but we have intuition and current-affairs skills. We also have contingency plans if we know some company is going down or getting a bad reputation. At a higher level in the department, I believe one important skill is to know what is going on in the industry and be prepared for it. The same risk you describe for a RAID card and its accessories, which are just small components, applies to a SAN system, servers, load balancers, switches, etc. Are those vendors not going to face the risk you mentioned? Is that a good enough reason to stop using this H/W? Obviously you can say there is no choice; it's not like there are software servers around... erm, are virtual systems also considered? Just a side thought. Hence the essence of the problem is not how long the H/W will last, but how well you know how long it will last. When procurement is made, is there anything in black and white to ensure support from the vendor? Are these terms and conditions secured during procurement? If these are not considered, what is there to talk about?


This is the first time I have heard this (failures because of ions in the air); all I have read about battery-backed cache is that it protects against loss of power.

If you believe this (failure because of ions in the air) scenario is more likely than a plain power failure, then yes, you should invest in battery-backed cache.

However, I believe otherwise; hardware RAID product marketing materials only highlight that battery-backed cache protects against normal power failure.

RAID - Wikipedia, the free encyclopedia

Well, since it's your first time, count yourself lucky that I'm making it known to you; it's not in some school book Q&A. I know the fellow pretty well from doing a project previously at one of the ISPs in SG, a couple of years back. The problem was too much iron dust and too many ions in the air, causing electrical shocks to the electrical components in the servers in the data centre. Every now and then there would be a couple of cases of servers rebooting out of nowhere. The problem was solved only after they performed a thorough cleanup of the data centre, with a good flush of the air in it, which eventually stopped the outbreak.

The reason for bringing this up is that bit flips, while not common, occur for various reasons, and environmental causes are part of it. So electrical input is not the entire equation. A UPS protects the system's electrical consistency, but it does not protect against kernel panics, nor against certain subsystems breaking down from within. Battery-backed cache is a protection mechanism for a performance feature, namely the write-back cache between the RAID system and the HDDs: when the system is brought back online, the cached writes are replayed to the HDDs for data consistency. Now, this does not mean a UPS is in any way unnecessary, and likewise for the BBU; what they protect against is different. If you really need documentation, read up on the Red Hat recommendation that filesystem write barriers are not required if a BBU is available. If it's a write-through cache, we don't even bother about a BBU. But even so, one needs to understand that just because the RAID reports a successful write doesn't mean it's on the platter. While we are here dismissing a technology, do you know exactly which cache we are talking about? We have the kernel file buffer, we have the RAID cache, and we have the disk cache. Which feature protects against which?
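
For reference, a hedged illustration of the Red Hat guidance mentioned above, using the classic ext4/XFS mount options ('nobarrier' should only be used when a battery- or flash-backed write cache really is present, and newer kernels deprecate the option; device and mount-point names are placeholders):
Code:
# default: write barriers / cache flushes enabled (safe without a BBU)
mount -o barrier=1 /dev/md0 /srv/data
# with a battery-backed controller cache, barriers are sometimes disabled for speed
mount -o nobarrier /dev/sdb1 /srv/db
# the per-drive write cache can also be inspected or turned off
hdparm -W /dev/sda     # query the drive's write-cache state
hdparm -W0 /dev/sda    # disable it (trades performance for safety)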


No one can predict a hardware RAID card failure, or whether a replacement hardware RAID card will still be available in 3-5 years.
What if the hardware RAID card fails before the planned migration in 3-5 years?

Even if you plan a migration in 3 years, you cannot predict that the hardware RAID card will still be available within those 3 years.

I don't assume hardware runs forever; I have replaced countless hard disks, PSUs, etc. I have migrated server hardware across NT4, 2000, 2003 and 2008R2 over the past >10 years.

BTW, the worst type of hardware RAID card is the kind with a tiny fan on the card: if the fan fails you will not know about it, and the card may then fail from overheating, because it was designed to be used with a fan. Also, it is impossible to get a replacement fan.

If your H/W fails before your migration plan, then you should replace the H/W card. Is that a hard decision to make? If you are testing me on what happens when the H/W card is no longer available, my question back to you is: why did your migration plan not factor in the EOL of the H/W? Is it because someone screwed up somewhere? What are the options for your migration? Now let me ask you: what if the server fails before your migration plan and no more servers are available to migrate to, and, oh yes, you are using Solaris and it just stopped production yesterday? It's a weird question, but how come you were not alerted that the company is in crisis, or why were you not aware that you have a pretty legacy RAID card that requires migration, and why was this not brought up to the authority?

If your H/W card really is not available, can't you DD it over? The world doesn't end, right? Something has to be done if it wasn't handled properly right from the start. That's easy for me to say; I have DD-ed a whole damn HDD before.
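
For what it's worth, a hedged sketch of that "DD it over" escape hatch, a raw block copy from an old member disk to a new one (device names are placeholders; getting them wrong destroys data, and status=progress needs a reasonably recent GNU coreutils):
Code:
# raw-copy the old disk to its replacement, padding over unreadable sectors
dd if=/dev/sdc of=/dev/sdd bs=1M conv=noerror,sync status=progress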

I don't subscribe to 'enterprise' level hardware. To me, it is just another way hardware vendors (EMC, Sun) can sell you overpriced hardware.

Maybe you are right; they are suckers if you ask me. But reality is reality: a consumer budget just doesn't fit the price tag of the enterprise game. If you want it cheap, then we are not talking business here. I would definitely love to have a SAS HDD with a price tag of $100+, but reality says NO. Where do you think the money is?

I believe we can think out of the box like these guys: Petabytes on a Budget v2.0: Revealing More Secrets
NOTE:
Backblaze designed an 'enterprise' level (135TB) storage solution cheaper than Dell or EMC. It is no surprise that Backblaze's 135TB storage pod uses software RAID.

From my resume, you can see that I started with hardware RAID, and after much thinking and debating with myself, I have come to the conclusion that software RAID 1 using Linux mdadm will be used for my new and future server setups.

Well, like you, I often have to think out of the box, but that doesn't permit me to dismiss certain technologies. Some go obsolete, some become irrelevant with the progress of technology. But since we are living now, we have to look at the real situation. What I have described about human resources are real issues we face. When you discuss enterprise, money and technology, you don't leave any of them out of the loop; put them together and you will see the relevance.

If I tell you that, any day, I would replace you, yes: a couple of H/W RAID cards versus your knowledge of Linux MD or LVM RAID. It sounds ugly, but you are more expensive than a couple of H/W RAID cards. I believe the cost of migrating at half the EOL of each piece of H/W is still cheaper than paying your salary, on the assumption that you are costly and have higher expectations for your pay.

Conclusion: software RAID is good, and hardware RAID has its worth. If we just discuss one aspect blindly and dismiss the other, you are the winner. So what does that entail? You can carry on worshipping S/W RAID, but you won't want to admit that the cost is not just about H/W, right? If we embrace the whole equation together, we can start to see how an old, proven technology comes in useful at times. When systems were young and the computational performance was not there, H/W RAID came to the rescue. As processors improved, H/W RAID lost its edge in that area, but it holds its own in other departments, mainly consistency, low-tech skill sets and efficiency. We have better options, but is that all? Do better options not bring new problems? If they do, what are our options? Perhaps proven and reliable is what it is, or perhaps breaking out of the box for better options far ahead? I'm not exactly optimistic about resource competency, if you ask me :) It's a hard nut to crack.
 

wallacetan

Member
Good resume indeed :)
Thanks!

Suppose I get a "just knows how to follow instructions strictly, knows how to walk properly in a data centre, knows how to open up the rack and replace the hard disk without switching off power to the server, and knows how to check whether the RAID rebuild has completed" engineer; how much do you think he/she is worth versus what it takes to handle a software RAID system, which can range from Linux MD to Solaris ZFS?

Now that's not technology, that is realism. Since you have >10 years of experience handling enterprise environments, you should have a good idea of the pay grade of a good engineer versus one that just follows the working schedule. Downtime can happen at any time, any moment, 24 by 7; simple is always the winner.

Yes, I feel your pain when I work with IT engineers who need training wheels, which is common in Singapore.
So the practical solution is to outsource and spend money to solve the problem; in this case, use hardware RAID instead of software RAID because of incompetent IT engineers.
This just keeps feeding the incompetence problem with more reasons to stay incompetent.

I would like to believe otherwise: Singapore's IT engineers can and should move out of their comfort zone and learn another 'new' technology (software RAID), at least to put a stop to this endless self-feeding cycle of incompetency.

Besides, managing a software RAID array on Linux is really not that hard. There are fewer than 10 commands to know, and there is plenty of help on the internet for Linux software RAID. Also, there is Google search.

how to check whether the RAID rebuild has completed
Code:
cat /proc/mdstat
# or, to keep watching the rebuild progress
watch cat /proc/mdstat
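
And to back up the "fewer than 10 commands" claim, a hedged cheat-sheet of the day-to-day mdadm operations (device names are placeholders):
Code:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1   # build an array
mdadm --detail /dev/md0                                            # health and state
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1                 # drop a failing disk
mdadm /dev/md0 --add /dev/sdf1                                     # add the replacement; rebuild starts
mdadm --examine /dev/sdb1                                          # read the RAID metadata on one disk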

I'm not sure how you ended up managing such subpar systems, but when we spec H/W, we look at the EOL of the hardware. When I spec for projects, I do ask how old the H/W is since release, and we consider how long the project is going to run or is expected to run. For maintenance cost, as a vendor recommendation to the customer, if they ask about it, we factor in the need to upgrade the system when it reaches EOL.

It looks good on paper: planned EOL by the hardware manufacturer. But what happens when the distributor stops carrying the product line in Singapore?
What if, due to low sales figures/demand, the hardware RAID card distributor terminates this product/model?
You then need to get it from Amazon or eBay; can you wait for the shipment?

My preference for software RAID is that it uses commodity hardware that is widely available, all the time, in SLS.

With commodity hardware, you don't depend on planned EOL by a specific hardware manufacturer, because there is always another compatible product from another manufacturer, i.e. no vendor lock-in.

If you ask for my suggestion, I suggest you upgrade them. If you don't have the budget for it, that's another story altogether, isn't it? I think in the enterprise world we can't be too shabby at times; especially when a company is earning big bucks out of those services, it's their obligation to keep such things in place so that they can continue to earn more.

I work in an SME.
I have seen some clients' 'enterprise' solutions, and in my opinion they always overpay for oversized hardware, because clients will always believe the sales rep for 'enterprise' hardware.
I have not seen any undersized or right-sized 'enterprise' solution in use by clients.
Clients like to hear things like "you need to cater for future business growth", "you need to future-proof your hardware" or "what will your boss think if the solution does not perform".
Most of these sales reps will offer these comments without even looking at the numbers, the current usage, or calculating the disk IOPS requirements.

Fantastic; technology-wise, that's true, and I totally agree with it. Today is not like 10 years ago. Things are so fast, and so is inflation. We know the CPU, as a general purpose system, is a powerful beast; today cores are cheap, but wait a moment, H/W RAID cards are cheaper... So given your resume, I have to ask: how do Oracle, IBM and all those big companies charge for their fantastic application licenses? If I spend X dollars on an application licensed per CPU, am I silly enough to use even 1 CPU cycle of it for storage management?

Since we are digressing to 'enterprise' use of hardware RAID (Oracle and IBM application licenses):
I can only imagine that the Oracle DB server uses external block storage, connected by FC or iSCSI to a SAN supplied by EMC.
For the enterprise, an EMC SAN storage solution should come with a support contract and SLA.
In that case, I don't care whether it is hardware or software RAID, because the EMC support contract will take care of it.

However, in the scenario we are discussing here, we are responsible for the support of the hardware RAID card ourselves, so if we cannot get a replacement card for whatever reason, we are screwed.

I myself have designed and deployed a whole suite of antivirus + antispam engines running on a Postfix mail server serving 20+ domains. Each day we see no less than 100+ mails per minute, and it can go much higher during peak hours, like the start of the working day and towards the end of the day; somehow people like to rush emails before they knock off. I have seen a software RAID 1 system, just RAID 1, going into super high load with really lousy performance on a pair of 10K SAS hard disks. Now obviously the question is, how do I conclude that it must be the software RAID? Perhaps it would suffer just the same under H/W RAID. A higher-level question would be: perhaps we under-specced the H/W during sizing.

Great that you have set up and managed mail hosting before.

I have set up mail servers using Exim+ClamAV+SpamAssassin+Dovecot and Exim+ClamAV+SpamAssassin+DBMail.
Yes, I do see disk IO issues on a software RAID 1 setup.
But hardware RAID is not the solution to this problem.

If you look at which processes spike when mail volumes are higher, I would bet that it is most likely SpamAssassin (SA). After the SMTP data is received by the MTA (Exim), and before it gets routed and saved to the target mailbox, it gets written to a spool file, where SA inspects the file and gives it a spam rating. This is where I believe the bottleneck is.

There are a few (software) MTA tricks you can use to solve this without hardware RAID; if you are interested, PM me or start another thread.

Luckily enough, we had another similar system that was on H/W RAID. For migration reasons we had it ready for production use, and we transferred the system over. Immediately we observed a significant decrease in the I/O pressure. I think the scenario is pretty informative: until you are really running systems that eat your I/O like a daily meal, you will not observe the value in H/W RAID.

But this brings up another question: is the whole problem just about that fair bit of performance you can get out of H/W RAID? In the latter part of your post, you discussed how better design and more efficient systems are the better option. When I discuss infra, I don't come from this background solely; my job scope is solution architecture overall, including the need to size systems and develop systems from scratch. Therefore having a well-thought-out solution, from the highest level in business to the lowest level in infra, is part of my job scope.

Most System Integrators (SIs) in Singapore think of systems the way you describe: throw high-end, expensive RAM, CPUs, hard disks and a hardware RAID card into one expensive box. They always see it as one box. And most SIs are unable to correctly identify bottlenecks; they always jump to the conclusion that more 'enterprise' hardware is required.

If you think beyond the one-box system design, like Google, Backblaze, Apple, etc.
Heard of GoogleFS, Google's distributed file system?
You don't need ONE pricey box; you should be using multiple cheap boxes running commodity hardware, not some exotic hardware for which you need to call the vendor for a quotation and wait 1-2 weeks for the shipment.

Your first example:
Why did you suggest running storage services and Oracle DB on the same box?

And for your second example, your system design:
Did you split the SMTP servers (Exim/Postfix/Sendmail) and the IMAP/POP3 services onto 2 boxes?
Did you split anti-spam and anti-virus filtering onto 2 more boxes?

Are you running SMTP, POP3, IMAP and the database on the same box?
Even so, did you run these different services on different physical hard drives?
For example:
- 6x 1TB HDD (sda, sdb, sdc, sdd, sde, sdf)
- 3x software RAID 1 (mirror) arrays
md0 (sda+sdb)
md1 (sdc+sdd)
md2 (sde+sdf)
- md0 used for SMTP anti-spam and anti-virus spool directory
- md1 used for mail directories (Dovecot mailboxes)
- md2 used for database files

These are just some of the system optimizations that can be done without hardware RAID, and they give better performance gains than switching to hardware RAID.

Unfortunately while the solution is so simple, it is extremely hard to achieve in real life scenario. For starter, not everyone holds a PhD. Which means, where we can settle the performance in machines, we at times have to use that as a solution to higher level algorithms. I have mention in another thread, today industry has no lack of developers, we have lack of solution architects and a good abundance of superior software engineers. As such, the idea of better design not necessarily is a solution to the problem. The problem of having resources to come up with good design is already a problem itself. While it may not be the best solution, it's a solution that money can buy and its pretty consistent if you ask me. At least it won't ask for a pay rise after 2 years.

I'm certainly not being pessimistic, but it's a fact we need to look into and resolve. So where does H/W Raid come in? Back to the very early notion of how much you think you are worth.

Better or faster hardware can never be the solution for terribly slow software.
Best example: Windows Vista performance VS Windows XP/7 on the same hardware.

My experience:
Performance gains from better hardware CPU:
Xeon E3-1220 3.1 GHz ($189)
Xeon E3-1290 3.6 GHz ($885)
Speed (GHz) increased by 3.6/3.1=16%
Price increased by 885/189=368%

Performance gains using software methods, add index to database table for slow queries:
Database select query time for 1 record before adding index: 11 seconds
Database select query time for 1 record after adding index: 0.01 seconds
Performance increased by 11/0.01 = 1,100x (about 110,000%)

As you can see from my experience, no matter what kind of expensive hardware you throw at my database server, there is no way it can match the performance gains from a simple software optimization. This is what most SI fail to understand.
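
For completeness, a minimal sketch of the kind of software fix meant here, using a hypothetical orders table and customer_id column on MySQL (names are illustrative only, not from any real system):

Code:
# See what plan the optimiser currently uses for the slow query
mysql mydb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"

# Add an index on the filtered column so the query no longer needs a full table scan
mysql mydb -e "CREATE INDEX idx_orders_customer_id ON orders (customer_id);"

# Re-check: the plan should now use the new index instead of scanning every row
mysql mydb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"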

My answer to your question comes down to your knowledge of the EOL of the H/W used. None of us have a crystal ball, but we have intuition and we keep up with current affairs. We also have contingency plans if we know a company is going down or gaining a bad reputation. At a higher level in the department, I believe one important skill is knowing what is going on in the industry and being prepared for it. You describe this risk for RAID cards and their accessories, which are just small components, but how about SAN systems? Servers, load-balancers, switches and so on? Are those vendors not facing the same risk you mentioned? Is that therefore a good enough reason to stop using that H/W? Obviously you can say there is no choice; it's not as if there are software servers around... er, do virtual systems count... just a side thought. Hence the essence of the problem is not how long the H/W will last, but how well you know how long it will last. When procurement is made, is there anything in black and white to ensure support from the vendor? Are those terms and conditions secured during procurement? If these are not considered, what is there to talk about?

If I am buying the storage solution with a support contract and SLA, I don't care if it is hardware or software RAID. Just as long as it works.

If I am buying the hardware RAID card, I would reconsider.
And I think this is what the TS is asking.

Well, since it's your first time hearing it, count yourself lucky that I'm making it known to you. It's not the kind of thing you find in school textbook Q&A. I know the fellow pretty well from a project a couple of years back at one of the ISPs in SG. The problem was too much iron dust and ions in the air, causing electrical shorts in the components of the servers in the data centre. Every now and then there would be a couple of cases of servers rebooting out of nowhere. The problem was solved only after they performed a thorough cleanup of the data centre, with a good flush of the air in it, which eventually stopped the outbreak.

I thought data center air-conditioning units have dust filters or additional HEPA filters. I don't know enough about this to comment further.

The reason this is brought up is that bit flips, while not common, do occur for various reasons, and environmental factors are part of it. So electrical input is not the whole equation. A UPS protects the electrical consistency of the system, but it does not protect against kernel panics, nor against certain subsystems breaking down from within. Battery-backed cache is a protection mechanism for a performance feature, namely the write-back cache between the RAID controller and the HDDs: when the system is brought back online, the cached writes are replayed onto the HDDs for data consistency. This does not mean a UPS is in any way unnecessary, and likewise for the BBU; their usage and what they protect against are different. If you really need documentation, read up the Red Hat recommendation that filesystem write barriers are not required when a BBU is available. With a write-through cache we don't even bother with a BBU. But even so, one needs to understand that just because the RAID controller reports a successful write doesn't mean it is on the platter. While we are busy dismissing a technology, do you know exactly which cache we are talking about? We have the kernel file buffer, the RAID cache and the disk cache. Which feature protects against which?

Yes, I do know of disk read/write caches, but how about the performance gains? More than 2x disk IO performance?
Have you looked at 100x performance optimisation before you look at 2x disk optimisation?
Don't just optimize one box, think out-of-one-box, there are better ways as highlighted above.
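
Side note on the write barriers mentioned above: on ext4 this is just a mount option, and the usual advice is to leave barriers on unless the write cache really is battery backed. A minimal sketch, assuming an ext4 filesystem mounted at /var/vmail behind a BBU-protected write-back cache (paths and devices are illustrative):

Code:
# Default and safe with volatile write caches: barriers enabled
mount -o remount,barrier=1 /var/vmail

# Only if the controller cache is battery/flash backed:
# barriers can be turned off for a bit more write throughput
mount -o remount,barrier=0 /var/vmail

# Check or disable the drive's own volatile write cache
hdparm -W /dev/sda
hdparm -W0 /dev/sda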

If your H/W fails before your planned migration, then you should replace the H/W card. Is that a hard decision to make? If you are testing me with "what if the H/W card is no longer available", my question back to you is: why does your migration plan not factor in the EOL of the H/W? Is it because someone screwed up somewhere? What are your migration options? Now let me ask you: what if the server fails before your planned migration, no more servers are available to migrate to, and, oh yes, you are using Solaris and it just went out of production yesterday? It's a weird question, but how come you were not alerted that the company was in crisis, or why were you not aware that you had a pretty legacy RAID card requiring migration, and why was this not raised to the authorities?

Well, as they say, hindsight is always 20/20; how many here knew that Sun Solaris was going down? How many IT engineers made a bundle trading Sun/Palm/Compaq shares because they knew this inside news?
Do you know if EMC is in crisis? How about 3ware, are they also in crisis?
If you do, you can always bet on it by trading their shares or stocking up on 3ware hardware RAID cards.

If your H/W card really is not available, can't you dd it over? The world doesn't end, right? Something has to be done if it wasn't handled properly right from the start. Easy for me to say, since I have dd'ed a whole goddamn HDD before.

Yes, but there will be unacceptable down time.

Maybe you are right, they are suckers if you ask me. But reality is reality. A consumer budget just doesn't fit the price tags of the enterprise game. If you want it cheap, then we are not talking business here. I would definitely love a SAS HDD with a price tag of 100+, but reality says NO. Where do you think the money is?

Besides being suckers, when we buy technology, we will always be users, our level of technical competency will never improve.

Well, like you, I often have to think out of the box, but that doesn't permit me to dismiss certain technologies. Some go obsolete, some become irrelevant as technology progresses. But since we are living in the present, we have to look at the real situation. What I have described about human resources are real issues we face. When you discuss enterprise, money and technology, you don't leave anyone out of the loop. Put them together and you will see the relevance.

If I tell you that, any day, I would replace you, yes, a couple of H/W cards versus your knowledge of Linux MD or LVM RAID. It sounds ugly, but you are more expensive than a couple of H/W Raid cards. I believe the cost of migrating at half the EOL of each piece of H/W is still cheaper than paying your salary, on the assumption that you are costly and have higher expectations for your pay.

Conclusion: software RAID is good, and hardware RAID has its worth. If we discuss blindly on one aspect and dismiss the other, then sure, you are the winner. So what does it entail? You can carry on worshipping S/W Raid, but you won't want to admit that the cost is not just about H/W, right? If we embrace the whole equation together, we can start to see how an old, proven technology is still useful at times. When systems were young and the computing performance wasn't there, H/W Raid came to the rescue. As processors improved, H/W Raid lost its edge in that area, but it found a footing in other departments, mainly consistency, lower skillset requirements and efficiency. We have better options, but is that all? Don't better options bring new problems? If they do, what are our options then? Perhaps proven and reliable is what it is, or perhaps breaking out of the box for better options far ahead? I'm not exactly optimistic about the competency of available resources, if you ask me :) It's a hard nut to crack.

I am here to share and learn.
I learn more when my beliefs or ideas are challenged.
Good to hear your passion for your work and technology here, sometimes hard to find people like this.

This reminds me of another good enterprise topic, "Enterprise Backup: Tape is dead", but that is for another thread.
 
Last edited:

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,391
Reaction score
1,180
Finally just deployed the snapshot scripts on Amazon; time to get back to reality.

Yes, I feel your pain when I work with IT engineers who need training wheels, which is common in Singapore.
So the practical solution is to outsource, spend money to solve the problem, like in this case, use hardware RAID instead of software RAID due to incompetent IT engineers.
This will continue to feed the incompetence problem with more reasons to be more incompetent.

I would like to believe otherwise, for Singapore's IT engineers can and should move out of their comfort zone and learn another 'new' technology (software RAID). At least to put a stop to this endless self feeding cycle of incompetency.

Besides, managing a software RAID array on Linux is really not that hard. There are less than 10 commands to know, and there is plenty of help on the internet for Linux software RAID. Also, there is always Google search.


Code:
# Show the current state of all md (software RAID) arrays
cat /proc/mdstat

# Or keep an eye on it while an array rebuilds
watch cat /proc/mdstat
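
And a rough sketch of the handful of mdadm commands that cover day-to-day management (device names are examples; the config file path varies by distro, e.g. /etc/mdadm.conf on RHEL):

Code:
# Detailed status of one array
mdadm --detail /dev/md0

# Mark a failed member, remove it, then add the replacement disk
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdc1

# Record the array layout so it is assembled on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf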

Well, I guess there isn't much to say here. Are we talking about the future? If you are, we are on the same track. The future requires better engineers. The future needs better architects. I am part of the company team training our developers on their Unix skill sets, so seriously, your concerns are not new to me. I have high expectations for the noobs to be trained in system clustering, SAN storage management, system monitoring skillsets, fault-tolerant system design, etc. However, that is on the wish list.

In your whole response, you totally left out my crucial question for you, how much do you think your pay worth ? Seems like you answer me load of other things and speculation, but left out the more important question of all. Your renumeration. You want a bunch of good engineers in the vendors side doing maintenance, you want a proficient group of engineers in the data centre capable of handling projects of different nature, configuration done differently, different operating systems ranging from Windows, Solaris, HP-UX, Linux, AIX and so forth... and how much are you going to pay them ? peanuts ? And they will do all these for you for peanuts ? Seems like it's not the world that we are living in, isn't it ?

How much do you think the mandays will be bloated if the company doing the project is using good engineers with these skill sets ? How much is the data centre, is going to pay for engineers which are good enough to handle your S/W Raid configuration ? At the end of the day, which I have emphasize from time to time, it's cost and skillsets issue. Your ideal of training engineers with such skillsets will indeed give raise to the overall IT standards, not only in Singapore. But are you ready for them to ask you for pay rise, because now they feel they have better skillsets and worth more ? Am I being purely sinister here, or this is the world we are living it and somehow it's not taken into account when doing the sum ?

Looks good on paper, planned EOL by the hardware manufacturer, but what happens when the distributor stops the product line in Singapore?
What if due to the low sales figures/demand, the hardware RAID card distributor terminates this product/model?
You need to get it from Amazon or ebay, can you wait for the shipment?

My preference for software RAID is because it uses commodity hardware that is widely available, all the time, in SLS.

With commodity hardware, you don't need planned EOL by a specific hardware manufacturer, because there is always another compatible product from another manufacturer. i.e. No vendor lock-in.

It not only looks good on paper, it works. If not, we would already be in chaos by now. Your S/W Raid hadn't come very far back when H/W Raid became the standard for the enterprise industry. Since there is so much speculation about what happens if the RAID card distributor goes bust the next day, I would be more concerned that your company goes bust the next day and cannot uphold the SLA for the project it tendered. Does that mean clients shouldn't come to you? After all, it's not an easy task to take over another company's codebase; it can be a few orders of magnitude more convoluted than the RAID system we are talking about here.

While your concerns are valid, you seem to have blown that risk way out of proportion. If you are so concerned about a firm's reputation that using commodity items is the way to go, then it's definitely an option. However, that does not mean anything is wrong with H/W Raid. It's a preference, it's a concern, and you adjust as you wish. I have deployments of more than 5 years and the H/W is still supported. So while your concern is valid, it's not good enough to justify one over the other.

I work in an SME.
I have seen some clients' 'enterprise' solutions and, in my opinion, they always overpay for oversized hardware, because clients will always believe the sales rep for 'enterprise' hardware.
I have not seen any undersized or right-sized 'enterprise' solution in use by a client.
Clients like to hear things like "you need to cater for future business growth", "you need to future-proof your hardware" or "what will your boss think if the solution does not perform".
Most of these sales reps offer these comments without even looking at the numbers, the current usage, or calculating the disk IOPS requirements (see the sketch below for the kind of numbers I mean).
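
A minimal sketch of looking at the numbers first: measure what the current workload actually does before anyone talks about sizing.

Code:
# Extended disk statistics, refreshed every 5 seconds;
# r/s + w/s is roughly the IOPS the workload really generates today
iostat -x 5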

I can't comment on what you have seen. I can agree there are sales pitches that go way over the top, but those are particular cases. Likewise, I have cases where it was done properly too. Somehow risk can be calculated in dollars and cents as well. Contracts are there to protect; they cover risk and they are not free. They are part and parcel of the project cost. If you are only looking at the merchandise, then you are really undercounting.

Since we are digressing to 'enterprise' use of hardware RAID (Oracle & IBM applications licenses).
I can only imagine Oracle DB's server uses an external block storage, connected by FC or iSCSI to a SAN storage supplied by EMC?
For the enterprise, an EMC SAN storage solution should come with a support contract and SLA.
In this case, I don't care whether if it is hardware or software RAID, cos EMC support contract will take care of it.

Not entirely true for all projects. Not all projects are of such a scale that they use external SAN storage. SAN for Oracle is often done for RAC, but besides RAC there are other configurations which don't require SAN or external storage; local storage on both servers, run in an Active/Passive manner, suffices. For such systems we don't use S/W Raid. At the end of the day, I'm not sure we are even on the same page. If I start going into system design, I can easily justify the need for H/W Raid versus S/W Raid in a fully H/A architecture.

Seriously, I'm not sure how it is on your side. If I were to propose software RAID on some high-tier project, the client would tell you to go fly a kite. You are not even in the league when you talk about S/W Raid. Now that's the real situation, but that's not a judgement on the technology, right? Will I go and debate this with the client? There is only one answer from them: it's not compliant with their FM operations and they are not comfortable. No need to talk further. Want to be in the game, join in. I have also encountered clients who dismissed an Active/Active setup via network LB and insisted on HP Veritas Clustering using an Active/Passive approach. Their reason? Insufficient knowledge in this area among their folks. Can we blame them? Sure you can. You can argue about how they should be more proficient and so forth. But at the end of the day, it comes back to
1) Can I get sufficient resources?
2) Even if you are willing to pay, such proficient resources just aren't coming in.
Are we going to ignore these obstacles right in front of our eyes and just keep hoping for the best? If you are the trainer, what will you say if the people just don't get it, or are not at that level?

However, in the case we are discussing here, we are responsible for the support of the hardware RAID card, so if we cannot get a replacement card for whatever reason, we are screwed.

Yes, I can see where you are coming from, and therefore it seems more viable for you to go for S/W Raid. That's a valid reason, that's ok. But besides going to S/W Raid or raising the resource cost, could other avenues be available? Have such avenues been explored? If they have been, and your decision is still S/W Raid, I don't see the problem. Just because it's the case for you doesn't mean H/W Raid is not working elsewhere with the right attitude and the right procedures.

Great that you have setup and managed mail hosting before.

I have setup mail servers using Exim+ClamAV+SpamAssassin+Dovecot and Exim+ClamAV+SpamAssassin+DBmail.
Yes, I do see disk IO issues on software RAID 1 setup.
But hardware RAID is not the solution to this problem.

If you look at the processes spike when mail volumes are higher, I would bet that it would most likely be SpamAssassin (SA). After SMTP data is received by the MTA (Exim), before it gets routed and saved to the target mailbox, it gets written to a spool file, where SA inspects the file and gives it a spam rating. This is where I believe the bottleneck is.

Don't worry about my setup, I basically simplified the situation when explaining. My setup is way more complicated than just having one antivirus system; I have a total of 5 in the whole architecture. They are all wired up using Perl servers running in multiple stages under the Postfix pipeline. The whole system touched pretty much every aspect of the Linux system when I architected it a few years back.

Most System Integrators(SI) in Singapore think of systems the way you describe it. Throw high-end and expensive RAM, CPU, Hard disks and hardware RAID card into one expensive box. They always see it as one box. And most SI are unable to correctly identify bottlenecks, and always jump to conclusion and propose more 'enterprise' hardware is required.

Here I think you are missing the point. I have not advocated that throwing in money is the way to go, but it is surely one way to go after you measure the options available. Right from the start I did not say S/W Raid is not good; it has its pros in flexibility, and its cons in terms of performance and skillset requirements. Just because you have the skillset doesn't necessarily mean your peers have it. When you leave the company, who is going to take over? When your project is launched at the client, who is going to take over the replacement of the H/W? Sometimes it's the vendor, sometimes the client has an FM team capable of doing it. If they are doing it, are you going to teach them how to manage S/W Raid? When one of their less competent engineers screws up the system during a S/W Raid recovery, is your company going to come in at no cost? At the end of the day, who pays for that? Now, H/W Raid can also screw up, but be frank, which is easier? Most engineers in the data centre today are pretty proficient at changing HDDs, but you can't say the same for logging into your system, typing all those arcane commands, knowing how they can go wrong and, when they do, how to rectify it. Do you want to factor the required education into your project cost? Why do you think they are going to learn it from you?

We can talk all day about how the world should progress and how engineers should be better educated, but talk won't make things come true.

If you think out-of-one-box system design, like Google, Backblaze, Apple, etc.
Heard of GoogleFS - Google's distributed file system?
You don't need ONE pricey box, you should be using multiple cheap boxes running commodity hardware, not some exotic hardware where you need to call vendor to get a quotation and wait 1-2 weeks for the shipment.

Why are you still here? Why aren't you at Google, since you adore their technologies? :) Have you tried RHEL Clustering? Have you architected such systems before? Have you built a clustering filesystem before? I have. I did such a project, clustering 7 RHEL servers using RHCS, and it even had a DRBD filesystem in it. I can tell you it's way beyond the skillset of most engineers. Such large-scale clustering filesystems are not simple beasts. They are way more complicated than S/W Raid...

Thinking out of the box is my everyday job. Now that I am involved in the Amazon cloud, I can tell you it's way more complicated than what most would envision about cloud. Cloud isn't simpler than your physical infrastructure.

Seriously we can put your suggestion for system optimisation aside. I appreciate them. I have been doing system design for years, so it's not strange to me on your suggestion on data sharding and distributed concepts no matter it's on software layer or hardware layer.

Better or faster hardware can never be the solution for terribly slow software.
Best example: Windows Vista performance VS Windows XP/7 on the same hardware.

My experience:
Performance gains from better hardware CPU:
Xeon E3-1220 3.1 GHz ($189)
Xeon E3-1290 3.6 GHz ($885)
Speed (GHz) increased by 3.6/3.1=16%
Price increased by 885/189=368%

Performance gains using software methods, add index to database table for slow queries:
Database select query time for 1 record before adding index: 11 seconds
Database select query time for 1 record after adding index: 0.01 seconds
Performance increased by 11/0.01 = 1,100x (about 110,000%)

As you can see from my experience, no matter what kind of expensive hardware you throw at my database server, there is no way it can match the performance gains from a simple software optimization. This is what most SI fail to understand.

It's good that you know what you are dealing with and how to quantify your results. But it's a pure overstatement to use this to justify the claim that "most SI fail to understand". Is performance all about how an SI specs and designs? Well, I won't go further, but your results hardly even quantify the business and resource cost.

If I am buying the storage solution with a support contract and SLA, I don't care if it is hardware or software RAID. Just as long as it works.

If I am buying the hardware RAID card, I would reconsider.
And I think this is what the TS is asking.

Well I don't know how your company procure then :) We do in bulk sets.

Yes, I do know of disk read/write caches, but how about the performance gains? More than 2x disk IO performance?
Have you looked at 100x performance optimisation before you look at 2x disk optimisation?
Don't just optimize one box, think out-of-one-box, there are better ways as highlighted above.

Well, as they say, hindsight is always 20/20; how many here knew that Sun Solaris was going down? How many IT engineers made a bundle trading Sun/Palm/Compaq shares because they knew this inside news?
Do you know if EMC is in crisis? How about 3ware, are they also in crisis?
If you do, you can always bet on it by trading their shares or stocking up on 3ware hardware RAID cards.

What are we at here ? Lack of industry current affairs ? Nevermind. you are right. Maybe we all don't know what is going on around us.

Yes, but there will be unacceptable down time.

I can do it without major downtime, you want to try me ? :) Of course, when I say DD, it's more than just DD. But how much your company want to pay me for my knowledge in this area ? That's what I'm trying to quantify after all, isn't it ? Of course, I'm no god, I can't do everything. Somethings just need to do it the hard way if there are no options, but seriously how often is that ?

Besides being suckers, when we buy technology, we will always be users, our level of technical competency will never improve.

What role are you in when you discuss about it ? Business or education ?

I am here to share and learn.
I learn more when my beliefs or ideas are challenged.
Good to hear your passion for your work and technology here, sometimes hard to find people like this.

This reminds me of another good enterprise topic, "Enterprise Backup: Tape is dead", but that is for another thread.

I'm all for passionate talk. Let's just say that at the end of the day this discussion is not going to change the world. You would like to use S/W Raid, that's ok. You can even architect some extremely scalable system with HadoopFS, LustreFS, GFS, GoogleFS, Ceph, Gluster, etc... They are not your normal replacement for redundant storage, so don't oversimplify it. They solve your scalability issues with other costs involved. I have been following these clustering solutions as part of my project designs, but the cost of these systems is not necessarily cheaper; they are scalable. When you have clustered components you also need special fencing for different parts of the subcomponents: server fencing, storage fencing, network fencing, depending on what your shared resources are. These aren't cheap. But on really large-scale projects they can be a lot more cost effective than a single or a pair of high-performance storage nodes. Again: cost and skillsets. It seems we can't get away from these.

Just because there are better technologies out there doesn't mean they are suitable for every project. Projects have different sizes, different stakeholders and different natures. Not all of them fit the model of how Google or Facebook does it. Even if one does, you need to consider the skillset of the folks who have to pull it off, as well as the operations teams and the infra teams, and how many changes the existing infrastructure needs to adapt to it. Well, I can go on, but that's the point; bottom line, it's not simple.

It's all good really. When the time comes, H/W Raid loses its attractiveness, it shall be let go. No technology can escape from this. From the first day I step into this industry, I have already recognize the only thing that doesn't change, its Change. Meanwhile what I differ from you, is I see the value in H/W Raid in some other departments, why you decide to jump ship. Different perspective. Doesn't matter, if it works for you, that's good for you. Meanwhile I still see some areas where they don't adds up, and I'm not going to dismiss something well proven to date.
 
Last edited:

wallacetan

Member
Joined
May 21, 2000
Messages
129
Reaction score
0
In your whole response, you totally left out my crucial question for you, how much do you think your pay worth ? Seems like you answer me load of other things and speculation, but left out the more important question of all. Your renumeration. You want a bunch of good engineers in the vendors side doing maintenance, you want a proficient group of engineers in the data centre capable of handling projects of different nature, configuration done differently, different operating systems ranging from Windows, Solaris, HP-UX, Linux, AIX and so forth... and how much are you going to pay them ? peanuts ? And they will do all these for you for peanuts ? Seems like it's not the world that we are living in, isn't it ?

How much do you think the mandays will be bloated if the company doing the project is using good engineers with these skill sets ? How much is the data centre, is going to pay for engineers which are good enough to handle your S/W Raid configuration ? At the end of the day, which I have emphasize from time to time, it's cost and skillsets issue. Your ideal of training engineers with such skillsets will indeed give raise to the overall IT standards, not only in Singapore. But are you ready for them to ask you for pay rise, because now they feel they have better skillsets and worth more ? Am I being purely sinister here, or this is the world we are living it and somehow it's not taken into account when doing the sum ?

I am unaware that my remuneration is important to this discussion, also I am uncomfortable to share it in a public forum.

If you need an IT engineers salary to make a point or calculation, why not use your own? Or how much you would offer me to work for your company?

Also, you did not ask for the Thread Starter's salary before to determine if hardware RAID will save his/her hourly rates.

I don't see how salaries of IT engineers determine if they should know software RAID?

BTW, I am not a systems or hardware engineer, I am just a software engineer. How do you put a price on my knowledge of software RAID?

It not only looks good on paper, it works. If not, we would already be in chaos by now. Your S/W Raid hadn't come very far back when H/W Raid became the standard for the enterprise industry. Since there is so much speculation about what happens if the RAID card distributor goes bust the next day, I would be more concerned that your company goes bust the next day and cannot uphold the SLA for the project it tendered. Does that mean clients shouldn't come to you? After all, it's not an easy task to take over another company's codebase; it can be a few orders of magnitude more convoluted than the RAID system we are talking about here.

Yes, I believe H/W Raid is still the standard for the enterprise industry.
But my intuition sees a shift in storage solutions, starting to move to commodity hardware and services (e.g. Amazon S3).

While your concerns are valid, you seem to have blown that risk way out of proportion. If you are so concerned about a firm's reputation that using commodity items is the way to go, then it's definitely an option. However, that does not mean anything is wrong with H/W Raid. It's a preference, it's a concern, and you adjust as you wish. I have deployments of more than 5 years and the H/W is still supported. So while your concern is valid, it's not good enough to justify one over the other.

When you see the shift to commodity hardware and services, the next logical question is what will happen to hardware RAID manufacturers.

Remember creative sound cards? Now who buys sound cards anymore, when it is a commodity hardware that is on the motherboard.

What about video cards? CPU's on-chip GPU is getting so much better.
I don't see a new GPU chip company started to compete with the existing companies.

Is there any new hardware RAID manufacturer that is started?

The same story goes for mainframe computing, any new mainframe manufacturers?

Yes, you can still buy sound cards, video cards and hardware RAID cards.
I don't know when they will stop selling it.
But, I do know that they are selling less of it.

I am unwilling to bet my data on it, and I do have data backups.

Not entirely true for all projects. Not all projects are of such a scale that they use external SAN storage. SAN for Oracle is often done for RAC, but besides RAC there are other configurations which don't require SAN or external storage; local storage on both servers, run in an Active/Passive manner, suffices. For such systems we don't use S/W Raid. At the end of the day, I'm not sure we are even on the same page. If I start going into system design, I can easily justify the need for H/W Raid versus S/W Raid in a fully H/A architecture.

Agreed, there is a place for hardware Raid. Though getting more and more limited.

Seriously, I'm not sure how it is on your side. If I were to propose software RAID on some high-tier project, the client would tell you to go fly a kite. You are not even in the league when you talk about S/W Raid. Now that's the real situation, but that's not a judgement on the technology, right? Will I go and debate this with the client? There is only one answer from them: it's not compliant with their FM operations and they are not comfortable. No need to talk further. Want to be in the game, join in. I have also encountered clients who dismissed an Active/Active setup via network LB and insisted on HP Veritas Clustering using an Active/Passive approach. Their reason? Insufficient knowledge in this area among their folks. Can we blame them? Sure you can. You can argue about how they should be more proficient and so forth. But at the end of the day, it comes back to
1) Can I get sufficient resources?
2) Even if you are willing to pay, such proficient resources just aren't coming in.
Are we going to ignore these obstacles right in front of our eyes and just keep hoping for the best? If you are the trainer, what will you say if the people just don't get it, or are not at that level?

I don't do what you do, and I am sure you can do it better.
Yes, the customer is always right.

Yes, I can see where you are coming from, and therefore it seems more viable for you to go for S/W Raid. That's a valid reason, that's ok. But besides going to S/W Raid or raising the resource cost, could other avenues be available? Have such avenues been explored? If they have been, and your decision is still S/W Raid, I don't see the problem. Just because it's the case for you doesn't mean H/W Raid is not working elsewhere with the right attitude and the right procedures.

Don't worry about my setup, I basically simplified the situation when explaining. My setup is way more complicated than just having one antivirus system; I have a total of 5 in the whole architecture. They are all wired up using Perl servers running in multiple stages under the Postfix pipeline. The whole system touched pretty much every aspect of the Linux system when I architected it a few years back.

Can share your mail architecture?
You have a total of 5 servers?
How many MX records and servers do you have?
Can describe your 'multiple stages' and 'postfix pipeline'?
I am only experienced in Exim pipeline.

Are you using 3rd party filtering like Postini?

Just because you have the skillset doesn't necessarily mean your peers have it. When you leave the company, who is going to take over? When your project is launched at the client, who is going to take over the replacement of the H/W? Sometimes it's the vendor, sometimes the client has an FM team capable of doing it. If they are doing it, are you going to teach them how to manage S/W Raid? When one of their less competent engineers screws up the system during a S/W Raid recovery, is your company going to come in at no cost? At the end of the day, who pays for that? Now, H/W Raid can also screw up, but be frank, which is easier? Most engineers in the data centre today are pretty proficient at changing HDDs, but you can't say the same for logging into your system, typing all those arcane commands, knowing how they can go wrong and, when they do, how to rectify it. Do you want to factor the required education into your project cost? Why do you think they are going to learn it from you?

On the flip side, it is good that my skillset is unique and I can keep my job. Also good from an IT vendor's POV: there is vendor lock-in when using software RAID.

We can talk all day about how the world should progress and how engineers should be better educated, but talk won't make things come true.

Yes, talk don't make things come true.

But, ideas do, if I keep advising young and lowly paid IT engineers that they cannot do software RAID because it is too complicated, I will be perpetuating the incompetent IT engineers problem that both of us face.

Why are you still here? Why aren't you at Google, since you adore their technologies? :) Have you tried RHEL Clustering? Have you architected such systems before? Have you built a clustering filesystem before? I have. I did such a project, clustering 7 RHEL servers using RHCS, and it even had a DRBD filesystem in it. I can tell you it's way beyond the skillset of most engineers. Such large-scale clustering filesystems are not simple beasts. They are way more complicated than S/W Raid...

Thinking out of the box is my everyday job. Now that I am involved in the Amazon cloud, I can tell you it's way more complicated than what most would envision about cloud. Cloud isn't simpler than your physical infrastructure.

It's good that you know what you are dealing with and how to quantify your results. But it's a pure overstatement to use this to justify the claim that "most SI fail to understand". Is performance all about how an SI specs and designs? Well, I won't go further, but your results hardly even quantify the business and resource cost.

I agree, the business and resource cost of improving performance through software optimization may be higher, but the same performance improvement cannot be achieved with hardware.
Most of the time, for application performance problems, the solution lies in the software not the hardware.
Software optimization should be done first before looking into hardware optimization.

Need another performance issue resolved by software?
OT: DBMail Administrator (DBMA) Performance Fix - by Wallace Tan on Fri, 03 Apr 2009 00:19:46 -0700
Code:
The slow query is:
SELECT COUNT(*) FROM dbmail_messageblks;

Running this slow query took 138 seconds (2 min 18.09 sec)
SELECT COUNT(*) FROM dbmail_messageblks;
+----------+
| COUNT(*) |
+----------+
|   262788 |
+----------+
1 row in set (2 min 18.09 sec)

After optimizing the SQL, it took 0.27 seconds.
SELECT COUNT(*) FROM dbmail_messageblks USE INDEX (physmessage_id_index);
+----------+
| COUNT(*) |
+----------+
|   262796 |
+----------+
1 row in set (0.27 sec)

138 seconds down to 0.27 seconds
138/0.27=51,100% improvement or 511x faster.

Can you propose any hardware solution that can match this?

Like I said, "most SI fail to understand".

Well I don't know how your company procure then :) We do in bulk sets.

I am on SME budget, what can I say? :o

I can do it without major downtime, you want to try me ? :) Of course, when I say DD, it's more than just DD. But how much your company want to pay me for my knowledge in this area ? That's what I'm trying to quantify after all, isn't it ? Of course, I'm no god, I can't do everything. Somethings just need to do it the hard way if there are no options, but seriously how often is that ?

I have seen your other posts, and your advice on Linux and open source seems very skilled indeed.

What role are you in when you discuss about it ? Business or education ?
More like an observation: if you make a product "idiot-proof", you get more idiots.
It is like tech support for end-users, where users only know how to push the ON button.
Anything more and it is too complicated.
It is a black box that cannot be opened for repair; if pushing the button doesn't work, order another black box and wait 1-2 weeks for shipment.

I'm all in for passion talk. Lets just say at the end of the day, this discussion is not going to change the world. You would like to use S/W Raid, it's ok. You can even architecture some extremely scalable system, HadoopFS, LustreFS, GFS, GoogleFS, Ceph, Gluster etc... They are not your normal replacement of for redundancy storage. Don't simplify it. They solve your scalabiilty issues with other cost involved. I have been following up in these clustering solution as part of my project design, but meanwhile the cost of these system are not necessarily cheaper. They are scalable. When you have clustering components, you need also special fencing devices for different part of the subcomponents, server fencing, storage fencing, network fencing depending on what are your shared resources. These ain't cheap. But on really large scale projects, they can be alot more cost effective than a single or pair of high performance storage node. Again cost, skillsets. Seems like we can't get out from these.

Just because there are better technologies out there means they are suitable for projects. Projects have different sizes and different stackholders. Projects that different nature. Not all much fit into the model that Google does it, or Facebook does it. Even if it does, you need to consider the skillset of your folks that are able to pull it off. Also the operation teams and the infra teams. HOw much changes to the existing infrastructure to adapt to it. Well I can go on, but that's the point, bottomline, it's not simple.

It's all good really. When the time comes, H/W Raid loses its attractiveness, it shall be let go. No technology can escape from this. From the first day I step into this industry, I have already recognize the only thing that doesn't change, its Change. Meanwhile what I differ from you, is I see the value in H/W Raid in some other departments, why you decide to jump ship. Different perspective. Doesn't matter, if it works for you, that's good for you. Meanwhile I still see some areas where they don't adds up, and I'm not going to dismiss something well proven to date.

I am just sharing my experience with hardware RAID and why I switched my new storage system to software RAID.

I am not an enterprise solutions engineer. Just an SME IT guy.

My observations are:
Commodity hardware always moves into the enterprise space.
PCs replaced SPARC workstations and IBM mainframes.
SSDs started in the consumer space and are moving into the enterprise space.

And limited production and sales of hardware RAID cards is the first signal to 'jump ship'.
 
Last edited:

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,391
Reaction score
1,180
I am unaware that my remuneration is important to this discussion, also I am uncomfortable to share it in a public forum.

If you need an IT engineers salary to make a point or calculation, why not use your own? Or how much you would offer me to work for your company?

Also, you did not ask for the Thread Starter's salary before to determine if hardware RAID will save his/her hourly rates.

I don't see how salaries of IT engineers determine if they should know software RAID?

BTW, I am not a systems or hardware engineer, I am just a software engineer. How do you put a price on my knowledge of software RAID?

First, when I embarked on this lengthy discussion with you, I was no longer on the TS's original topic. My focus is on explaining the rationale for H/W Raid, its value in the ecosystem, and how one can find value in it. As far as the TS is concerned, if the TS's budget is consumer grade, we don't recommend H/W Raid, for it doesn't suit the TS's purpose and price range. That's because, solely from the TS's perspective, cost is the bulk of the concern; reliability and maintenance are concerns too, but only after the cost question is resolved.

You don't become a professional in a particular department without upgrading the other skillsets around it. That's how the hierarchy climb works for any job. Since I know about clustering and high-end storage design, my development skill set is usually not going to be at noob level either. That's when the expectations of the engineer rise, and the employer will most certainly need to adjust the remuneration if the company needs this talent.

An engineer with deeper knowledge of S/W Raid management in the enterprise industry will certainly know Linux management skill sets, perhaps not fully fledged, but certainly more in-depth than a low-end engineer who only knows how to look at alert screens and react to customer calls, admin and logistics stuff. Based on my asking around over the years, the asking price for a system engineer who knows more is, conservatively, 1.3x that of one who just hangs around in the data centre. This is a cost we need to factor in when considering technologies.

While a fault-tolerant system such as a Stratus FT server can easily be managed by a low-tech engineer, managing a clustered architecture needs a much more technical engineer. For that skillset, my gauge is roughly 1.5x to 1.8x more in remuneration.

When you add up the monthly cost of keeping such engineers around, the cost of running projects is much higher and easily covers a couple of high-end RAID cards and high-end servers.

When you move up to my level, you will see the value of this when architecting systems, calculating costs and knowing who in the company can handle a given project. With projects going on all the time, not all the good techs are always at your disposal.

Yes, I believe H/W Raid is still the standard for the enterprise industry.
But my intuition sees a shift in storage solutions, starting to move to commodity hardware and services (e.g. Amazon S3).

True enough, the industry has success stories with commodity hardware, and those are worth looking at for projects. But bottom line: are you always spec-ing projects of that size, where the cost savings kick in? There is upfront cost and there is maintenance cost; did Amazon tell you that because they use commodity hardware with more advanced technologies, the cost to maintain becomes cheaper? When you are spec-ing systems worth billions of dollars, the extra effort and cost to come up with a better design make sense, because they can be amortised across the board. But typically, for a lot of other projects, having 20 servers is already a lot, and those costs sometimes cannot be justified by talking about H/W alone. You need to consider competency and resource availability. I think I have stressed this enough. If, at the end of the day, all you want is to look at the H/W cost and call your open-source S/W technique cheaper, fine, you are right. There is nothing much to talk about here.

When you see the shift to commodity hardware and services, the next logical question is what will happen to hardware RAID manufacturers.

Remember creative sound cards? Now who buys sound cards anymore, when it is a commodity hardware that is on the motherboard.

What about video cards? CPU's on-chip GPU is getting so much better.
I don't see a new GPU chip company started to compete with the existing companies.

Is there any new hardware RAID manufacturer that is started?

The same story goes for mainframe computing, any new mainframe manufacturers?

Are you in the financial industry? COBOL, from a long time ago, is still being written. Mainframes are still running. As one article puts it, "we are not talking about a couple of lines, we are talking about new modules with thousands of lines written". Don't bury your head in the SME world, judging by what you see there and declaring what you don't see. I have friends in the financial industry telling me a different story about what the banks are using.


I am unwilling to bet my data on it, and I do have data backups.

Agreed, there is a place for hardware Raid. Though getting more and more limited.

It's your call.


I don't do what you do, and I am sure you can do it better.
Yes, the customer is always right.

Can share your mail architecture?
You have a total of 5 servers?
How many MX records and servers do you have?
Can describe your 'multiple stages' and 'postfix pipeline'?
I am only experienced in Exim pipeline.

Are you using 3rd party filtering like Postini?

I won't give away corporate IP, but I can tell you it's a distributed design where mail goes through multiple Perl filters across multiple servers. That still doesn't stop the load when under DDoS.

On the flip side, it is good that my skillset is unique and I can keep my job. Also good from an IT vendor's POV: there is vendor lock-in when using software RAID.

Glad you know there is something called lock-in. Just like how your company needs to lock you in when you have written something no one else is good enough to take over properly. Companies don't like this, and neither do customers.

Yes, talk don't make things come true.

But, ideas do, if I keep advising young and lowly paid IT engineers that they cannot do software RAID because it is too complicated, I will be perpetuating the incompetent IT engineers problem that both of us face.

You should ask the engineers to be wiser and learn more. But that doesn't mean it has to go your way, and even if it does, it's not NOW. NOW I have tenders to rush and designs to come up with, so how? Wait for the talent to come in? Too late....

I agree, the business and resource cost of improving performance through software optimization may be higher, but the same performance improvement cannot be achieved with hardware.
Most of the time, for application performance problems, the solution lies in the software not the hardware.
Software optimization should be done first before looking into hardware optimization.


Need another performance issue resolved by software?
OT: DBMail Administrator (DBMA) Performance Fix - by Wallace Tan on Fri, 03 Apr 2009 00:19:46 -0700


138 seconds down to 0.27 seconds
138/0.27=51,100% improvement or 511x faster.

Can you propose any hardware solution that can match this?

Like I said, "most SI fail to understand".

:) I thought I already mentioned a couple of posts back that this is not strange to me at all? I don't see why you keep needing to convince me of something I have already agreed with. But my answer to you is that you are missing the point by treating performance as the sole measure of whether H/W Raid is worth it.


I am on SME budget, what can I say? :o



I have seen your other posts, and your advice on Linux and open source seems very skilled indeed.


More like an observation: if you make a product "idiot-proof", you get more idiots.
It is like tech support for end-users, where users only know how to push the ON button.
Anything more and it is too complicated.
It is a black box that cannot be opened for repair; if pushing the button doesn't work, order another black box and wait 1-2 weeks for shipment.

I applaud your passion for teaching the noobs. I'm sure they will benefit from it. But before that starts happening, my designs still need to go out, with or without your intended talent search.

Okay. Thanks for the discussion. I think we can stop here :)
I have conveyed what is necessary for you to know. I have said all along that S/W Raid works, but I believe H/W Raid has its value and I understand how to apply it. If it isn't for you, that's too bad. You can go the S/W Raid way, it really doesn't bother me, because I do both, not just one. That's the difference here.

To be frank, this question of H/W or S/W Raid is never a top-level concern in the projects I spec, and all those reliability and sustainability issues you mentioned are largely covered by my clustering architecture, with always at least 2 servers serving the same content or doing the same work. So even if the project needs to perform a migration, it can be done without the service going down. Hence H/W Raid is simply a compliance item for the customer and its FM team. Of all the things I have to worry about, this RAID thing is at the bottom of the list: I have the overall architecture design, the network design, and application licences with partners and other vendors to bother with. When you start looking at things at this level, you would be overwhelmed if you had to dwell on such details. I have already shared with you how your concern is actually no concern when the whole architecture is done properly. I even have to think about maintenance after project launch, and costing. If you can't appreciate things at this level, you can't appreciate how trivial it seems to talk about choosing between H/W Raid and S/W Raid. Much like choosing which nail to use when you are building a tower.

I have only come to share what is the value in H/W Raid, if it can't get you to look at things from a different perspective, then it's alright.
 
Last edited:

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,391
Reaction score
1,180
OT: DBMail Administrator (DBMA) Performance Fix - by Wallace Tan on Fri, 03 Apr 2009 00:19:46 -0700


138 seconds down to 0.27 seconds
138/0.27=51,100% improvement or 511x faster.

Can you propose any hardware solution that can match this?

Like I said, "most SI fail to understand".

Allow me to say this: what you have shown does not impress me at all. I'm not trying to be difficult or offensive, but before you start showing me this kind of example as an illustration of what optimization is, I can daringly tell you I was practicing more advanced optimization than this back in my uni days.

So please, don't show me more optimization examples to convince me of what algorithmic optimization is. Your example doesn't even come close to what I mean by algorithmic optimization. It's nothing more than knowing that an index should be used when you do a SELECT COUNT, to avoid a table scan. I wouldn't even classify this as optimisation; it's basic practice. If, as a software engineer, you didn't manage to do this, you basically failed. That's SQL 101.

If you are so keen to explain it to me, perhaps tell me: how did you conclude that the SQL statement was not optimized? I think that is a bit more interesting than showing me something I already knew 10 years ago.

And please do read this before you call your method a good solution; it explains what you ought to have done first, before instructing the query optimiser what to do.
http://www.mysqldiary.com/the-battle-between-force-index-and-the-query-optimizer/

Just because of this SQL 101 example, you want to claim that "MOST" SI fail to understand? I think that's too bold and big a claim. I wonder if you have even met "MOST" SI to be able to make that overstatement.
 
Last edited:

wallacetan

Member
Joined
May 21, 2000
Messages
129
Reaction score
0
Allow me to say this: what you have shown does not impress me at all. I'm not trying to be difficult or offensive, but before you start showing me this kind of example as an illustration of what optimization is, I can daringly tell you I was practicing more advanced optimization than this back in my uni days.

So please, don't show me more optimization examples to convince me of what algorithmic optimization is. Your example doesn't even come close to what I mean by algorithmic optimization. It's nothing more than knowing that an index should be used when you do a SELECT COUNT, to avoid a table scan. I wouldn't even classify this as optimisation; it's basic practice. If, as a software engineer, you didn't manage to do this, you basically failed. That's SQL 101.

I myself have designed and deployed a whole suite of antivirus + antispam engines around a Postfix mail server serving 20+ domains. Each day we see no less than 100+ mails per minute, and it goes much higher during peak hours, at the start of the working day and towards the end of it. Somehow people like to rush emails out before they knock off. I have seen a software RAID 1 system, just RAID 1, go to a super high load with really lousy performance on a pair of 10K SAS hard disks. Now obviously the question is, how do I conclude that it must be the software RAID? Perhaps it would suffer just the same under H/W Raid. A higher-level question would be, perhaps we underspecced the H/W during sizing.

Luckily enough, we had another similar system on H/W Raid. It was already prepared for production use for migration reasons, so we transferred the system over. Immediately we observed a significant decrease in I/O pressure. I think the scenario is pretty informative. Until you are really running systems that eat your I/O for breakfast, you will not see the value in H/W Raid.

From your post, you highlighted that you jumped to the conclusion and switched to a H/W Raid system, and it was the magic bullet that solved your disk I/O issues.
You did not mention trying any other software solutions before deciding to switch to H/W Raid.

So naturally, I assumed that you needed a lesson in software optimization VS hardware optimization.

Yes, I agree, it is a very, very simple SQL statement, which you and uni students should know, but it changes the performance by a greater magnitude than a H/W Raid solution.

Do you realise that something so simple (as you claimed) is better than hardware RAID?

So the question is, did you do any software optimization before switching to H/W Raid for your mail system?

Can share your mail architecture?
You have a total of 5 servers?
How many MX records and servers do you have?
Can describe your 'multiple stages' and 'postfix pipeline'?
I am only experienced in Exim pipeline.
Are you using 3rd party filtering like Postini?

I won't tell you corporate IP, but I can tell you it's a distributed design where mails go through multiple Perl filters across multiple servers. That doesn't stop the load when under DDoS.

What is your spam VS ham ratio?
Do you process (anti-virus and anti-spam) 100% of your SMTP data in the Postfix pipeline?
What kind of spam can you filter out before it even reaches the SMTP data stage of the Postfix pipeline?
Have you asked any of these questions before moving to H/W Raid?

If you are so keen to explain to me, perhaps tell me: how do you conclude that the SQL statement is not optimized? I think this is a bit more interesting than showing me something I already knew 10 years ago.
I won't tell you corporate IP, but I can tell you that you can find it on Google and in my thread on DBMail's website.

And please do read this before you call your method a good solution; it explains why, before we start instructing the query optimiser what to do, there are things you ought to have done first.
http://www.mysqldiary.com/the-battle-between-force-index-and-the-query-optimizer/

If my method is not a good solution, please enlighten me: what would you propose?

Just to be clear, I did not write the DBMail Administrator (DBMA) open-source software.
I just submitted a simple performance bug fix, which yielded a 51,100% improvement, far beyond what hardware RAID performance can deliver.
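
(Read as a ratio, a 51,100% improvement is roughly a 512x speed-up: 1 + 51,100/100 = 512.) For readers following along, the principle behind this kind of fix is simply that an index-backed COUNT is vastly cheaper than a full table scan. The sketch below is a generic illustration only; the table, column and index names are placeholders, not the actual DBMA schema or patch:

Code:
-- Without a usable index on the filtered column, MySQL has to scan the whole table:
EXPLAIN SELECT COUNT(*) FROM messages WHERE mailbox_id = 42;

-- With an index on that column, the same count can be answered from the index alone:
CREATE INDEX idx_mailbox ON messages (mailbox_id);
EXPLAIN SELECT COUNT(*) FROM messages WHERE mailbox_id = 42;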

Just because of this SQL101 example, you want to illustrate that "MOST" SIs failed to understand? I think that is too bold and big a claim. I wonder if you have met "MOST" SIs to even make such a statement.

Well, as you claimed, there are incompetent IT engineers around who don't know how to set up and manage software RAID, so it is not a stretch to imagine they don't know SQL101 either.

Or would your claim that "software RAID is too complex a skillset for lowly paid IT engineers" be too bold and big? And I also wonder if you have met "MOST" SIs to even make such a statement.

Can we conclude?
Software RAID: +Performance +Reliability +Recovery and Portability
Hardware RAID: -Performance -Reliability -Recovery and Portability
 
Last edited:

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,391
Reaction score
1,180
Lolx. Okay lah. Have it your way, okay? I'm too busy with clustering in the cloud and have no time to waste explaining stuff to you when it just goes in circles. It's already pretty obvious where you are heading. Wish you all the best :)

Be happy with your 51K% SQL101 fix :) If you think it's a good solution, then it's definitely a good solution :) Cheers.

We'll discuss again when we are in the same league.
 
Last edited:

wallacetan

Member
Joined
May 21, 2000
Messages
129
Reaction score
0
I myself have designed and deployed a whole suite of antivirus + antispam engines running on a Postfix mail server serving 20+ domains. Each day we see no less than 100+ mails per minute, and it can go much higher during peak hours, like the start of the working day and towards the end of the day; somehow people like to rush emails out before they knock off. I have seen a software RAID 1 system, just RAID 1, going into super high load with really lousy performance on a pair of 10K SAS hard disks. Now obviously the question is, how do I conclude that it must be the software RAID? Perhaps it would suffer just the same under H/W RAID. A higher-level question would be, perhaps we under-specced the H/W during sizing.

Luckily enough, we had another similar system that was on H/W RAID. For migration reasons, we had it ready for production use and we transferred the system over. Immediately we observed a significant decrease in the I/O pressure. I think the scenario is pretty informative. Until you are really running systems that are eating your I/O like a daily meal, you will not see the value in H/W RAID.

Allow me to say this. What you have shown does not impress me at all. I'm not trying to be difficult or offensive,

Let's do a proper sizing:
100+ mails per minute; let's oversize this 10x to 1,000 mails per minute.
A typical email is around 75 KB; let's oversize this 10x to 750 KB per email.

Your email data rate is:
1,000 x 750 KB = 750 MB per minute, or 750/60 = 12.5 MB per second

Ultrastar C10K900 - Sustained transfer: 117MB/sec to 198MB/sec

At the worst-case performance of these 10K SAS hard disks:
You are using only 12.5/117 = 10.6%

At the best-case performance of these 10K SAS hard disks:
You are using only 12.5/198 = 6.3%

I wonder why you would ever need H/W RAID when you are only using between 6.3% and 10.6% of the available data transfer rate? (And I have already oversized both your mails per minute and your mail size by a factor of 10.)

How is 6.3-10.6% "eating your I/O like a daily meal"?

I won't tell you corporate IP, but I can tell you it's a distributed design where mails go through multiple Perl filters across multiple servers. That doesn't stop the load when under DDoS.

I wonder why your corporate IP is worth protecting when it delivers performance numbers like these?
And why would you need a distributed design for such a low volume of emails per minute?
Or have you identified the real bottleneck?

We'll discuss again when we are in the same league.

But of course, you are better at this than me.
 