Software vs Hardware RAID

ZrE0_Cha0s

Arch-Supremacy Member
Joined
Jun 21, 2011
Messages
16,704
Reaction score
26
What are the pros and cons of hardware RAID vs software RAID?

Which would give me the better combination of redundancy and performance, hardware or software RAID?
I am thinking about using RAID 0, since it seems to give the best speed. As I understand it, if one drive fails the array loses all my files until a new HDD is added.
Will either software or hardware RAID give me the option to move the array to a new PC with data intact if, say, the motherboard fails?

Some sources state that home users mostly use software RAID, while companies running servers use hardware RAID.

But as a home user, which one should I go for?

Oh, I forgot to add that I intend to use the RAID 0 array as my boot drive.
 

wind77

Senior Member
Joined
Jan 8, 2002
Messages
1,544
Reaction score
0
(Quoting ZrE0_Cha0s's opening post above.)

I took your title, ran a Google search, and found the following:
https://www.google.com/search?q=Software+vs+Hardware+raid&ie=utf-8&oe=utf-8&aq=t
The first link already gave me the answer.
Surely your ISP doesn't block Google?
Or is something wrong with the "g", "o", "l" and "e" keys on your keyboard?
 

ZrE0_Cha0s

Arch-Supremacy Member
Joined
Jun 21, 2011
Messages
16,704
Reaction score
26
(Quoting wind77's reply above.)

Lol, of course I know, and my ISP doesn't block Google. But I wanted more detail for my specific setup. Yes, I am doing my own research too; I just want more information before I decide which method to use for my planned RAID system.
 

i1magic

Arch-Supremacy Member
Joined
Dec 11, 2002
Messages
22,365
Reaction score
117
(Quoting ZrE0_Cha0s's opening post above.)

Hi,

I am no expert, but let me share my experience.

For RAID 0, if one of the hard disks fails you lose all the data on the array straight away. Even after you replace the failed disk, you will NOT get the data back.

Personally, I have never tried a dedicated hardware RAID card. What I have tried is Intel Rapid Storage Technology (IRST), available on most Intel motherboards. What I like about it is that, in the event of a failure, the hard disk can be plugged into almost ANY PC and the data read off it straight away.

But if you run HD Tune Pro, it is unable to "see" the individual hard disks in the RAID configuration.

Then I found out about Windows 7's built-in RAID.

So I tried it. Basically it works much the same as IRST, but the hard disks show up separately under HD Tune Pro.

Speed-wise both seem to be the same; I didn't experience any lag.

My suggestion is to try those two first, and if they aren't good enough, then look at other solutions.
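
To see concretely why a failed RAID 0 member takes everything with it, here is a minimal sketch I put together (my own toy illustration, not any controller's actual code): it stripes a byte string round-robin across two simulated disks and then tries to reassemble it with one disk gone. The two-disk count and 4-byte stripe unit are arbitrary assumptions.

```python
# Minimal RAID 0 striping illustration (assumed 2 "disks", 4-byte stripe unit).
STRIPE_UNIT = 4
NUM_DISKS = 2

def stripe(data: bytes, num_disks: int = NUM_DISKS) -> list[list[bytes]]:
    """Split data into stripe units and deal them round-robin across disks."""
    disks = [[] for _ in range(num_disks)]
    chunks = [data[i:i + STRIPE_UNIT] for i in range(0, len(data), STRIPE_UNIT)]
    for idx, chunk in enumerate(chunks):
        disks[idx % num_disks].append(chunk)
    return disks

def reassemble(disks: list[list[bytes]]) -> bytes:
    """Interleave the stripe units back into the original byte stream."""
    out = []
    for stripe_idx in range(max(len(d) for d in disks)):
        for disk in disks:
            if stripe_idx < len(disk):
                out.append(disk[stripe_idx])
    return b"".join(out)

data = b"important boot drive contents"
disks = stripe(data)
assert reassemble(disks) == data           # both members healthy: data intact

disks[1] = [b"????" for _ in disks[1]]     # one member dies: half the stripes are gone
print(reassemble(disks))                   # garbage -- RAID 0 has no redundancy
```

Every other stripe unit lived on the dead disk, so what comes back is unusable, which is exactly why replacing the failed drive cannot bring the data back.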
 

goenitz33

Arch-Supremacy Member
Joined
Sep 5, 2007
Messages
15,352
Reaction score
2
Seriously, taking into account the risk of going RAID 0, it is not really worth it for a home user.

Would you really benefit that much from the slight increase in speed?

I presume you intend to do RAID 0 using two SSDs.
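
To put a rough number on that risk: if each drive independently has some chance of failing in a year, a RAID 0 set fails when any member fails, so the array's failure probability is 1 - (1 - p)^n. A quick back-of-the-envelope sketch; the 3% annual failure rate is an assumed illustrative figure, not a measured one:

```python
# Rough RAID 0 risk illustration. The 3% annual failure rate per drive
# is an assumed figure for illustration only.
p_single = 0.03   # assumed probability that one drive fails within a year

def raid0_failure_probability(p: float, n_drives: int) -> float:
    """RAID 0 loses everything if ANY member fails."""
    return 1 - (1 - p) ** n_drives

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): {raid0_failure_probability(p_single, n):.1%} chance of losing the array")
# 1 drive: 3.0%, 2 drives: 5.9%, 3 drives: 8.7%, 4 drives: 11.5%
# -- striping two drives roughly doubles the chance of losing everything.
```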
 

hanlsnx

Senior Member
Joined
May 1, 2009
Messages
1,392
Reaction score
0
Rebuilding an Intel RAID array on a different machine is also effectively impossible; there is no simple "move the entire setup" method. Having to ghost the drives, back everything up, rebuild the array and then restore all your data is just clumsy.

Also, with hardware RAID there is no CPU overhead, and on a server that overhead can be significant. But in any case, if Intel RAID has progressed, I don't see why not, as long as you are doing just RAID 1 and 0, not anything like RAID 5 or 5+0.
 

ZrE0_Cha0s

Arch-Supremacy Member
Joined
Jun 21, 2011
Messages
16,704
Reaction score
26
Erm, don't worry about the data. Anyway, I am not using it as a data drive, nor am I going to rebuild my RAID 0; I just wanted to try out a RAID setup with an SSD and an HDD. :o
I am using a separate setup for my RAID system anyway.
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,497
Reaction score
1,255
(Quoting ZrE0_Cha0s's opening post above.)

One thing about S/W vs H/W RAID: you can't compare them without understanding how you are going to deploy them.

The greatest advantages of H/W RAID usually lie in the following features, most of which are not aimed at consumers:

  • Performance: This is the only one that really matters to consumers, and consumers today demand a lot of it. H/W RAID offloads the computation required for RAID. However, RAID 1 and 0 are much less computationally intensive than RAID 5 and 6, so the benefit is smaller if you choose RAID 0, 1 or their variants.
  • External interfaces: H/W RAID lets you connect external devices, normally via backplanes. This is mostly not needed by consumers and is meant for enterprise users. With an expander, such as a SAS expander in an external enclosure, a single card can potentially address up to 128 devices.
  • Battery-backed cache: Unless you are using a server-class mainboard, chances are you will not get a battery-backed cache on the RAID components. Enterprise H/W RAID offerings often support a battery-backed cache, which keeps data that the OS has written but that has not yet reached the hard disks resident in the cache module until main power comes back. This feature allows maximum performance from the storage array while still protecting against power failure.

Since systems nowadays are a lot more powerful than ten years back, the performance margin between S/W and H/W RAID has narrowed, but H/W RAID still excels at parity calculation for arrays with a large number of disks. A small sketch of what that parity computation actually involves follows at the end of this post.

There are some articles I have read that favour S/W RAID over H/W RAID, but some are not broad enough in my opinion. They consider flexibility and redundancy only at the level of the storage components, and do not look at higher levels of redundancy such as multiple-system failover and keeping spare H/W cards on hand, nor at the generally lower operational burden of configuration and maintenance. Handling software configuration changes requires more skill than the less complicated and less error-prone configuration changes in a H/W RAID card's firmware, which are more or less fixed and require less training for system engineers. Not all system engineers are proficient Unix or Windows engineers.

However, I have to admit that the cost of H/W RAID is definitely higher if you look solely at the storage components. You need to be able to find an exact replacement H/W RAID card after a failure, provided it is even possible to move the array over without rebuilding (not all RAID cards offer such a feature). You need to back up constantly, which means you need a good backup infrastructure. You probably need an extra system running side by side for high availability in case of system failure. The cost is high, but if you weigh it against the skill set required to hire better system engineers and against the more complicated S/W RAID scenarios, the cost quickly amortises across all components and ends up marginally, perhaps even clearly, lower than S/W RAID in the long run. One year of a good system engineer costs more than a couple of good H/W RAID cards, and is one system engineer even sufficient?

The bottom line is that the comparison between H/W RAID and S/W RAID should be considered in broader terms: use each where it fits best. Consumers are better off with S/W RAID, but the end user needs to be savvy enough to handle it properly. H/W RAID is straightforward and performs better, but carries a higher upfront cost.

As a home user, if your intention is purely performance, your best option nowadays is to get an SSD. You will get more performance than you would from striping a pair of magnetic hard disks via RAID 0, with far less risk and complexity. Besides, the performance you get out of RAID 0 is not necessarily that good in practice; it depends on your usage pattern on the storage system. Those benchmarks are geared towards showing how much better a specific RAID level is, but most of the time they don't reflect your use case. Treat them as a best-guess gauge, in my opinion.

How many consumer end users will reach QD=32 (queue depth 32) during 90% of their daily use? Even QD=5 would be exceedingly rare for most. Common queue depths for most consumers are between 0 and 1, especially for laptop users.
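
For anyone curious what "parity calculation" means in practice, here is a minimal sketch of the RAID 5 idea (my own simplified illustration, not any vendor's implementation): the parity block is the byte-wise XOR of the data blocks in a stripe, and XOR-ing the surviving blocks with the parity reconstructs a lost block. RAID 0 and 1 need nothing like this, which is why a dedicated card buys them so little.

```python
# Simplified RAID 5 parity illustration: parity = XOR of the data blocks
# in one stripe. Illustrative sketch only, not a real controller's code.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# One stripe over a hypothetical 3-data + 1-parity layout.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# The disk holding d1 dies: rebuild it from the survivors plus parity.
recovered = xor_blocks(d0, d2, parity)
assert recovered == d1
print("recovered block:", recovered)
```

A real controller does this for every write across many disks, which is the computation a dedicated card (or its XOR engine) offloads from the host CPU.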
 

cscs3

Arch-Supremacy Member
Joined
Jun 4, 2000
Messages
21,697
Reaction score
125
(Quoting goenitz33's post above.)

RAID 0 was mostly used in the old days, when disks were small and slow in transfer rate. There is hardly any need for it nowadays.
 

cscs3

Arch-Supremacy Member
Joined
Jun 4, 2000
Messages
21,697
Reaction score
125
(Quoting davidktw's post above.)

If you are referring to the consumer market and not the IT industry, I am quite sure the models change frequently and old models quickly go out of support; the vendor may not even have a spare part for repair if the hardware fails. So home users running hardware RAID are really not RAID-protected if the hardware (other than the disks) fails.

Also, many RAID devices come with some kind of NAS function, and many consumers end up running the device 24 hours a day without powering it off.
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,497
Reaction score
1,255
(Quoting cscs3's reply above.)

That's quite true. That's why I don't really recommend H/W RAID to consumers unless they are savvy in this area and conscious of such issues.

On your second paragraph, I wonder if we are on the same page. While the TS's original post didn't say so exactly, when did H/W RAID become NAS? Which H/W RAID offers NAS functionality? I wonder.

If we are somehow mixing up NAS with H/W RAID, that is a totally different topic altogether. My argument for H/W RAID concerns the latest technology, the PCI-E H/W RAID card; that is not a NAS. Furthermore, most NAS units are not dedicated H/W RAID at all. They are merely software RAID in a well-designed dedicated enclosure running a low-powered Intel chipset, an ARM chipset or some other dedicated chipset. Most of them run Linux derivatives using the md drivers, managed via mdadm.

If our discussion is about NAS, I am one who sides with running a NAS 24/7 rather than something that starts and stops daily. Needless to say, going 24/7 seemingly costs more. But in my opinion electronics that run steadily over the long term fare better than ones started and stopped regularly. They are machines, not flesh and blood, so they work better running continuously at stable temperatures than being frequently powered up and down.

On the whole, I observe that portable electronics get damaged frequently, either from vibration or careless handling, while systems that run 24/7 seem to have a longer lifespan than ones that hardly start up. If you choose a good balance between power draw and speed, it costs less than $20 per month to keep a NAS running 24/7. I have been doing that for more than a year and my green hard disks and NAS are holding up fine. A good small UPS to prevent sudden shutdown of the NAS also provides a stable power supply. Compared with regularly seeing people hunting for data recovery services, I would rather feed something that runs reliably for less than $20 a month.

I believe that with a proper calculation, such an approach has its benefits.
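
As a rough sanity check on that figure, here is a quick back-of-the-envelope calculation. The wattage and electricity tariff are assumed illustrative numbers, not measurements of any particular NAS:

```python
# Back-of-the-envelope 24/7 NAS running cost. Wattage and tariff are
# assumed illustrative figures, not measured values.
avg_power_watts = 30          # assumed average draw of a small two-bay NAS
tariff_per_kwh = 0.27         # assumed electricity tariff in $/kWh
hours_per_month = 24 * 30

energy_kwh = avg_power_watts * hours_per_month / 1000
monthly_cost = energy_kwh * tariff_per_kwh
print(f"{energy_kwh:.1f} kWh/month -> about ${monthly_cost:.2f}/month")
# 21.6 kWh/month -> about $5.83/month, comfortably under the $20 figure quoted above.
```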
 

i1magic

Arch-Supremacy Member
Joined
Dec 11, 2002
Messages
22,365
Reaction score
117
(Quoting hanlsnx's post above.)

I took one of the hard disks out of my RAID 1 setup, put it into another PC, and was able to read all the data from it. When I put it back, I think the system treated it as a replacement for a failed drive and rebuilt the array.

So I don't understand what you mean about "simple move", "ghost", etc.

Yes, I totally agree on the "no CPU overhead" part, and yes, especially for a file server with many users the overhead will be significant.

But I reckon the TS is using it at home. Frankly, for home use I don't think the overhead will be that substantial. And from some reviews, the test results show that for a typical home environment the speed of software and hardware RAID is pretty much the same.

Like I said, I am no expert; this is just my POV.
 

Dr.KongHee

High Supremacy Member
Joined
Jul 11, 2012
Messages
35,870
Reaction score
3
Hardware RAID is worth it if you are going for RAID 5 or RAID 6; for the rest you can forget about it. If you plan to do RAID 0, I think onboard is faster.
 

limited

Member
Joined
Aug 30, 2001
Messages
308
Reaction score
0
Well, unless you really need the extra write speed of RAID 0, I would think you're better off using the disks individually.

I'm just curious what your concern is; why do you intend to RAID?
If it's speed, then use an SSD. In my opinion, if it's for backing up data, then it still makes sense. =]
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,497
Reaction score
1,255
Just some opinion: with software RAID it seems you can create multiple volumes, and it even lets you expand in the future, add additional HDDs, or move to a higher RAID level without reformatting or losing any data.

Hardware RAID usually has a switch on the enclosure; once you configure the RAID, say RAID 1, it stays RAID 1 forever. If you want to change it, you will need to reformat again.

Having an enclosure does not make it hardware RAID; please don't be mistaken. Also, your understanding of RAID migration is incorrect. Read up more.
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,497
Reaction score
1,255
The enclosure should have a RAID controller inside it, right? So I call that hardware RAID.

If you use software RAID, it is usually controlled by a piece of software, something like that.

That is just some of my experience.

Nope, a NAS doesn't need a H/W RAID controller to offer RAID features. Just like any desktop or server you can buy, it can use the S/W RAID bundled with the OS kernel. The vast majority of consumer-grade NAS units use only S/W RAID; I have yet to come across a consumer-grade NAS that uses H/W RAID. Even highly priced consumer NAS brands like QNAP and Synology use S/W RAID.

When the industry refers to H/W RAID, it means a dedicated card that devotes all of its resources to performing RAID across the disks attached directly to it. There are also server-class motherboards that offer H/W RAID via a daughter board, but these are less common. Most enterprise setups will have a separate H/W RAID card.

Host-based RAID has certain functions offloaded to the main processor via S/W drivers. While it requires a separate card, it does not offer the full H/W RAID functionality listed above. Depending on the price range, most entry-level host-based RAID cards only provide the interfaces, while higher-priced ones perform parity offloading, but some of the work is still done by the motherboard's processor via drivers.
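
Since most of those Linux-based NAS boxes expose their md arrays through /proc/mdstat, a quick way to see the software RAID at work is simply to read that file. A minimal sketch, assuming a Linux system with the md driver (the exact /proc/mdstat layout can vary slightly between kernels, so treat the parsing as illustrative):

```python
# Minimal sketch: report md software-RAID array health by parsing /proc/mdstat.
# Assumes a Linux system with the md driver; field layout can vary slightly.
import re

def mdstat_summary(path: str = "/proc/mdstat") -> list[str]:
    lines = open(path).read().splitlines()
    report = []
    for i, line in enumerate(lines):
        m = re.match(r"^(md\d+)\s*:\s*(\w+)\s+(raid\d+)", line)
        if not m:
            continue
        name, state, level = m.groups()
        # The next line normally carries the member status, e.g. "[2/2] [UU]".
        status = re.search(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]", lines[i + 1]) if i + 1 < len(lines) else None
        if status:
            total, active, flags = status.groups()
            health = "healthy" if "_" not in flags else "DEGRADED"
            report.append(f"{name}: {level} {state}, {active}/{total} members up ({health})")
        else:
            report.append(f"{name}: {level} {state}")
    return report

if __name__ == "__main__":
    for entry in mdstat_summary():
        print(entry)
```

On a healthy two-disk mirror this prints something like "md0: raid1 active, 2/2 members up (healthy)"; a missing member shows up as an underscore in the flags and is reported as DEGRADED.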
 