Let's assume we have a 3-disk software RAID-5. Everything is fine, but I power off the host and re-arrange the disks such that sda becomes sdb and sdc becomes sda. If I now boot the host, will the RAID still be intact?
The reason I ask is: 1.) I assume that when the RAID is built, the devices are tagged by udev, so the RAID should still be intact. 2.) This is my first question on LE :).
asked 21 Apr '10, 10:50
I believe the metadata in the Linux RAID superblocks tells mdadm how the partitions fit together in the array. If you recable your disks and plug them into different SATA ports, for example, udev and mdadm should be smart enough to use the UUIDs stored in the superblocks to reassemble the array. udev also creates persistent symlinks (under /dev/disk/by-id/, based on device serial numbers and similar information), so a drive can always be found under a stable name even when its /dev/sd? name changes. In any case, mdadm gives the /dev/sd? names far less weight than the UUIDs in the superblocks.
If your drives do get re-ordered when re-cabled, or if they come up in a different order depending on sunspots on the left or right side of the moon and other boot-timing issues, you can pin stable device names with custom udev rules in /etc/udev/rules.d/ (matching on each drive's serial number) rather than relying on the kernel's enumeration order.
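Concretely, you can record the array in /etc/mdadm.conf by UUID so that assembly never depends on /dev/sd? ordering at all. A sketch of what that looks like (the UUID below is made up; on a real system `mdadm --detail --scan` prints the correct line for you, and you can append its output to the file):

```
# /etc/mdadm.conf -- example only; generate the real ARRAY line with:
#   mdadm --detail --scan >> /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```

With this in place, mdadm scans all partitions for superblocks matching that UUID, regardless of which /dev/sd? names the disks landed on.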
Of course, don't neglect to back up your data. I've seen plenty of RAID volumes fail; easily two or three a year where I work.
answered 11 May '10, 07:04
I got into that situation, although with RAID1 sets, and not by deliberately re-arranging devices but through an actual failure; it proved that the RAID sets get correctly paired despite a change in device IDs.
I had 2 RAID1 pairs. One disk in set 1 failed, so I had it pulled out because the continuous retries (on a media error) were slowing down the system. On power-up, with the faulty drive off the bus, the devices got renumbered, but set 2 was still correctly paired and working despite the new device IDs. The system ran fine on the degraded set 1, and users were happy with system response back to normal :)
When the replacement was put in, the devices got renumbered again; the new drive was hot-added to the degraded RAID1 set, and set 2 still worked perfectly.
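For reference, a sketch of the usual mdadm commands for that kind of replacement, assuming the degraded array is /dev/md0 and the new disk's partition is /dev/sdb1 (both names are examples and will differ on your system):

```shell
# Mark the dead partition failed and pull it from the array
# (if the kernel hasn't already done so):
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# After physically swapping in the new drive, partition it to match
# the surviving member, then hot-add it; the resync starts on its own:
mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild progress:
cat /proc/mdstat
```

Because mdadm matches members by superblock UUID, it doesn't matter that the device names shuffled around between reboots.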
Would RAID5 be any different? I don't think so, except that you should install GRUB/LILO on each RAID member in case one of them ends up as the boot drive after the device IDs change. After doing that, the system will simply look for the correct RAID sets on boot.
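Assuming GRUB and the three members from the question (sda, sdb, sdc; adjust for your own layout), putting the boot loader on every member looks something like:

```shell
# Install the boot loader into the MBR of every array member, so the
# BIOS can boot from whichever disk it happens to enumerate first.
for disk in /dev/sda /dev/sdb /dev/sdc; do
    grub-install "$disk"
done
```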
answered 11 May '10, 19:26
This may answer your question clearly:
RAID Level 5
Common Name(s): RAID 5.
Technique(s) Used: Block-level striping with distributed parity.
Description: One of the most popular RAID levels, RAID 5 stripes both data and parity information across three or more drives. It is similar to RAID 4 except that it exchanges the dedicated parity drive for a distributed parity algorithm, writing data and parity blocks across all the drives in the array. This removes the "bottleneck" that the dedicated parity drive represents, improving write performance slightly and allowing somewhat better parallelism in a multiple-transaction environment, though the overhead necessary in dealing with the parity continues to bog down writes. Fault tolerance is maintained by ensuring that the parity information for any given block of data is placed on a drive separate from those used to store the data itself. The performance of a RAID 5 array can be "adjusted" by trying different stripe sizes until one is found that is well-matched to the application being used.
Information obtained from:
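The distributed-parity idea above is easy to demonstrate: parity is just the XOR of the data blocks in a stripe, so any one lost block can be rebuilt by XOR-ing the survivors with the parity. A toy sketch using single bytes (the values are arbitrary):

```shell
# Three "data blocks" as single-byte values (arbitrary example values)
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))

# RAID 5 parity for the stripe is the XOR of the data blocks
parity=$(( d1 ^ d2 ^ d3 ))

# Pretend the drive holding d2 died: XOR of the survivors plus the
# parity reconstructs it, which is exactly what a rebuild does.
rebuilt=$(( d1 ^ d3 ^ parity ))

printf 'parity=0x%02X rebuilt=0x%02X original=0x%02X\n' "$parity" "$rebuilt" "$d2"
# prints: parity=0x96 rebuilt=0x3C original=0x3C
```

This is also why RAID 5 writes carry overhead: updating one data block means reading the old data and old parity, XOR-ing, and writing both back.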
answered 09 May '10, 19:59
RAID 5 is block-level striping with parity distributed across all the member drives. (A quick correction to a common mix-up: plain striping with no redundancy is RAID 0, not RAID 3; RAID 3 is byte-level striping with a dedicated parity drive. Take striping, spread the parity over every drive instead of one, and you get RAID 5.)
RAID 5 has an inherent write penalty (each small write means reading and re-writing parity), so if you can spare another drive, consider dual parity, which is RAID 6 (RAID 5 plus one more parity block per stripe, so it survives two simultaneous drive failures).
If you need pure speed with no redundancy, that's RAID 0. If you need redundancy, RAID 5 (or better yet 6); you can also get both speed and redundancy with RAID 0+1 (4 drives in 2 pairs, each pair striped and then one pair mirrored to the other), though that's not as flexible as other RAID setups. (Great for a nice desktop machine, if a bit of overkill for that.)
Anyway, as long as you stick to a level mdadm supports (0, 1, 0+1/10, 5, or 6), you should be fine. If you are using hot-swappable drives, all the better; the array should be easy to rebuild with a few keystrokes.
ALWAYS use software RAID and NEVER hardware RAID: if the motherboard or RAID controller dies, your drives and data become inaccessible under hardware RAID unless you replace the faulty hardware with the exact same make, model, firmware revision, drivers, and so on. A software array, by contrast, can be assembled on any machine running Linux.
answered 11 May '10, 20:29