RAID setup for LVM

Despite what the thread at http://www.mail-archive.com/linux-raid@vger.kernel.org/msg07378.html says, William L. Thomson Jr. is reporting a real bug.

[root@grouper dev]# fdisk -l /dev/sda | grep sda6
/dev/sda6               2        4865    39070048+  fd  Linux raid autodetect
[root@grouper dev]# fdisk -l /dev/sdb | grep sdb6
/dev/sdb6               2        4865    39070048+  fd  Linux raid autodetect
[root@grouper dev]# mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sda6 /dev/sdb6
mdadm: /dev/sda6 is too small: 0K

Yet, making ONE adjustment changes things. Switching /dev/sda6's partition type from fd (Linux raid autodetect) to 83 (plain Linux) is enough:

[root@grouper dev]# fdisk -l /dev/sdb | grep sdb6
/dev/sdb6               2        4865    39070048+  fd  Linux raid autodetect
[root@grouper dev]# fdisk -l /dev/sda | grep sd.6
/dev/sda6               2        4865    39070048+  83  Linux
[root@grouper dev]# mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sda6 /dev/sdb6
mdadm: size set to 39069952K
mdadm: array /dev/md1 started.
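The type change itself would normally be fdisk's "t" command on /dev/sda6; a non-interactive sketch with old-style sfdisk (these lines are not from my session, and newer sfdisk spells the option --part-type) looks like:

sfdisk --print-id /dev/sda 6        # shows fd before the change
sfdisk --change-id /dev/sda 6 83    # flip /dev/sda6 to plain "Linux" (0x83)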

Curiously, that’s all I had to do. The subsequent partitions (I had eight of them to RAID) just worked. Likely this is a kernel bug: once one partition on the disk has been used, the kernel ends up caching the disk label, and whatever was making the partition look zero-sized to mdadm no longer kicks in.
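If that guess is right, a workaround worth trying before running mdadm (untested speculation on my part, not something I needed here) is to make the kernel re-read the partition table and check the size it then reports:

blockdev --rereadpt /dev/sda      # ask the kernel to re-read sda's partition table
grep sda6 /proc/partitions        # the #blocks column should show roughly 39070048, not 0
blockdev --getsize64 /dev/sda6    # size in bytes, as the block layer sees it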

Oh:

[root@grouper dev]# uname -a
Linux grouper.sandelman.ca 2.6.18-1.2239.fc5xen0 #1 SMP Fri Nov 10 13:58:27 EST 2006 i686 i686 i386 GNU/Linux

Why would I make eight 40G partitions on each of a pair of 320G disks? And then RAID them in pairs, and add all of the pairs to an LVM volume group?

Well… imagine that I get some bad sectors somewhere. That means the affected RAID pair gets degraded to a single disk. If I want to keep working, and not replace the disk immediately, it turns out that I can find another 40G on another platter rather easily, and mirror onto that instead of the piece that I lost.
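For example (device names invented; this is a sketch of the move, not a transcript): if /dev/sda7 went bad under /dev/md7 and I had a spare 40G partition /dev/sdc7 lying around:

mdadm /dev/md7 --fail /dev/sda7      # mark the bad member as faulty
mdadm /dev/md7 --remove /dev/sda7    # pull it out of the mirror
mdadm /dev/md7 --add /dev/sdc7       # resync onto the borrowed 40G partition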

Then I’ll look at SMART data and other diagnostics, figure out whether the disk is really dying, and try to get it replaced; in the meantime, I feel safer.
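The SMART check is just smartmontools, something along these lines (attribute names vary by drive):

smartctl -H /dev/sda                        # overall health verdict
smartctl -a /dev/sda | grep -i reallocated  # reallocated sector counts are the ones to watch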

If, while waiting for the replacement disk, I got a second failure in the other disk (both disks are from the same vendor, and probably came sequentially off the assembly line), then I’d lose everything… unless I can degrade just part of each disk, and keep working.

[root@grouper dev]# mdadm --create --verbose /dev/md7 --level=mirror --raid-devices=2 /dev/sda7 /dev/sdb7
mdadm: size set to 39069952K
mdadm: array /dev/md7 started.
[root@grouper dev]# mdadm --create --verbose /dev/md8 --level=mirror --raid-devices=2 /dev/sda8 /dev/sdb8
mdadm: size set to 39069952K
mdadm: array /dev/md8 started.
[root@grouper dev]# mdadm --create --verbose /dev/md9 --level=mirror --raid-devices=2 /dev/sda9 /dev/sdb9
mdadm: size set to 39069952K
mdadm: array /dev/md9 started.
[root@grouper dev]# mdadm --create --verbose /dev/md10 --level=mirror --raid-devices=2 /dev/sda10 /dev/sdb10
mdadm: size set to 39069952K
mdadm: array /dev/md10 started.
[root@grouper dev]# mdadm --create --verbose /dev/md11 --level=mirror --raid-devices=2 /dev/sda11 /dev/sdb11
mdadm: size set to 39069952K
mdadm: array /dev/md11 started.
[root@grouper dev]# mdadm --create --verbose /dev/md12 --level=mirror --raid-devices=2 /dev/sda12 /dev/sdb12
mdadm: size set to 39069952K
mdadm: array /dev/md12 started.
[root@grouper dev]# mdadm --create --verbose /dev/md13 --level=mirror --raid-devices=2 /dev/sda13 /dev/sdb13
mdadm: size set to 39069952K
mdadm: array /dev/md13 started.
[root@grouper dev]# pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created
[root@grouper dev]# pvcreate /dev/md7
  Physical volume "/dev/md7" successfully created
[root@grouper dev]# pvcreate /dev/md8
  Physical volume "/dev/md8" successfully created
[root@grouper dev]# pvcreate /dev/md9
  Physical volume "/dev/md9" successfully created
[root@grouper dev]# pvcreate /dev/md10
  Physical volume "/dev/md10" successfully created
[root@grouper dev]# pvcreate /dev/md11
  Physical volume "/dev/md11" successfully created
[root@grouper dev]# pvcreate /dev/md12
  Physical volume "/dev/md12" successfully created
[root@grouper dev]# pvcreate /dev/md13
  Physical volume "/dev/md13" successfully created
[root@grouper dev]# vgcreate Grouper1 /dev/md1 /dev/md7 /dev/md8 /dev/md9 /dev/md10 /dev/md11 /dev/md12 /dev/md13
  Volume group "Grouper1" successfully created
[root@grouper dev]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  Grouper1     8   0   0 wz--n- 298.06G 298.06G
  VolGroup00   1  24   0 wz--n-  76.25G   6.16G
[root@grouper dev]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/md0   VolGroup00 lvm2 a-   76.25G  6.16G
  /dev/md1   Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md10  Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md11  Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md12  Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md13  Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md7   Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md8   Grouper1   lvm2 a-   37.26G 37.26G
  /dev/md9   Grouper1   lvm2 a-   37.26G 37.26G
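
From here it is ordinary LVM: carve logical volumes out of Grouper1 as needed. A made-up example (the name and size are arbitrary):

lvcreate -L 100G -n scratch Grouper1    # allocate a 100G logical volume called "scratch"
mkfs -t ext3 /dev/Grouper1/scratch      # put a filesystem on it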