Adding a RAID 1 To An Existing Disk With Data


Hi all, after googling lots of sites and countless hours of research, with many bad results, I decided to take everything I had found and create a real scenario, using one of my servers, to build a RAID 1 mirror for an active disk.

This is the actual account of my server and the addition of an extra disk to create what I call an
"on the fly" RAID 1 mirror. I hope it is of value, and that it saves Linux / CentOS v6.5 newcomers some hard times.

——————————————————————————–
NOTE:    Everything has to be done as root:
su
enter root password

In this example the initial layout for the hard disks was:
Disk with Linux CentOS v6.5 installed: 2TB "sda" (original); we are adding a 2TB "sdb" (new) for the mirror.
——————————————————————————–
Device             Mountpoint
/dev/sda1         “/boot”
/dev/sda2         “/”
/dev/sda3         “/var”
/dev/sda4         “/home”

We will be adding the hard disk: /dev/sdb, as our “New Target Disk”, for the mirror.
——————————————————————————–
1. First and foremost
"BACK UP EVERYTHING YOU REQUIRE!"
(as you may need to retrieve your data if or after you crash the conversion, trust me!)
NOTE:     I use Clonezilla and SystemRescueCD. Otherwise, continue "BRAVELY!"
————————————-
2. Verify Your Backup!
Verify your backup, make sure it works!!!
————————————-
3. Create the identical partitions
Create the partitions on /dev/sdb identical to the partitions on /dev/sda:
sfdisk -d /dev/sda | sfdisk /dev/sdb
CentOS 6.5 returned me this error:
sfdisk: I don't like these partitions - nothing changed.
(If you really want this, use the --force option.)
NOTE:     If you get this error you will have to add the "--force" option to the end of the command;
it will then force the dump of sda onto sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb --force
or
sfdisk -d /dev/sda | sfdisk /dev/sdb -f
Checking that no-one is using this disk right now …
OK
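To be safe, you can verify that the cloned partition table really matches before going on. Here is a small helper of my own (a sketch, not part of the original steps; the `compare_part_dumps` name is made up) that diffs two `sfdisk -d` dumps while ignoring the device names:

```shell
# compare two sfdisk dumps, ignoring the /dev/sdX device names, so only
# the partition geometry (start, size, Id) is compared
compare_part_dumps() {
  diff <(sed 's|/dev/sd[a-z]||g' "$1") <(sed 's|/dev/sd[a-z]||g' "$2") >/dev/null
}

# usage (as root):
#   sfdisk -d /dev/sda > /tmp/sda.dump
#   sfdisk -d /dev/sdb > /tmp/sdb.dump
#   compare_part_dumps /tmp/sda.dump /tmp/sdb.dump && echo identical || echo differ
```

If the dumps differ, re-run the clone before creating any arrays.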
————————————-
4. Load kernel modules required
We'll need to load a few kernel modules, if they are not loaded already (this avoids a reboot):
modprobe linear
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid10

NOTE:     Make sure you have the xfs* utilities installed (yum -y install xfs*) if you plan on using the XFS file system.
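If you only want to load the modules that are actually missing, a little helper like this can read the `lsmod` output and print the stragglers (this is my own sketch; the `missing_modules` name is made up):

```shell
# read "lsmod" output on stdin and print the RAID personalities that are
# not yet loaded (lsmod lines begin with the module name)
missing_modules() {
  local loaded m
  loaded=$(cat)
  for m in linear raid0 raid1 raid5 raid10; do
    printf '%s\n' "$loaded" | grep -q "^$m[[:space:]]" || echo "$m"
  done
}

# usage (as root): load only what is missing
#   lsmod | missing_modules | xargs -r -n1 modprobe
```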
————————————-
5. Retrieve information using the new modules.
Once you have finished loading the kernel modules, run:
cat /proc/mdstat
The output should look similar to the following:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
unused devices: <none>
Here we see that the RAID kernel modules are now working, but there are no RAID sets yet.
——————————————————————————–
6. So let's start.
NOTE #1:
This is my partition configuration; check yours and replace the partitions as appropriate to suit your requirements.
NOTE #2:
If you want to use Grub 0.97 (aka "GRUB 1", the default in CentOS 5.x and 6.x) on RAID 1, you need to specify an older metadata version than the default. Add the option "--metadata=0.90" to the mdadm commands below. Otherwise Grub will respond with "file system type unknown, partition type 0xfd" and refuse to install. This is supposedly not necessary with Grub 2.
NOTE #3:
You can number the md devices anything you like between 0 and 100. I don't use numbers greater than this, as I have noticed that when the system auto-assembles md devices it usually starts at 127.
While I don't know what the actual maximum value is, if you stick to something sensible you'll be OK. Most people start with 0, but I find that a bit confusing, as I like to have a reference to which partitions I am mirroring; again, this is purely personal choice.
Running the following commands:
I will create "--metadata=0.90" versions of the RAID 1 mirrors; I prefer this for GRUB 1 compatibility (see NOTE #2). If you are using GRUB 2, you can use the command without "--metadata=0.90" on the command line. I am also going to add names to the commands to save doing it later.
————————————————————————————————————————————————–
Example:
(/boot) without
mdadm --create /dev/md1 --name=boot --level=1 --raid-disks=2 /dev/sdb1 missing
(/boot) with
mdadm --create /dev/md1 --metadata=0.90 --name=boot --level=1 --raid-disks=2 /dev/sdb1 missing
————————————————————————————————————————————————–
(/boot)
mdadm --create /dev/md1 --metadata=0.90 --name=boot --level=1 --raid-disks=2 /dev/sdb1 missing
(/)
mdadm --create /dev/md2 --metadata=0.90 --name=root --level=1 --raid-disks=2 /dev/sdb2 missing
(/var)
mdadm --create /dev/md3 --metadata=0.90 --name=var --level=1 --raid-disks=2 /dev/sdb3 missing
(/home)
mdadm --create /dev/md9 --metadata=0.90 --name=home --level=1 --raid-disks=2 /dev/sdb4 missing

NOTE:     "md9": I called it this as it is eventually going to become a different partition for me, but currently it is set as "/home"; either way I will still create it.
I'll make my changes after I move "/home" to a different, larger partition.
After every command you should see output on your screen similar to this:
mdadm --create /dev/md9 --metadata=0.90 --name=home --level=1 --raid-disks=2 /dev/sdb4 missing
mdadm: array /dev/md9 started

You have mail in /var/spool/mail/root
This now completes the creation of md devices 1 to 4, each in a degraded state, as if the second half of the mirror (the slot "sda" will later fill) were missing.
——————————————————————————–
7. Checking the md state.
Now let's check the output of the md state; it should look similar to this:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]

md1 : active raid1 sdb1[0]
1048512 blocks [2/1] [U_]
md2 : active raid1 sdb2[0]
409599936 blocks [2/1] [U_]
md3 : active raid1 sdb3[0]
767999936 blocks [2/1] [U_]
md9 : active raid1 sdb4[0]
767999936 blocks [2/1] [U_]
unused devices: <none>
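The "[U_]" markers in that output can also be checked programmatically. As a sketch of my own (the `mdstat_summary` name is made up), this awk snippet reports each array's mirror state from /proc/mdstat-style text:

```shell
# report each md array's mirror state from /proc/mdstat text on stdin:
# "[U_]" means the mirror is degraded (second member missing), "[UU]" healthy
mdstat_summary() {
  awk '
    /^md/    { name = $1 }              # e.g. "md1 : active raid1 sdb1[0]"
    /\[U_\]/ { print name ": degraded" }
    /\[UU\]/ { print name ": healthy" }
  '
}

# usage: mdstat_summary < /proc/mdstat
```

At this point in the procedure every array should report "degraded", which is expected.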
————————————-
8. Creating our “mdadm.conf” file.
OK, all looks good, so let's create "/etc/mdadm.conf" from our current configuration:
mdadm –detail –scan > /etc/mdadm.conf
————————————-
9. Checking the newly created mdadm.conf contents.
Let's look at the contents of the newly created mdadm.conf file:
cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=0.90 UUID=720be9e1:7c5fc9bf:fb12f413:e44a9204
ARRAY /dev/md2 metadata=0.90 UUID=a1f08519:c406ec5b:fb12f413:e44a9204
ARRAY /dev/md3 metadata=0.90 UUID=e31f5895:94b43a91:fb12f413:e44a9204
ARRAY /dev/md9 metadata=0.90 UUID=b08df225:213d2248:fb12f413:e44a9204
NOTE: Make a copy of these UUIDs (or copy & paste them into a text file). You will need them later.
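If you'd rather not copy the UUIDs by hand, you can pull them out of the file directly. A quick sketch (the `extract_uuids` name and the /root/md-uuids.txt path are my own choices):

```shell
# extract just the UUID=... tokens from mdadm.conf-style text on stdin
extract_uuids() {
  grep -o 'UUID=[0-9a-f:]*'
}

# usage:
#   extract_uuids < /etc/mdadm.conf > /root/md-uuids.txt
```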
————————————-
10. Creating the new "initramfs" images.
Next, we have to create new "initramfs" images.
To do this we will use "dracut" to rebuild the initramfs with the new mdadm.conf configuration:

NOTE:     First let's rename the current initramfs to ".old", or if you're like me, ".old-pre-raid".
mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old-pre-raid

Now create the new image:
dracut --mdadmconf --force /boot/initramfs-$(uname -r).img $(uname -r)

OK, once that's done, the currently running kernel version has a RAID-enabled initramfs.
——————————————————————————–
11. Creating the file systems on new MD’s.
Now comes the fun part, creating the file systems on these new software raid devices:

NOTE:     The XFS file system for "/boot" & "/" is only supported in CentOS v7.x and above, as far as I am aware,
but the rest can be done using the XFS file system if you like. Make sure you have the xfs* utilities installed; if not, check with (yum -y install xfs*). If you can't find them, you will need to add the EPEL repo for the CentOS version you are using. Google "Additional CentOS 6/7 repos".
Run the following bare commands to create the file systems on the md devices (skipping fancy stuff like labels):

mkfs.ext2 /dev/md1         (for "/boot", "ext2" is good)
(Example of what you should see once the command runs; "ext2 / ext4 / xfs" have similar outputs, just look out for any errors.)
"ext2 output"
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=0 blocks
65536 inodes, 262128 blocks
13106 blocks (5.00%) reserved for the super user
First data block=0
Maximum file system blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mkfs.ext4 /dev/md2         (for "/", "ext4" is good)
mkfs.ext4 /dev/md3         (for "/var", "ext4" or "xfs" is good)
mkfs.ext4 /dev/md9         (for "/home", "ext4" or "xfs" is good)
NOTE:    (examples using "xfs" as your file system)
mkfs.xfs /dev/md3         (for "/var" using "xfs")
mkfs.xfs /dev/md9         (for "/home" using "xfs")

"xfs output"
mkfs.xfs /dev/md3
meta-data=/dev/md3               isize=256    agcount=32, agsize=6000000 blks
         =                       sectsz=4096  attr=2, projid32bit=0
data     =                       bsize=4096   blocks=191999984, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=93749, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

If you wish, you can also create mirrored swap partitions, using the command below.
These days I personally don't use swap partitions; I have adopted the "swapfile" concept, so I will skip this.
(You can find more on "swapfiles" using the link in the acknowledgements.)
NOTE:     If you want swap partitions on both drives for performance, use the following command:
mkswap -c /dev/sda2
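Since I mentioned the swapfile concept: for completeness, here is a minimal sketch of how a swapfile can be set up (the path, the size, and the `mkswapfile` helper name are my own assumptions, not part of the original steps):

```shell
# create a swapfile of the given size in MiB (path and size are assumptions)
mkswapfile() {
  local path="$1" mb="$2"
  dd if=/dev/zero of="$path" bs=1M count="$mb" 2>/dev/null
  chmod 600 "$path"                 # swap files must not be world-readable
  mkswap "$path" >/dev/null
}

# usage (as root):
#   mkswapfile /swapfile 2048
#   swapon /swapfile
#   echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```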
——————————————————————————–
12. Copy the data from the existing (and still running) partitions
Copy the data from the existing (and still running) partitions to the newly created raid partitions:
(Please be patient, as these copy steps can take some time to complete, depending on the amount of data your disks have.)
NOTE: I have a habit of creating a mount point for each of the RAID md's; that way I keep things systematic and don't overwrite any data I transfer. (eg: "/mnt/raidmd1", "/mnt/raidmd2", "/mnt/raidmd3", "/mnt/raidmd9")
MD1 (/boot)
mkdir /mnt/raidmd1
mount /dev/md1 /mnt/raidmd1
cd /boot; find . -depth | cpio -pmd /mnt/raidmd1
NOTE:    If SELinux is in use, also do this before continuing with the next steps: "touch /mnt/raidmd1/.autorelabel".
sync
umount /mnt/raidmd1
MD2 (/)
mkdir /mnt/raidmd2
mount /dev/md2 /mnt/raidmd2
cd / ; find . -depth -xdev | grep -v '^\./tmp/' | cpio -pmd /mnt/raidmd2
sync
umount /mnt/raidmd2
NOTE:     As we really do not want to copy the files inside the "/tmp" and "/var/tmp" mounts,
the command above excludes them, while still creating empty mount points like 'proc' or 'dev' and not forgetting things like /.autofsck.
MD3 (/var)
mkdir /mnt/raidmd3
mount /dev/md3 /mnt/raidmd3
cd /var; find . -depth | cpio -pmd /mnt/raidmd3
sync
umount /mnt/raidmd3
MD9 (/home)
mkdir /mnt/raidmd9
mount /dev/md9 /mnt/raidmd9
cd /home; find . -depth | cpio -pmd /mnt/raidmd9
sync
umount /mnt/raidmd9
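After each copy, a quick sanity check is to compare the number of entries in the source and the copy while both are still mounted. A tiny helper sketch of my own (the `count_entries` name is made up):

```shell
# count files/dirs under a path, staying on one filesystem (like find -xdev)
count_entries() {
  find "$1" -xdev | wc -l
}

# usage, e.g. after the /boot copy (the two counts should match):
#   count_entries /boot
#   count_entries /mnt/raidmd1
```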
——————————————————————————–
13. Getting the new MD’s UUID’s
Open another console window and run:
blkid | grep /dev/md
Here you will see the UUID for each md type file system. It should look something like this:
/dev/md1: LABEL="boot" UUID="ecb6f984-bcbc-4976-ab6f-713a43e694c2" TYPE="ext2"
/dev/md2: LABEL="root" UUID="8c9db010-287b-4c67-82af-665a8ec28625" TYPE="ext4"
/dev/md3: LABEL="var" UUID="239d520c-ba59-471a-b2f1-a352acc7727a" TYPE="xfs"
/dev/md9: LABEL="home" UUID="01abc9e9-7b57-47d2-aac8-13a5cd7568f8" TYPE="xfs"

Make a copy of the UUIDs for /dev/md1 to /dev/md9 (I copy them into a txt file on my desktop).
NOTE:     Pay careful attention to the commented lines; don't get them mixed up, or you will not be able to boot and mount the file systems needed.
So...
"mount /dev/md2 /mnt/raidmd2"
(as this is the new md "/" file system, and we need to access the fstab on it).
Now open the "/mnt/raidmd2/etc/fstab" file so we can edit it.
You can use any text editor you like; I use nano. So let's go ahead and add the new line entries we need into the fstab, as I have (see below).
Now, in "/mnt/raidmd2/etc/fstab", comment out the line containing the current mount point "/boot",
and add a new line with the UUID of the new "/dev/md1" file system:
--> #/dev/sda1     /boot         ext2         defaults         1 1
UUID=0b0fddf7-1160-4c70-8e76-5a5a5365e07d     /boot     ext2     defaults     1 1
Repeat this for the new "/" entry too, adding the UUID of the file system on the new "/" md2 device:
--> #/dev/sda2 /             ext4         defaults         1 1
UUID=36d389c4-fc0f-4de7-a80b-40cc6dece66f     /     ext4     defaults         1 1

Continue by adding the new "/var" (md3) & "/home" (md9) entries using the new UUIDs. You can keep the existing lines for the "/var" and "/home" mounts intact if you like, and come back to them later after you have tested everything, but I do them all at once.

Add the new lines for mounting "/var" & "/home"
(but this time keep the NEW entries commented for the moment, as shown below, so we can do a test boot):
/dev/sda3    /var         xfs         defaults         1 2
-->    #UUID=47fbbe32-c756-4ea6-8fd6-b34867be0c84     /var         xfs     defaults     1 2

/dev/sda4     /home         xfs         defaults         1 2
-->    #UUID=f92cc249-c1af-456b-a291-ee1ea9ef8e22     /home         xfs     defaults     1 2

I have included a sample of mine for you below.
🙂
——————————————————————————–
NOTE: Here is my example. I've used the UUIDs for all my entries; this is the way to guarantee that you boot up with the disks you want.
# /etc/fstab
# Created by anaconda on Thu Oct  2 13:40:18 2014
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
#
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

# original entry '/dev/sda1 /boot'
#UUID=2e706331-a2ca-43ef-a8c7-b9e9eaecdad9 /boot                   ext2    defaults        1 2

# New entry for '/dev/md1: LABEL="boot"'
UUID=ecb6f984-bcbc-4976-ab6f-713a43e694c2 /boot                  ext2    defaults    1 2

# original entry '/dev/sda2 /'
#UUID=7b11ed4d-d5f6-4da8-8d35-9d51842e38f8 /                       ext4    defaults        1 1

# New entry for '/dev/md2: LABEL="root"'
UUID=8c9db010-287b-4c67-82af-665a8ec28625 /                      ext4    defaults    1 2

# original entry '/dev/sda3 /var'
#UUID=66203016-2a63-4610-a887-ccea3b23e586 /var                    xfs     defaults        1 2

# New entry for '/dev/md3: LABEL="var"'
UUID=239d520c-ba59-471a-b2f1-a352acc7727a /var                  xfs     defaults        1 2

# original entry '/dev/sda4 /home'
#UUID=986cb564-ebd2-41bd-8d5c-553b3b0c70f7 /home                   xfs     defaults        1 2

# New entry for '/dev/md9: LABEL="home"'
UUID=01abc9e9-7b57-47d2-aac8-13a5cd7568f8 /home                 xfs     defaults        1 2

Now save your changes, and unmount "raidmd2":
umount /mnt/raidmd2

Next we will prepare to mount the new "/boot" on "/dev/md1":
mount /dev/md1 /mnt/raidmd1
This way we can make the necessary changes to grub, so we can tell the boot loader where the "/" file system ("/dev/md2") is at the next boot.
——————————————————————————–
14. Modify the grub "/boot" information
Mount /dev/md1 again on /mnt/raidmd1:

mount /dev/md1 /mnt/raidmd1
(as this is the new md "/boot" file system, and we need to access the grub boot loader on it).

Now, normally we should just edit "/boot/grub/menu.lst" on the new "md1" so the kernel boots with the new "md" settings, but for some reason I was having trouble, so I edited "grub.conf" directly (on CentOS, "menu.lst" is normally just a symlink to "grub.conf").

Look for the line with the original entry that has information similar to the following
(as you can see from my title section, it shows which kernel version I'm running). I have highlighted the edits I made, in the hope that it gives you an idea of what's required.
(Hey, I know there's probably an alternative way of doing this, but it's all I found googling.)

NOTE:     Remember our md setup: normally most people would start with "md0" (zero), but we started with md1,
so use "root=<md2's UUID>", or you can use "root=/dev/md2".
title CentOS_v6.5 (2.6.32-431.29.2.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=UUID=8c9db010-287b-4c67-82af-665a8ec28625    rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16     crashkernel=128M KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb raid=noautodetect quiet
initrd /initramfs-2.6.32-431.29.2.el6.x86_64.img

NOTE:    Create a new entry in the menu so you can boot by way of a menu choice:
just copy and paste the current one, add it underneath, and make the necessary modifications.
(You can use the "UUID" or the device name: "md1", "sda1", "sda2", etc.)
Here's my new entry. Note the changes: I've chosen to use the device name and to keep the "raid=noautodetect" entry, which stops CentOS 6.5 from trying to auto-discover and mount RAID sets.

title CentOS_v6.5 (2.6.32-431.29.2.el6.x86_64 "Boot From MD2" )
root (hd0,0)
kernel /vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=/dev/md2 LANG=en_US.UTF-8     SYSFONT=latarcyrheb-sun16 crashkernel=128M KEYBOARDTYPE=pc KEYTABLE=us rhgb     raid=noautodetect
initrd /initramfs-2.6.32-431.29.2.el6.x86_64.img
That way, if anything goes wrong with the boot, you can reboot and choose the original boot option.
In short: take the original line
--> "kernel THE-PATH-TO-KERNEL ro root=/dev/sda2 WITH-A-LOT-OF-OTHER-OPTIONS",
copy it, paste the copy underneath the original entry, and change the copy to
--> "kernel THE-PATH-TO-KERNEL ro root=/dev/md2 WITH-A-LOT-OF-OTHER-OPTIONS"

NOTE: Check that there are no other options that might EXCLUDE md devices from starting!
Make BACKUPS of these files in case you have to restore them in a hurry.
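These backups can be scripted so nothing is forgotten. A sketch of my own (the ".pre-raid" suffix matches the convention used in this guide; the `backup_files` name is made up):

```shell
# copy each existing file to "<file>.pre-raid", preserving mode/ownership;
# files that don't exist are silently skipped
backup_files() {
  local f
  for f in "$@"; do
    [ -f "$f" ] && cp -p "$f" "$f.pre-raid"
  done
  return 0
}

# usage (as root):
#   backup_files /boot/grub/grub.conf /boot/grub/menu.lst /etc/fstab /etc/mdadm.conf
```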
Also, to make sure the system will boot from the new md RAID array, we need to copy the following files into the currently active system:
mount /dev/md1 /mnt/raidmd1

Let's do the grub files first:
mv /boot/grub/menu.lst /boot/grub/menu.lst.pre-raid
cp -p /mnt/raidmd1/grub/menu.lst /boot/grub/menu.lst

mv /boot/grub/grub.conf /boot/grub/grub.conf.pre-raid
cp -p /mnt/raidmd1/grub/grub.conf /boot/grub/grub.conf

Now for the fstab:
mount /dev/md2 /mnt/raidmd2
mv /etc/fstab /etc/fstab.pre-raid

cp -p /mnt/raidmd2/etc/fstab /etc/fstab
Now the moment of truth...! Let's REBOOT.
——————————————————————————–
15. Reboot the machine.
————————————-
Either use the boot-menu function key during boot-up to change the boot disk (from "sda" to "sdb"), or enter the system BIOS and choose the new disk as the one your system boots from; save the BIOS setting and boot.

Here's to all going well.
Take a break, you're going to need ...
But wait, there's more...
——————————————————————————–
Welcome back…!
16. Now to boot up from the newly created MDs
Assuming the reboot went smoothly, and you chose to boot up from the newly
created md disk "sdb", we now change the existing partitions of the old (original) drive "sda" to be RAID device partitions. Let's check the partition tables to confirm which disk is the old (original) "sda", and which is the new "sdb":

Use the command     "fdisk -l /dev/sda"
This command will return the partition information, but it could also return a partition error. Don't worry; this is because "fdisk" reads the disk in cylinder mode, and modern systems use LBA (Logical Block Addressing) instead of CHS (Cylinder/Head/Sector) to address disk drives. So let's use "sfdisk" this time to view the partition table using sectors instead of cylinders:

Use the command     "sfdisk -uS -l /dev/sda"
Now examine the output to see which disk has partitions of "type 83 Linux" (the default Linux partition type). That one is our old (original) "sda" system disk. Using fdisk, cfdisk or parted, we now need to change the partition type to "0xfd" ("fd" as "fdisk" lists it) on all four of this disk's partitions. We'll use fdisk's command mode.

Note:     If you have a desktop GUI installed, you can use tools like "Gparted" for this too.
(I'll post a separate "how to perform various tasks with Gparted" later.)

“fdisk /dev/sda”
(output)
The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help):
Now you're in "command mode"; typing "m" at the prompt will list all the actions you can perform. Then type "p" to see the partition table and partition types.

(output)
Command action
a   toggle a bootable flag
b   edit bsd disklabel
c   toggle the dos compatibility flag
d   delete a partition
l   list known partition types
m   print this menu
n   add a new partition
o   create a new empty DOS partition table
p   print the partition table
q   quit without saving changes
s   create a new empty Sun disklabel
t   change a partition's system id
u   change display/entry units
v   verify the partition table
w   write table to disk and exit
x   extra functionality (experts only)

Command (m for help): p
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0004ce27

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131       51124   409600000   83  Linux
/dev/sda3           51124      146736   768000000   83  Linux
/dev/sda4          146736      242347   768000000   83  Linux

Then, for each of the four partitions on this disk: take option "t", select the partition number, and enter "fd" as the new type. Once all four are changed, take option "w" and press enter to write the changes. We will let the kernel re-read the changed partition table after we reboot; it's safer that way. So let's see if we can boot correctly. If you have booted correctly, back onto "md2", continue (otherwise review the stages and see if you can work out what went wrong).
——————————————————————————–
16 (Part B)
NOTE:     I am assuming here that we are still on "sdb" (in other words, you booted using the md RAID disk),
and that the disk we are working on still shows as "/dev/sda"; let's update its tables.

Now let's run the command      "partprobe"
Note:    ("partprobe" was commonly used in RHEL 5 to inform the OS of partition table changes on a disk.
In RHEL 6, it will only trigger the OS to update the partitions on a disk if none of that disk's partitions are in use
(e.g. mounted). If any partition on the disk is in use, "partprobe" will not trigger the OS to update the partitions
in the system, because it is considered unsafe in some situations.)
Note:     You should see something similar to the following on your screen:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).
As a result, it may not reflect all of your changes until after reboot.

Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy).
As a result, it may not reflect all of your changes until after reboot.

Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdc (Device or resource busy).
As a result, it may not reflect all of your changes until after reboot.

Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdd (Device or resource busy).
As a result, it may not reflect all of your changes until after reboot.
NOTE:    Once again, I am assuming that the old disk still shows up as "sda".
Now it's time to add the newly modified partitions to the RAID arrays to make them complete:
mdadm /dev/md1 -a /dev/sda1
mdadm /dev/md2 -a /dev/sda2
mdadm /dev/md3 -a /dev/sda3
mdadm /dev/md9 -a /dev/sda4
To see what's going on, use the following command in a new console window as "root".
(The output should look similar to the one below and will be updated every 5 seconds.)
watch -n 5 cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[1] sda1[0]
473792 blocks [2/2] [UU]
[===>……………..] recovery 25.0% (118448/473792) finish=2.4min speed=2412

md2 : active raid1 sdb2[1] sda2[0]
4980032 blocks [2/2] [UU]
resync=DELAYED

md3 : active raid1 sdb3[1] sda3[0]
3349440 blocks [2/2] [UU]
resync=DELAYED

md9 : active raid1 sdb4[1] sda4[0]
80192 blocks [2/2] [UU]
resync=DELAYED
unused devices: <none>

As soon as all the md devices are done with the recovery process, your system is, in essence, up and running.
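If you'd rather not sit watching the progress bars, the rebuild can be polled until it finishes. A small sketch of my own (the `wait_for_sync` name and the 30-second interval are my choices):

```shell
# poll an mdstat file until no resync/recovery is reported, then announce;
# defaults to /proc/mdstat, but accepts a path so it can be tested on a file
wait_for_sync() {
  local mdstat="${1:-/proc/mdstat}"
  while grep -Eq 'resync|recovery' "$mdstat" 2>/dev/null; do
    sleep 30
  done
  echo "all md devices in sync"
}

# usage: wait_for_sync      (polls /proc/mdstat every 30 seconds)
```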
——————————————————————————–
(cont..)
Next we are going to add performance and redundancy, so we need a few additional steps to make sure we have a healthy running system in place.
Most of all, your system needs to be able to boot even if the first hard disk fails; for this to happen, the following steps need to be performed:

***WARNING – THESE INSTRUCTIONS ASSUME YOU ARE USING THE OLD STYLE GRUB (GRUB 1)***
(I currently don’t have “GRUB2” instructions, but I’m confident you will find lots of instructions
and tutorials on the “GRUB 2” steps on the internet, if required.)
——————————————————————————–

17. Create a boot record on the second hard disk.


To create a boot record on the second hard disk, start a grub shell:
grub
grub>

Set the root device temporarily to the second disk (sda):
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd1)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf" ... succeeded
Done.
——————————————————————————–
Repeat for the first disk (currently sdb):

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)

Checking if "/boot/grub/stage1" exists ... no
Checking if "/grub/stage1" exists ... yes
Checking if "/grub/stage2" exists ... yes
Checking if "/grub/e2fs_stage1_5" exists ... yes
Running "embed /grub/e2fs_stage1_5 (hd0)" ... 16 sectors embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf" ... succeeded
Done.

grub> quit
Reboot the system:

reboot
————————————-

It should boot without problems.

If so, let's power down and test that the system can boot from either disk.
Power down, disconnect the first disk (sda) and try again. Does it boot?

If so, power down again, reconnect the first disk, disconnect the second disk (sdb) and try again. Does it boot?

——————————————————————————–
