
[centos] softraid recovery test (key points)


Recovery test (forcibly fail and remove the /dev/sdb members of the RAID 1 arrays)


mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm --manage /dev/md1 --fail /dev/sdb2

mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1

mdadm --manage /dev/md1 --remove /dev/sdb2

mdadm --manage /dev/md2 --remove /dev/sdb3
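The six commands above can be collapsed into one loop. This is a sketch, not from the original article: the array/partition pairs follow this tutorial's layout (md0/sdb1, md1/sdb2, md2/sdb3), and the mdadm calls are only echoed so the loop can be dry-run safely.

```shell
#!/bin/sh
# Sketch: fail and remove each /dev/sdb member, one array at a time.
# The mdadm commands are echoed (dry run); drop the echo to run them.
cmds=""
for pair in md0:sdb1 md1:sdb2 md2:sdb3; do
    md=${pair%%:*}        # array name, e.g. md0
    part=${pair#*:}       # member partition, e.g. sdb1
    cmds="$cmds
mdadm --manage /dev/$md --fail /dev/$part
mdadm --manage /dev/$md --remove /dev/$part"
done
printf '%s\n' "$cmds"
```

Adjust the pairs to match your own array layout before running for real.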


Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[0]

      4594496 blocks [2/1] [U_]


md1 : active raid1 sda2[0]

      497920 blocks [2/1] [U_]


md0 : active raid1 sda1[0]

      144448 blocks [2/1] [U_]


unused devices: <none>

server1:~#
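A degraded array is easy to spot mechanically: its status field contains a `_` for each missing member (e.g. `[U_]` or `[_U]`). As a sketch (not part of the original article), the check below counts degraded arrays; a copy of the sample output above is embedded so it is self-contained, but on a live system you would read /proc/mdstat instead.

```shell
#!/bin/sh
# Sketch: count degraded md arrays by looking for a missing member ("_")
# in the [UU]-style status field. Sample text copied from the article;
# replace the here-doc with /proc/mdstat on a real system.
mdstat=$(cat <<'EOF'
Personalities : [raid1]
md2 : active raid1 sda3[0]
      4594496 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
      497920 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
      144448 blocks [2/1] [U_]
unused devices: <none>
EOF
)
degraded=$(printf '%s\n' "$mdstat" | grep -c 'blocks \[[0-9]*/[0-9]*\] \[U*_')
echo "degraded arrays: $degraded"
```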

The output of

fdisk -l

should look as follows:

server1:~# fdisk -l


Disk /dev/sda: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          18      144553+  fd  Linux raid autodetect

/dev/sda2              19          80      498015   fd  Linux raid autodetect

/dev/sda3              81         652     4594590   fd  Linux raid autodetect


Disk /dev/sdb: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


Disk /dev/sdb doesn't contain a valid partition table


Disk /dev/md0: 147 MB, 147914752 bytes

2 heads, 4 sectors/track, 36112 cylinders

Units = cylinders of 8 * 512 = 4096 bytes


Disk /dev/md0 doesn't contain a valid partition table


Disk /dev/md1: 509 MB, 509870080 bytes

2 heads, 4 sectors/track, 124480 cylinders

Units = cylinders of 8 * 512 = 4096 bytes


Disk /dev/md1 doesn't contain a valid partition table


Disk /dev/md2: 4704 MB, 4704763904 bytes

2 heads, 4 sectors/track, 1148624 cylinders

Units = cylinders of 8 * 512 = 4096 bytes


Disk /dev/md2 doesn't contain a valid partition table

server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb

Checking that no-one is using this disk right now ...

OK


Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track


sfdisk: ERROR: sector 0 does not have an msdos signature

 /dev/sdb: unrecognized partition table type

Old situation:

No partitions found

New situation:

Units = sectors of 512 bytes, counting from 0


   Device Boot    Start       End   #sectors  Id  System

/dev/sdb1   *        63    289169     289107  fd  Linux raid autodetect

/dev/sdb2        289170   1285199     996030  fd  Linux raid autodetect

/dev/sdb3       1285200  10474379    9189180  fd  Linux raid autodetect

/dev/sdb4             0         -          0   0  Empty

Successfully wrote the new partition table


Re-reading the partition table ...


If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)

to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1

(See fdisk(8).)

server1:~#
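Before touching the RAID superblocks it is worth confirming that the clone actually worked. This is a sketch of my own (not from the article): it compares two old-style `sfdisk -d` dumps with the device names masked out; on a real system you would feed it the output of `sfdisk -d /dev/sda` and `sfdisk -d /dev/sdb`, while here two fragments modeled on the sector numbers above stand in.

```shell
#!/bin/sh
# Sketch: the two disks match when their sfdisk dumps are identical
# once the /dev/sdX names are normalized away.
normalize() { sed 's#/dev/sd[a-z]#DISK#g'; }
dump_sda='/dev/sda1 : start=63, size=289107, Id=fd, bootable
/dev/sda2 : start=289170, size=996030, Id=fd
/dev/sda3 : start=1285200, size=9189180, Id=fd'
dump_sdb='/dev/sdb1 : start=63, size=289107, Id=fd, bootable
/dev/sdb2 : start=289170, size=996030, Id=fd
/dev/sdb3 : start=1285200, size=9189180, Id=fd'
a=$(printf '%s\n' "$dump_sda" | normalize)
b=$(printf '%s\n' "$dump_sdb" | normalize)
if [ "$a" = "$b" ]; then
    echo "partition tables match"
else
    echo "MISMATCH - do not proceed"
fi
```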

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1

mdadm --zero-superblock /dev/sdb2

mdadm --zero-superblock /dev/sdb3

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1

mdadm -a /dev/md1 /dev/sdb2

mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[2] sda3[0]

      4594496 blocks [2/1] [U_]

      [======>..............]  recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec


md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]


md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]


unused devices: <none>

server1:~#
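If you want to follow the rebuild from a script rather than by rereading /proc/mdstat, the progress percentage can be pulled out of the recovery line. A small sketch (the sample line is copied from the output above; on a live system you would grep it out of /proc/mdstat in a loop):

```shell
#!/bin/sh
# Sketch: extract the recovery percentage from an mdstat progress line.
line='      [======>..............]  recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec'
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "recovery at ${pct}%"
```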

Wait until the synchronization has finished:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[1] sda3[0]

      4594496 blocks [2/2] [UU]


md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]


md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]


unused devices: <none>

server1:~#
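Instead of re-running `cat /proc/mdstat` by hand, the "has the sync finished?" test can be scripted. The helper below is an assumption of mine, not part of the article: it reads mdstat-style text on stdin and succeeds only when no array shows a missing member or an active recovery/resync line; on a real system you could loop `while ! is_synced < /proc/mdstat; do sleep 10; done`.

```shell
#!/bin/sh
# Sketch: report whether mdstat-style text on stdin shows every array
# fully synced (no "_" in the [UU] field, no recovery/resync line).
is_synced() {
    ! grep -Eq '\[U*_[U_]*\]|recovery|resync'
}
if printf '      144448 blocks [2/2] [UU]\n' | is_synced; then
    echo "all arrays in sync"
fi
```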

Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)

setup (hd0)

root (hd1,0)

setup (hd1)

quit
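A note of my own, not in the original article: the interactive `root`/`setup` shell above is legacy GRUB (0.9x), as shipped with Debian Etch and CentOS 5. On GRUB 2 systems (CentOS 7 and later) the equivalent step is `grub2-install`, run once per member disk. Echoed here as a dry run:

```shell
#!/bin/sh
# Sketch: GRUB 2 equivalent of the legacy root/setup steps above.
# Commands are echoed (dry run); drop the echo to install for real.
out=$(for disk in /dev/sda /dev/sdb; do
    echo grub2-install "$disk"
done)
printf '%s\n' "$out"
```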

That's it. You've just replaced a failed hard drive in your RAID1 array.

 



How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch) - Page 3


6 Preparing GRUB (recovery)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)

 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.


grub>

root (hd1,0)

grub> root (hd1,0)

 Filesystem type is ext2fs, partition type 0xfd


grub>

setup (hd1)

grub> setup (hd1)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.


grub>

quit

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot


Original source: <http://www.howtoforge.com/software-raid1-grub-boot-debian-etch-p2>



