Today I'd like to write up how to configure software RAID using disks larger than 2TB. The setup below uses 3TB disks to build a RAID6 array.
[root@unas ~]# lsscsi
[0:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sda
[1:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sdb
[2:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sdc
[3:0:0:0]    disk    ATA      ST3000DM001-9YN1 CC4C  /dev/sdd
[4:0:0:0]    disk    ATA      PLEXTOR PX-128M6 1.01  /dev/sde
For disks larger than 2TB you should use parted with a GPT partition table rather than fdisk, since an MBR partition table cannot address partitions beyond 2TiB. Below is how to create the partition table for the software RAID members.
[root@unas ~]# parted /dev/sdd
GNU Parted 3.1
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) unit TB
(parted) mkpart
Partition name?  []? bay4
File system type?  [ext2]?
Start? 0
End? 3TB
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt)  [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) print
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdd: 3.00TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      0.00TB  3.00TB  3.00TB               bay4  raid

(parted) quit
Information: You may need to update /etc/fstab.
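The same steps can also be scripted non-interactively for the remaining disks. A minimal sketch under assumptions: the bay names and the 0%/100% boundaries are illustrative and were not part of the original session, so adjust them to your layout.

# Assumed non-interactive equivalent of the interactive parted session above
parted -s -a optimal /dev/sda mklabel gpt mkpart bay1 0% 100% set 1 raid on
parted -s -a optimal /dev/sdb mklabel gpt mkpart bay2 0% 100% set 1 raid on
parted -s -a optimal /dev/sdc mklabel gpt mkpart bay3 0% 100% set 1 raid on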
Once all of the 3TB disks have been given identical GPT partitions as above, you can verify them as follows.
[root@unas ~]# parted -l
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay1  raid


Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay2  raid


Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay3  raid


Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay4  raid
Next, install the software RAID management tool (mdadm) as shown below.
[root@unas ~]# yum install mdadm
Loaded plugins: fastestmirror
base                                          | 3.6 kB  00:00:00
extras                                        | 3.4 kB  00:00:00
updates                                       | 3.4 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package mdadm.x86_64 0:3.3.2-2.el7 will be installed
--> Processing Dependency: libreport-filesystem for package: mdadm-3.3.2-2.el7.x86_64
--> Running transaction check
---> Package libreport-filesystem.x86_64 0:2.1.11-21.el7.centos.0.4 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================
 Package                   Arch      Version                     Repository     Size
=====================================================================================
Installing:
 mdadm                     x86_64    3.3.2-2.el7                 base          391 k
Installing for dependencies:
 libreport-filesystem      x86_64    2.1.11-21.el7.centos.0.4    base           35 k

Transaction Summary
=====================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 426 k
Installed size: 920 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64.rpm |  35 kB  00:00:00
(2/2): mdadm-3.3.2-2.el7.x86_64.rpm                             | 391 kB  00:00:00
-------------------------------------------------------------------------------------
Total                                                 1.4 MB/s | 426 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64              1/2
  Installing : mdadm-3.3.2-2.el7.x86_64                                          2/2
  Verifying  : mdadm-3.3.2-2.el7.x86_64                                          1/2
  Verifying  : libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64              2/2

Installed:
  mdadm.x86_64 0:3.3.2-2.el7

Dependency Installed:
  libreport-filesystem.x86_64 0:2.1.11-21.el7.centos.0.4

Complete!
That completes all the preparation for building the RAID array. Now create the array:
[root@unas ~]# mdadm --create --level=6 --verbose --assume-clean --raid-devices=4 --spare-devices=0 /dev/md0 /dev/sd{a,b,c,d}1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2930134016K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@unas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
[root@unas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May 2 09:15:10 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May 2 09:15:10 2015
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : unas:0  (local to host unas)
           UUID : 8ecff347:b4d81113:eeacb8ca:0fa11222
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@unas ~]# mdadm --detail --scan >> /etc/mdadm.conf
With this, the RAID configuration needed before creating a filesystem is complete. Before making the filesystem, you can run a check on the newly built array, as shown below.
[root@unas ~]# raid-check &
[root@unas ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                Sat May 2 09:42:16 2015

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  check =  1.5% (46218000/2930134016) finish=254.8min speed=188612K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
[root@unas ~]# dstat -df
--dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde--
 read  writ: read  writ: read  writ: read  writ: read  writ
  15M 1881k:  15M 1881k:  15M 1880k:  15M 4357k:  51k 3889B
 194M    0 : 194M    0 : 194M    0 : 193M    0 :   0     0
 191M    0 : 191M    0 : 191M    0 : 191M    0 :   0     0
 184M    0 : 184M    0 : 184M    0 : 184M    0 :   0     0
 164M 4608B: 163M 4608B: 163M 4608B: 163M 4608B:   0     0
 185M    0 : 185M    0 : 185M    0 : 185M    0 :   0     0
[root@unas ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May 2 09:15:10 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May 2 09:43:44 2015
          State : clean, checking    --> shows that a check is in progress
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Check Status : 2% complete

           Name : unas:0  (local to host unas)
           UUID : 8ecff347:b4d81113:eeacb8ca:0fa11222
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@unas ~]# cat /sys/block/md0/md/sync_action
check
[root@unas ~]# echo check > /sys/block/md0/md/sync_action   --> start a check
[root@unas ~]# echo idle > /sys/block/md0/md/sync_action    --> stop the check
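While a check (or resync) is running, its throughput is bounded by the kernel's md speed limits. If you want it to finish faster, they can be inspected and raised; a minimal sketch, where the 200000 KB/s value is just an illustrative number, not a recommendation from the original post:

# Show the current per-device resync/check speed limits (KB/s)
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Raise the minimum so the check is not throttled as aggressively (illustrative value)
sysctl -w dev.raid.speed_limit_min=200000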
Once the check finishes, you can create a filesystem on the array and start using it. One additional note: if you create the array without the --assume-clean option, you will see the following instead.
[root@unas ~]# mdadm --create --level=6 --verbose --raid-devices=4 --spare-devices=0 /dev/md0 /dev/sd{a,b,c,d}1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sda1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May 2 09:15:10 2015
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May 2 09:15:10 2015
mdadm: /dev/sdc1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May 2 09:15:10 2015
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May 2 09:15:10 2015
mdadm: size set to 2930134016K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@unas ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                Sat May 2 10:11:25 2015

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.7% (21612928/2930134016) finish=1263.5min speed=38364K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

unused devices: <none>
[root@unas ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May 2 10:08:18 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May 2 10:11:40 2015
          State : clean, resyncing    --> shows that a resync is in progress
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 0% complete

           Name : unas:0  (local to host unas)
           UUID : 2af62f04:c0997f7a:b31db5a1:602e575d
         Events : 40

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@unas ~]#
[root@unas ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@unas ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=unas:0 UUID=2af62f04:c0997f7a:b31db5a1:602e575d
For a brand-new set of disks, I think it is better to build the array without --assume-clean so that the initial sync runs once. You are unlikely to need that option during normal operation; it would mainly be used when recovering a volume.
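For reference, once the initial sync (or check) has finished, the filesystem mentioned above can be created on /dev/md0. A minimal sketch under assumptions: XFS and the /data mount point are illustrative choices, not part of the original setup.

# Assumed example: create an XFS filesystem on the array and mount it at /data
mkfs.xfs /dev/md0
mkdir -p /data
mount /dev/md0 /data
# To mount at boot, look up the filesystem UUID and add a matching /etc/fstab entry
blkid /dev/md0
echo "UUID=<uuid-from-blkid>  /data  xfs  defaults  0 0" >> /etc/fstab   # <uuid-from-blkid> is a placeholder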