
Building a RAID6 Array with Disks Larger Than 2TB

2015.08.08 13:28:17

Today I want to write up how to build a software RAID array using disks larger than 2TB. The setup below builds a RAID6 array out of four 3TB disks.

[root@unas ~]# lsscsi
[0:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sda 
[1:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sdb 
[2:0:0:0]    disk    ATA      ST3000DM001-1CH1 CC29  /dev/sdc 
[3:0:0:0]    disk    ATA      ST3000DM001-9YN1 CC4C  /dev/sdd 
[4:0:0:0]    disk    ATA      PLEXTOR PX-128M6 1.01  /dev/sde 

For disks larger than 2TB you have to use parted with a GPT label rather than fdisk: an MBR partition table can address at most 2^32 sectors of 512 bytes, which is only 2TiB. Below is how to create the partition table for the software RAID members.
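Before repartitioning, it does not hurt to double-check each disk's size and sector layout. A quick sketch (any of the data disks above will do):

lsblk -d -o NAME,SIZE,PHY-SEC,LOG-SEC /dev/sdd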

[root@unas ~]# parted /dev/sdd
GNU Parted 3.1
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) unit TB
(parted) mkpart
Partition name?  []? bay4
File system type?  [ext2]?
Start? 0
End? 3TB
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt)  [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) print
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdd: 3.00TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      0.00TB  3.00TB  3.00TB               bay4  raid

(parted) quit
Information: You may need to update /etc/fstab.
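The three remaining disks need the same GPT layout (bay1 on /dev/sda, bay2 on /dev/sdb, bay3 on /dev/sdc). A non-interactive sketch that should give an equivalent result, using 0%/100% so parted picks aligned boundaries on its own:

parted -s /dev/sda mklabel gpt mkpart bay1 ext2 0% 100% set 1 raid on
parted -s /dev/sdb mklabel gpt mkpart bay2 ext2 0% 100% set 1 raid on
parted -s /dev/sdc mklabel gpt mkpart bay3 ext2 0% 100% set 1 raid on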

Once every 3TB disk has been given the same GPT partition in this way, you can verify the result as follows.

[root@unas ~]# parted -l
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay1  raid


Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay2  raid


Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay3  raid


Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3001GB  3001GB               bay4  raid

Next, install mdadm, the software RAID management tool, as shown below.

[root@unas ~]# yum install mdadm
Loaded plugins: fastestmirror
base                                                                                                                          | 3.6 kB  00:00:00     
extras                                                                                                                        | 3.4 kB  00:00:00     
updates                                                                                                                       | 3.4 kB  00:00:00     
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package mdadm.x86_64 0:3.3.2-2.el7 will be installed
--> Processing Dependency: libreport-filesystem for package: mdadm-3.3.2-2.el7.x86_64
--> Running transaction check
---> Package libreport-filesystem.x86_64 0:2.1.11-21.el7.centos.0.4 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================
 Package                                   Arch                        Version                                       Repository                 Size
=====================================================================================================================================================
Installing:
 mdadm                                     x86_64                      3.3.2-2.el7                                   base                      391 k
Installing for dependencies:
 libreport-filesystem                      x86_64                      2.1.11-21.el7.centos.0.4                      base                       35 k

Transaction Summary
=====================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 426 k
Installed size: 920 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64.rpm                                                               |  35 kB  00:00:00     
(2/2): mdadm-3.3.2-2.el7.x86_64.rpm                                                                                           | 391 kB  00:00:00     
-----------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                1.4 MB/s | 426 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64                                                                              1/2 
  Installing : mdadm-3.3.2-2.el7.x86_64                                                                                                          2/2 
  Verifying  : mdadm-3.3.2-2.el7.x86_64                                                                                                          1/2 
  Verifying  : libreport-filesystem-2.1.11-21.el7.centos.0.4.x86_64                                                                              2/2 

Installed:
  mdadm.x86_64 0:3.3.2-2.el7                                                                                                                         

Dependency Installed:
  libreport-filesystem.x86_64 0:2.1.11-21.el7.centos.0.4                                                                                             

Complete!

That completes all the preparation for building the RAID array. Now create the RAID6 array with mdadm:

[root@unas ~]# mdadm --create --level=6 --verbose --assume-clean --raid-devices=4 --spare-devices=0 /dev/md0 /dev/sd{a,b,c,d}1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2930134016K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@unas ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
[root@unas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May  2 09:15:10 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May  2 09:15:10 2015
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : unas:0  (local to host unas)
           UUID : 8ecff347:b4d81113:eeacb8ca:0fa11222
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@unas ~]# mdadm --detail --scan >> /etc/mdadm.conf

With this, the RAID configuration needed before creating a filesystem is complete. Before making the filesystem, you can run a consistency check on the newly built array, as shown below.

[root@unas ~]# raid-check &
[root@unas ~]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                                                                                                 Sat May  2 09:42:16 2015

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  check =  1.5% (46218000/2930134016) finish=254.8min speed=188612K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>

[root@unas ~]# dstat -df
--dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde--
 read  writ: read  writ: read  writ: read  writ: read  writ
  15M 1881k:  15M 1881k:  15M 1880k:  15M 4357k:  51k 3889B
 194M    0 : 194M    0 : 194M    0 : 193M    0 :   0     0 
 191M    0 : 191M    0 : 191M    0 : 191M    0 :   0     0 
 184M    0 : 184M    0 : 184M    0 : 184M    0 :   0     0 
 164M 4608B: 163M 4608B: 163M 4608B: 163M 4608B:   0     0 
 185M    0 : 185M    0 : 185M    0 : 185M    0 :   0     0 

[root@unas ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May  2 09:15:10 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May  2 09:43:44 2015
          State : clean, checking --> shows that a check is in progress
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Check Status : 2% complete

           Name : unas:0  (local to host unas)
           UUID : 8ecff347:b4d81113:eeacb8ca:0fa11222
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1

[root@unas ~]# cat /sys/block/md0/md/sync_action 
check
[root@unas ~]# echo check > /sys/block/md0/md/sync_action --> start a check
[root@unas ~]# echo idle > /sys/block/md0/md/sync_action --> stop the check
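If a check runs too slowly or hogs too much I/O, the md sync speed limits can be tuned on the fly. A sketch with example values only (KB/s per device):

sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
cat /sys/block/md0/md/sync_speed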

Once the check finishes, you can create a filesystem on the array and start using it. As an additional note, if you create the array without the --assume-clean option, the output looks like this:

[root@unas ~]# mdadm --create --level=6 --verbose --raid-devices=4 --spare-devices=0 /dev/md0 /dev/sd{a,b,c,d}1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sda1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May  2 09:15:10 2015
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May  2 09:15:10 2015
mdadm: /dev/sdc1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May  2 09:15:10 2015
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Sat May  2 09:15:10 2015
mdadm: size set to 2930134016K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@unas ~]# watch cat /proc/mdstat 
Every 2.0s: cat /proc/mdstat                                                                                                 Sat May  2 10:11:25 2015

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      5860268032 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.7% (21612928/2930134016) finish=1263.5min speed=38364K/sec
      bitmap: 22/22 pages [88KB], 65536KB chunk

unused devices: <none>
[root@unas ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat May  2 10:08:18 2015
     Raid Level : raid6
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat May  2 10:11:40 2015
          State : clean, resyncing --> shows that a resync is in progress
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 0% complete

           Name : unas:0  (local to host unas)
           UUID : 2af62f04:c0997f7a:b31db5a1:602e575d
         Events : 40

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@unas ~]# 
[root@unas ~]# mdadm --detail --scan > /etc/mdadm.conf 
[root@unas ~]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 name=unas:0 UUID=2af62f04:c0997f7a:b31db5a1:602e575d
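To confirm that the saved configuration can bring the array back (for example after a reboot), the array can be stopped and reassembled from /etc/mdadm.conf. A sketch, safe only while nothing on the array is mounted or in use:

mdadm --stop /dev/md0
mdadm --assemble --scan
cat /proc/mdstat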

For an initial build on brand-new disks, I think it is better to create the array without --assume-clean and let the initial sync run once. You will rarely need the option during normal operation afterwards; it mostly comes up when recovering a volume.
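Once the initial sync (or check) has finished, the last step is to put a filesystem on /dev/md0 and mount it. A minimal sketch, assuming ext4 and a hypothetical mount point /data; with a 512K chunk and two data disks (4-disk RAID6), stride = 512K / 4K = 128 and stripe-width = 128 x 2 = 256:

mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0
mkdir -p /data
mount /dev/md0 /data

To mount it at boot, an /etc/fstab line along these lines would do:

/dev/md0  /data  ext4  defaults  0 0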
