GPFS Installation for Linux

A.        Test environment:

I.       TSMHA1 (172.20.144.135)

 i.    Quorum-manager: Linux Server

II.     TSMHA2 (172.20.144.136)

 i.     Quorum-manager: Linux Server

III.     GPFS mount point: /GPFSNSD

B.         GPFS installation packages

I.       gpfs_install-3.4.0-0_x86_64

II.      gpfs patch 3.4.0.13

C.         GPFS installation procedure

1.          Set up passwordless SSH between all node hosts

TSMHA1

# ssh-keygen -t rsa

# cd ~/.ssh/

# cat id_rsa.pub >> authorized_keys

# chmod 600 authorized_keys

# scp authorized_keys root@TSMHA2:~/.ssh/

# scp id_rsa* root@TSMHA2:~/.ssh/
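
Before going further, it is worth confirming that each node can reach the other over SSH without a password prompt. A quick check (assuming the hostnames TSMHA1 and TSMHA2 resolve on both machines) could be:

# ssh TSMHA2 hostname    // run from TSMHA1; should print TSMHA2 without prompting

# ssh TSMHA1 hostname    // run from TSMHA2; should print TSMHA1 without prompting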

2.          Install the GPFS base packages and updates

# ./gpfs_install-3.4.0-0_x86_64 --silent

# cd /usr/lpp/mmfs/3.4/

# chmod +x ./*

# rpm -ivh /usr/lpp/mmfs/3.4/gpfs*.rpm

# cd /{gpfs_source}/patch/

# rpm -Uvh gpfs*.rpm

# rpm -qa | grep gpfs
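
GPFS must be installed on every node in the cluster, so the same install and patch steps are repeated on TSMHA2. A rough sketch using the passwordless SSH from step 1 (assuming the installer and patch RPMs are copied to /tmp on TSMHA2; adjust paths to your environment):

# scp ./gpfs_install-3.4.0-0_x86_64 root@TSMHA2:/tmp/

# ssh TSMHA2 'chmod +x /tmp/gpfs_install-3.4.0-0_x86_64 && /tmp/gpfs_install-3.4.0-0_x86_64 --silent'

# ssh TSMHA2 'rpm -ivh /usr/lpp/mmfs/3.4/gpfs*.rpm'

# scp /{gpfs_source}/patch/gpfs*.rpm root@TSMHA2:/tmp/

# ssh TSMHA2 'rpm -Uvh /tmp/gpfs*.rpm'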

3.          Environment setup and portability layer compilation

# export SHARKCLONEROOT=/usr/lpp/mmfs/src

# cd /usr/lpp/mmfs/src

# make Autoconfig

# make World

# make InstallImages

# vim ~/.bashrc

 ============================

PATH=$PATH:/usr/lpp/mmfs/bin

 ============================

# source ~/.bashrc
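
To confirm that the portability layer was built and installed for the running kernel, you can look for the GPFS kernel modules and check that the GPFS commands are now on the PATH (the module location may vary slightly by distribution; /lib/modules/<kernel>/extra is where make InstallImages typically places them):

# ls /lib/modules/$(uname -r)/extra | grep -E 'mmfs|tracedev'

# which mmlscluster    // should resolve to /usr/lpp/mmfs/bin/mmlscluster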

4.          The GPFS packages are now installed; the steps below configure GPFS.

5.          Create the node file and the NSD disk descriptor file

Create the node file:

# vim /root/gpfs_allnodes

======================

TSMHA1:quorum-manager

TSMHA2:quorum-manager

======================

Check which disks are available to GPFS, then create the NSD disk descriptor file.

# cat /proc/partitions  // first confirm the available disks

# vim /root/gpfs_alldisks

==============================

/dev/sdc1:TSMHA1::dataAndMetadata:-1:mynsd1:system

==============================

Syntax: DiskName:PrimaryNSDServer:SecondaryNSDServer:DiskUsage:FailureGroup:DesiredNSDName:StoragePool

An NSD (Network Shared Disk) is a virtual device mapped from a disk, with a one-to-one relationship between NSDs and disks. NSDs are tagged with different attributes to distinguish their purpose, and a disk can be marked for one of four usages:

descOnly: stores only the GPFS file system descriptor.

dataOnly: stores only file system data.

metadataOnly: stores only file system metadata, i.e. the directory structure and inode information.

dataAndMetadata: stores both data and metadata (default).
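
For illustration only, a hypothetical descriptor file that puts metadata and data on separate disks might look like the following (the device names and NSD names below are made-up examples, not part of this setup):

==============================

/dev/sdd1:TSMHA1::metadataOnly:-1:mymetansd1:system

/dev/sde1:TSMHA1::dataOnly:-1:mydatansd1:system

==============================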

6.          Create the cluster with the mmcrcluster command, where TSMHA1 is the primary node and TSMHA2 is the secondary node. Run the following command on the primary node.

# mmcrcluster -N /root/gpfs_allnodes -p TSMHA1 -s TSMHA2 -r /usr/bin/ssh -R /usr/bin/scp

Fri Apr 20 11:45:04 CST 2012: mmcrcluster: Processing node TSMHA1

Fri Apr 20 11:45:04 CST 2012: mmcrcluster: Processing node TSMHA2

mmcrcluster: Command successfully completed

mmcrcluster: Warning: Not all nodes have proper GPFS license designations.

Use the mmchlicense command to designate licenses as needed.

mmcrcluster: Propagating the cluster configuration data to all

affected nodes.  This is an asynchronous process.

7.          Assign a license to each node

# mmchlicense server --accept -N TSMHA1,TSMHA2

The following nodes will be designated as possessing GPFS server licenses:

       TSMHA1

       TSMHA2

mmchlicense: Command successfully completed

mmchlicense: Propagating the cluster configuration data to all

 affected nodes.  This is an asynchronous process.
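
If you want to double-check the designations, the mmlslicense command lists the license type assigned to each node (output omitted here):

# mmlslicense -L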

8.          Check whether the cluster was created successfully

# mmlscluster

GPFS cluster information

========================

 GPFS cluster name:         TSMHA1

 GPFS cluster id:           12399694584887696320

 GPFS UID domain:           TSMHA1

 Remote shell command:      /usr/bin/ssh

 Remote file copy command:  /usr/bin/scp

 

GPFS cluster configuration servers:

-----------------------------------

  Primary server:    TSMHA1

  Secondary server:  TSMHA2

 

 Node  Daemon node name            IP address       Admin node name             Designation

-----------------------------------------------------------------------------------------------

  1   TSMHA1                      172.20.144.135   TSMHA1                      quorum-manager

  2   TSMHA2                      172.20.144.136   TSMHA2                      quorum-manager

9.          Create the NSD and check that it is correct.

# mmcrnsd -F /root/gpfs_alldisks -v yes

mmcrnsd: Processing disk sdc1

mmcrnsd: Propagating the cluster configuration data to all

# mmlsnsd

 File system   Disk name    NSD servers

---------------------------------------------------------------------------

 (free disk)    mynsd1      TSMHA1
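
Note that mmcrnsd rewrites the disk descriptor file: the original lines are commented out and the generated NSD names are added, so the same file can be reused when creating the file system in step 11. Its contents can be reviewed with:

# cat /root/gpfs_alldisks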

10.      Start GPFS

# mmstartup -a

Fri Apr 20 12:26:56 CST 2012: mmstartup: Starting GPFS ...

# mmgetstate -aLs

 Node number  Node name        GPFS state

------------------------------------------

       1      TSMHA1           active

       2      TSMHA2           active

# lsmod | grep mm

mmfs26               1630692  1

mmfslinux             269452   38 mmfs26

tracedev               29520    3 mmfs26,mmfslinux

# ps aux | grep mmfs

root      7245  0.0  0.1 111080  5760 ?        S<   12:26   0:00 /bin/ksh /usr/lpp/mmfs/bin/runmmfs

root      7366  0.0  3.5 817732 110540 ?       S<Ll 12:26   0:00 /usr/lpp/mmfs/bin//mmfsd
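
If a node does not reach the active state, the GPFS daemon log is the first place to look; on Linux it is normally kept under /var/adm/ras:

# tail /var/adm/ras/mmfs.log.latest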

11.      Create the GPFS file system

# mmcrfs gpfsfs -F /root/gpfs_alldisks -A yes -B 1M -T /GPFSNSD -v yes

The following disks of gpfsfs will be formatted on node TSMHA1:

gpfs1nsd: size 31455216 KB

Formatting file system ...

Disks up to size 403 GB can be added to storage pool 'system'.

Creating Inode File

Creating Allocation Maps

Creating Log Files

Clearing Inode Allocation Map

Clearing Block Allocation Map

Formatting Allocation Map for storage pool 'system'

Completed creation of file system /dev/gpfsfs.

mmcrfs: Propagating the cluster configuration data to all

 

The parameters are defined as follows:

-F  specifies the NSD disk descriptor file

-A  automatically mount the file system when GPFS starts (yes)

-B  block size of 1M

-T  mount point of the file system (/GPFSNSD)

-v  verify that the disks are not already part of an existing file system

 

# mmchconfig autoload=yes

# mmlsdisk gpfsfs -L

disk         driver   sector failure holds    holds                            storage

name         type       size   group metadata data  status        availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

gpfsfs       nsd         512      -1 Yes      Yes   ready         up           system

Number of quorum disks:  1

Read quorum value:      1

Write quorum value:     1

# cat /etc/fstab

/dev/gpfsfs        /GPFSNSD       gpfs       rw,mtime,atime,dev=gpfsfs,noauto    0 0

# mmdf gpfsfs

12.      Verify that the GPFS configuration information is correct (written automatically by the system)

# mmlsconfig

Configuration data for cluster TSMHA1:

--------------------------------------

myNodeConfigNumber 1

clusterName TSMHA1

clusterId 12399694584887696320

autoload yes

minReleaseLevel 3.4.0.7

dmapiFileHandleSize 32

adminMode central

 

File systems in cluster TSMHA1:

-------------------------------

/dev/gpfsfs

13.      Mount the GPFS file system and check that it mounted correctly.

# mmmount all -a

Fri Apr 20 13:53:07 CST 2012: mmmount: Mounting file systems ...

# mount | grep gpfs

/dev/gpfs1nsd on /GPFSNSD type gpfs (rw,mtime,dev=gpfsfs)

# mmlsfileset gpfsfs

Filesets in file system 'gpfsfs':

Name                     Status    Path

root                     Linked    /GPFSNSD

# mmlsmount all -L

File system gpfsfs is mounted on 2 nodes:

  172.20.144.136  TSMHA2

  172.20.144.135  TSMHA1
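
As a final sanity check (not part of the original procedure), you can confirm that a file written on one node is immediately visible on the other:

# echo "gpfs test" > /GPFSNSD/testfile    // on TSMHA1

# ssh TSMHA2 'cat /GPFSNSD/testfile && rm /GPFSNSD/testfile'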

 

14.      Tear down GPFS

I.           fuser -kcu /GPFSNSD

II.        umount /GPFSNSD  # on every node

III.     mmdelfs gpfsfs

IV.     mmlsfs gpfsfs  # verify the result

V.        mmdelnsd -F /root/gpfs_alldisks

VI.     mmshutdown -a

VII.  mmdelnode -N /root/gpfs_allnodes

VIII.  mmdelnode -f  # finally, remove the cluster

