
HOW TO: IBM GPFS (General Parallel File System)

GPFS INSTALLATION AND CONFIGURATION
 
 
WHERE THE LOGS ARE LOCATED
 
/var/adm/ras
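
The most recent daemon log is normally kept as mmfs.log.latest in that directory; to follow it:

# tail -f /var/adm/ras/mmfs.log.latest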

DAEMON FOR GPFS

# ps -ef | grep mmfsd
    root 360580 409850   0 14:07:18      -  0:00 /usr/lpp/mmfs/bin/aix64/mmfsd64



NETWORK INTERFACES ON BOTH NODES

AIX1:
# ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.1.2 netmask 0xffffff00 broadcast 10.1.1.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

AIX2:
# ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.1.3 netmask 0xffffff00 broadcast 10.1.1.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1


PORT

The GPFS daemon listens on TCP port 1191; this port must be open between all cluster nodes.

# netstat -an | grep 1191


Install the GPFS product

The installation edits the /etc/inittab file, adding the following entries: gpfsgui and mmfs.



# mount 10.1.1.20:/export/nim/GPFS /mnt  (Install Media)
root@AIX1 [/]
# df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           524288    360760   32%    10202    11% /
/dev/hd2          2097152    258180   88%    38137    35% /usr
/dev/hd9var        524288    247432   53%     7159    12% /var
/dev/hd3           262144    259752    1%       73     1% /tmp
/dev/hd1           262144    245888    7%      153     1% /home
/dev/hd11admin      131072    130708    1%        5     1% /admin
/proc                   -         -    -         -     -  /proc
/dev/hd10opt       524288    237712   55%     8451    14% /opt
/dev/livedump      262144    261776    1%        4     1% /var/adm/ras/livedump
10.1.1.20:/export/nim/GPFS    33554432   9197468   73%    35923     2% /mnt



root@AIX1 [/]
# installp -agXYd /mnt/gpfs/3.3.0.7 gpfs
0503-436 installp:  Device /mnt/gpfs/3.3.0.7 could not be accessed.
        Specify a valid device name.

The 3.3.0.7 path does not exist on the media; both the base and the update filesets live under 3.3.0.0:

root@AIX1 [/gpfs]
# installp -agXYd /mnt/gpfs/3.3.0.0 gpfs

+------------------------------------------------------------------+
                    Pre-installation Verification...
+------------------------------------------------------------------+

Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.

  Selected Filesets
  -----------------
  gpfs.base 3.3.0.0                           # GPFS File Manager
  gpfs.base 3.3.0.7                           # GPFS File Manager
  gpfs.docs.data 3.3.0.0                      # GPFS Server Manpages and Doc...
  gpfs.docs.data 3.3.0.1                      # GPFS Server Manpages and Doc...
  gpfs.gui 3.3.0.0                            # GPFS GUI
  gpfs.gui 3.3.0.1                            # GPFS GUI
  gpfs.msg.en_US 3.3.0.0                      # GPFS Server Messages - U.S. ...
  gpfs.msg.en_US 3.3.0.4                      # GPFS Server Messages - U.S. ...

  << End of Success Section >>

+------------------------------------------------------------------+
                   BUILDDATE Verification ...
+------------------------------------------------------------------+

Verifying build dates...done
FILESET STATISTICS
------------------
    8  Selected to be installed, of which:
        8  Passed pre-installation verification
  ----
    8  Total to be installed

+-----------------------------------------------------------------------------+
                         Installing Software...
+-----------------------------------------------------------------------------+

installp:  APPLYING software for:
        gpfs.gui 3.3.0.0


. . . . . << Copyright notice for gpfs.gui >> . . . . . . .
   (C) Copyright International Business Machines Corp. 1995, 2009.
. . . . . << End of copyright notice for gpfs.gui >>. . . .

Filesets processed:  1 of 8  (Total time:  45 secs).

installp:  APPLYING software for:
        gpfs.gui 3.3.0.1

Filesets processed:  2 of 8  (Total time:  57 secs).

installp:  APPLYING software for:
        gpfs.docs.data 3.3.0.0


. . . . . << Copyright notice for gpfs.docs >> . . . . . . .
   (C) Copyright International Business Machines Corp. 1995, 2009.
. . . . . << End of copyright notice for gpfs.docs >>. . . .

Filesets processed:  3 of 8  (Total time:  1 mins 2 secs).

installp:  APPLYING software for:
        gpfs.docs.data 3.3.0.1

Filesets processed:  4 of 8  (Total time:  1 mins 3 secs).

installp:  APPLYING software for:
        gpfs.base 3.3.0.0


. . . . . << Copyright notice for gpfs.base >> . . . . . . .
   (C) Copyright International Business Machines Corp. 1995, 2009.
. . . . . << End of copyright notice for gpfs.base >>. . . .

Filesets processed:  5 of 8  (Total time:  1 mins 41 secs).

installp:  APPLYING software for:
        gpfs.base 3.3.0.7

Filesets processed:  6 of 8  (Total time:  2 mins 5 secs).

installp:  APPLYING software for:
        gpfs.msg.en_US 3.3.0.0


. . . . . << Copyright notice for gpfs.msg.en_US >> . . . . . . .
   (C) Copyright International Business Machines Corp. 1995, 2009.
. . . . . << End of copyright notice for gpfs.msg.en_US >>. . . .

Filesets processed:  7 of 8  (Total time:  2 mins 6 secs).

installp:  APPLYING software for:
        gpfs.msg.en_US 3.3.0.4

Finished processing all filesets.  (Total time:  2 mins 7 secs).

+------------------------------------------------------------------+
                                Summaries:
+------------------------------------------------------------------+

Installation Summary
--------------------

Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
gpfs.gui                    3.3.0.0         USR         APPLY       SUCCESS   
gpfs.gui                    3.3.0.1         USR         APPLY       SUCCESS   
gpfs.docs.data              3.3.0.0         SHARE       APPLY       SUCCESS   
gpfs.docs.data              3.3.0.1         SHARE       APPLY       SUCCESS   
gpfs.base                   3.3.0.0         USR         APPLY       SUCCESS   
gpfs.base                   3.3.0.0         ROOT        APPLY       SUCCESS   
gpfs.base                   3.3.0.7         USR         APPLY       SUCCESS   
gpfs.base                   3.3.0.7         ROOT        APPLY       SUCCESS   
gpfs.msg.en_US              3.3.0.0         USR         APPLY       SUCCESS   
gpfs.msg.en_US              3.3.0.4         USR         APPLY       SUCCESS   


Validate the installed packages

# lslpp -l gpfs\*
  Fileset                      Level  State      Description        
  ------------------------------------------------------------------
Path: /usr/lib/objrepos
  gpfs.base                  3.3.0.7  APPLIED    GPFS File Manager
  gpfs.gui                   3.3.0.1  APPLIED    GPFS GUI
  gpfs.msg.en_US             3.3.0.4  APPLIED    GPFS Server Messages - U.S.
                                                 English

Path: /etc/objrepos
  gpfs.base                  3.3.0.7  APPLIED    GPFS File Manager

Path: /usr/share/lib/objrepos
  gpfs.docs.data             3.3.0.1  APPLIED    GPFS Server Manpages and
                                                 Documentation
root@AIX1 [/gpfs]


Add the GPFS commands to the PATH
----------------------------------


PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java5/jre/bin:/usr/java5/bin:/usr/lpp/mmfs/bin
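
One way to make this persistent is to append the GPFS directory to root's profile (a minimal sketch; adjust for your shell and environment):

# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /.profile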


Edit /etc/hosts on both nodes
------------------------------

10.1.1.2             AIX1            aix1
10.1.1.3             AIX2            aix2
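
The password prompts in the sessions below show that passwordless root ssh was not configured. GPFS expects the remote shell to work without prompting; a minimal key-exchange sketch, assuming OpenSSH defaults and root's home directory at /:

# ssh-keygen -t rsa            (accept the defaults, empty passphrase)
# cat /.ssh/id_rsa.pub | ssh root@aix2 'cat >> /.ssh/authorized_keys'

Repeat in the opposite direction from aix2 to aix1.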


Edit /etc/inittab and comment out the gpfsgui entry
----------------------------------------------------

Comment out the gpfsgui entry if you do not use the GUI, and leave the mmfs entry in place:

#gpfsgui:2:once:/etc/rc.d/init.d/gpfsgui start >/dev/console 2>&1
mmfs:2:once:/usr/lpp/mmfs/bin/mmautoload >/dev/console 2>&1




Create the GPFS cluster
-----------------------

Create a node descriptor file:

# mkdir /tmp/gpfs
# vi /tmp/gpfs/nodo1
aix1:manager-quorum:
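
Each line of the node file follows this format (the admin node name defaults to the node name when the last field is left empty, as here):

NodeName:NodeDesignations:AdminNodeName
        NodeDesignations: manager | client, quorum | nonquorum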


# mmcrcluster -N /tmp/gpfs/nodo1 -p aix1 -r /usr/bin/ssh -R /usr/bin/scp -C clustergpfs
Tue Sep 25 14:38:02 GRNLNDST 2012: 6027-1664 mmcrcluster: Processing node AIX1
mmcrcluster: Command successfully completed
mmcrcluster: 6027-1254 Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.

Accept the license
--------------------


# mmchlicense server --accept -N aix1

The following nodes will be designated as possessing GPFS server licenses:
        AIX1
mmchlicense: Command successfully completed
root@AIX1 [/.ssh]

Validate all the cluster information
------------------------------------


# mmlscluster; mmlsconfig; mmlslicense

GPFS cluster information
========================

  GPFS cluster name:         clustergpfs.AIX1
  GPFS cluster id:           650379674846882810
  GPFS UID domain:           clustergpfs.AIX1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------

  Primary server:    AIX1
  Secondary server:  (none)

 Node  Daemon node name            IP address       Admin node name             Designation
---------------------------------------------------------------------------------------------
   1   AIX1                        10.1.1.2         AIX1                        quorum-manager

Configuration data for cluster clustergpfs.AIX1:
------------------------------------------------

clusterName clustergpfs.AIX1
clusterId 650379674846882810
autoload no
minReleaseLevel 3.3.0.2
dmapiFileHandleSize 32
adminMode central

File systems in cluster clustergpfs.AIX1:
-----------------------------------------

(none)


Adding the second node to the GPFS cluster
-------------------------------------------

The password prompts below appear because passwordless root ssh was not yet configured between the nodes.

# mmaddnode -N aix2:manager-quorum:
Tue Sep 25 15:09:56 GRNLNDST 2012: 6027-1664 mmaddnode: Processing node AIX2
Verifying GPFS is stopped on all nodes ...
root@aix1's password:
root@aix1's password:
mmaddnode: Command successfully completed
mmaddnode: 6027-1254 Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
mmaddnode: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Accept the license on the second node
--------------------------------------


# mmchlicense server --accept -N aix2

The following nodes will be designated as possessing GPFS server licenses:
        AIX2
mmchlicense: Command successfully completed
mmchlicense: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Change the cluster attributes: set a secondary configuration server and automatic start
-----------------------------------------------------------------------------------------


# mmchcluster -s AIX2; mmchconfig autoload=yes
mmchcluster: GPFS cluster configuration servers:
mmchcluster:   Primary server:    AIX1
mmchcluster:   Secondary server:  AIX2
mmchcluster: Command successfully completed
mmchconfig: Command successfully completed
mmchconfig: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@AIX1 [/var/mmfs/gen]

STARTING GPFS AND CHECKING THE STATE
root@AIX1 [/]
# mmstartup -a
Tue Sep 25 15:15:03 GRNLNDST 2012: 6027-1642 mmstartup: Starting GPFS ...
root@aix1's password:
root@AIX1 [/]
# mmgetstate -a
root@aix1's password:
root@aix1's password:

 Node number  Node name        GPFS state
------------------------------------------
       1      AIX1             active
       2      AIX2             active

root@AIX1 [/]

STOP A NODE
------------


root@AIX1 [/]
# mmshutdown -N 2
Tue Sep 25 15:17:53 GRNLNDST 2012: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems
Tue Sep 25 15:17:58 GRNLNDST 2012: 6027-1344 mmshutdown: Shutting down GPFS daemons
AIX2:  Shutting down!
AIX2:  'shutdown' command about to kill process 356506
Tue Sep 25 15:18:04 GRNLNDST 2012: 6027-1345 mmshutdown: Finished
root@AIX1 [/]
#
root@AIX1 [/]
#
root@AIX1 [/]
# mmgetstate -a
root@aix1's password:

 Node number  Node name        GPFS state
------------------------------------------
       1      AIX1             arbitrating
       2      AIX2             down
root@AIX1 [/]
#


START THE NODES
----------------

root@AIX1 [/]
# mmstartup -a
Tue Sep 25 15:18:48 GRNLNDST 2012: 6027-1642 mmstartup: Starting GPFS ...
root@aix1's password:
AIX1:  6027-2114 The GPFS subsystem is already active.
root@AIX1 [/]
# mmgetstate -a
root@aix1's password:

 Node number  Node name        GPFS state
------------------------------------------
       1      AIX1             active
       2      AIX2             active
root@AIX1 [/]
 


 
DISCONNECT THE NIC TO SIMULATE AN ERROR
----------------------------------------

With AIX1's network interface disconnected, run mmgetstate from AIX2:

# mmgetstate -a
root@aix2's password:

 Node number  Node name        GPFS state
------------------------------------------
       1      AIX1             unknown
       2      AIX2             active
root@AIX2 [/usr/lpp/mmfs/bin]
#


WHEN THE NIC IS RECONNECTED, THE NODE RETURNS TO ACTIVE
--------------------------------------------------------


 Node number  Node name        GPFS state
------------------------------------------
       1      AIX1             active
       2      AIX2             active
root@AIX2 [/usr/lpp/mmfs/bin]


SWITCHING FROM THE PUBLIC NETWORK TO A PRIVATE NETWORK
-------------------------------------------------------

CHECKING THE CURRENT CONFIGURATION

# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         clustergpfs.AIX1
  GPFS cluster id:           650379674846882810
  GPFS UID domain:           clustergpfs.AIX1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    AIX1
  Secondary server:  AIX2

 Node  Daemon node name            IP address       Admin node name             Designation
---------------------------------------------------------------------------------------------
   1   AIX1                        10.1.1.2         AIX1                        quorum-manager
   2   AIX2                        10.1.1.3         AIX2                        quorum-manager



List the virtual I/O slots and adapters to identify the interface for the private network:

# lsslot -c slot
# Slot                    Description       Device(s)
U9111.520.10E122F-V9-C0   Virtual I/O Slot  vsa0
U9111.520.10E122F-V9-C2   Virtual I/O Slot  ent0
U9111.520.10E122F-V9-C3   Virtual I/O Slot  ent1
U9111.520.10E122F-V9-C13  Virtual I/O Slot  vscsi0
U9111.520.10E122F-V9-C14  Virtual I/O Slot  vscsi1
root@AIX2 [/usr/lpp/mmfs/bin]

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter


STOP MMFSD
-----------

# mmshutdown -a
Tue Sep 25 15:36:00 GRNLNDST 2012: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems
root@aix1's password:
Tue Sep 25 15:36:05 GRNLNDST 2012: 6027-1344 mmshutdown: Shutting down GPFS daemons
root@aix1's password: AIX2:  Shutting down!
AIX2:  'shutdown' command about to kill process 237686


root@aix1's password:
AIX1:  Shutting down!
AIX1:  'shutdown' command about to kill process 434176
AIX1:  Permission denied, please try again.
Tue Sep 25 15:36:48 GRNLNDST 2012: 6027-1345 mmshutdown: Finished


With GPFS stopped, move the admin and daemon interfaces to the private network. The new addresses must resolve to host names (here gpfs1 and gpfs2) in /etc/hosts on both nodes:

# mmchnode --admin-interface=192.168.0.1 -N aix2
# mmchnode --admin-interface=192.168.0.2 -N aix1
# mmchnode --daemon-interface=192.168.0.1 -N aix2
# mmchnode --daemon-interface=192.168.0.2 -N aix1


SET THE SECONDARY CONFIGURATION SERVER AGAIN

# mmchcluster -s aix2

# mmlscluster

 
 
GPFS cluster information
========================
  GPFS cluster name:         clustergpfs.AIX1
  GPFS cluster id:           650379674846882810
  GPFS UID domain:           clustergpfs.AIX1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------

  Primary server:    gpfs1
  Secondary server:  gpfs2

 Node  Daemon node name            IP address       Admin node name             Designation   
-----------------------------------------------------------------------------------------------
   1   gpfs1                       192.168.0.2      gpfs1                       quorum-manager
   2   gpfs2                       192.168.0.1      gpfs2                       quorum-manager



CHECK DISK ATTRIBUTES
----------------------

# lsattr -E -l hdisk0 -a <attribute>


HOW THE DISK RESERVE POLICY SHOULD LOOK

The shared disks must not hold a SCSI reservation (reserve_policy=no_reserve):

# lsattr -El hdisk4 -a reserve_policy
reserve_policy no_reserve Reserve Policy True
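
If a disk shows a different policy, it can be changed with chdev while the disk is not in use:

# chdev -l hdisk4 -a reserve_policy=no_reserve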

Network Shared Disks (NSD)

Use the mmnsddiscover command to rediscover the disk paths:

# mmnsddiscover




WORKING WITH THE NSD COMMANDS
------------------------------
mmlsnsd  (list NSDs)

mmdelnsd (delete an NSD)

mmchnsd  (change the NSD configuration)


Assign a PVID to each disk with the chdev command:

root@AIX1 [/.ssh]
# chdev -l hdisk2 -a pv=yes
hdisk2 changed
root@AIX1 [/.ssh]
# chdev -l hdisk3 -a pv=yes
hdisk3 changed
root@AIX1 [/.ssh]
# chdev -l hdisk4 -a pv=yes
hdisk4 changed
root@AIX1 [/.ssh]
# chdev -l hdisk5 -a pv=yes
hdisk5 changed
root@AIX1 [/.ssh]
# chdev -l hdisk6 -a pv=yes
hdisk6 changed
root@AIX1 [/.ssh]
# chdev -l hdisk7 -a pv=yes
hdisk7 changed
root@AIX1 [/.ssh]
# chdev -l hdisk8 -a pv=yes
hdisk8 changed
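
The same assignment can be done in one ksh loop:

# for d in hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8
> do
> chdev -l $d -a pv=yes
> done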


# lspv
hdisk1          00ce122f03532c9c                    None           
hdisk2          00ce122f0353a0c2                    None           
hdisk3          00ce122f0353f826                    None           
hdisk4          00ce122f0354120d                    None           
hdisk5          00ce122f03542555                    None           
hdisk6          00ce122f035448c6                    None           
hdisk7          00ce122f03545da0                    None           
hdisk8          00ce122f03547701                    None     


root@AIX2 [/.ssh]
# lspv
hdisk0          00ce122f035568d2                    None           
hdisk2          00ce122f03558c08                    None           
hdisk3          00ce122f0355a097                    None           
hdisk4          00ce122f0355b0fc                    None           
hdisk5          00ce122f0353f826                    None           
hdisk6          00ce122f0354120d                    None           
hdisk7          00ce122f03542555                    None           
hdisk8          00ce122f03547701                    None           


Disks seen only on AIX2 (local disks):
hdisk4
hdisk3
hdisk2

Disks on AIX2 shared with AIX1 (same PVID on both nodes):
hdisk8
hdisk7
hdisk6
hdisk5

CREATE THE NSD DESCRIPTOR FILES
--------------------------------
# cat twodisk
hdisk3:gpfs2::dataAndMetadata:
hdisk2:gpfs2::dataAndMetadata:


# cat fourdisk
hdisk5:::dataAndMetadata:
hdisk6:::dataAndMetadata:
hdisk7:::dataAndMetadata:
hdisk8:::dataAndMetadata:

On gpfs1, the equivalent twodisk file for its local disks (shown after mmcrnsd processed it; mmcrnsd comments out each descriptor line it handles):

# cat twodisk
# hdisk6:gpfs1::dataAndMetadata:
# hdisk7:gpfs1::dataAndMetadata:
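
For reference, the disk descriptor fields at the 3.3 level are roughly as follows (empty fields take the defaults):

DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool

ServerList is a comma-separated list of NSD servers (empty means directly attached), DiskUsage is dataAndMetadata, dataOnly, metadataOnly or descOnly, and FailureGroup defaults to -1.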

CREATE THE NSDs WITH MMCRNSD
-----------------------------

# mmcrnsd -F twodisk
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk2
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@AIX2 [/]


# mmlsnsd

 File system   Disk name    NSD servers                                   
--------------------------------------------------
 (free disk)   gpfs1nsd     (directly attached)     
 (free disk)   gpfs2nsd     (directly attached)     
 (free disk)   gpfs3nsd     (directly attached)     
 (free disk)   gpfs4nsd     (directly attached)     
 (free disk)   gpfs5nsd     (directly attached)     
 (free disk)   gpfs6nsd     (directly attached)     
 (free disk)   gpfs7nsd     (directly attached)     
 (free disk)   gpfs8nsd     (directly attached)     

Check the full disk configuration with mmlsnsd -X; disks that are not found locally are visible on the other server:


# mmlsnsd -X

 Disk name    NSD volume ID      Device         Devtype  Node name                Remarks         
---------------------------------------------------------------------------------------------------
 gpfs10nsd    09069CD450633350   /dev/hdisk2    hdisk    gpfs2                    server node
 gpfs11nsd    09069CDE506333C6   /dev/hdisk6    hdisk    gpfs1                    server node
 gpfs12nsd    09069CDE506333C7   /dev/hdisk7    hdisk    gpfs1                    server node
 gpfs3nsd     09069CD450632D96   /dev/hdisk3    hdisk    gpfs1                   
 gpfs3nsd     09069CD450632D96   /dev/hdisk5    hdisk    gpfs2                   
 gpfs4nsd     09069CD450632D97   /dev/hdisk4    hdisk    gpfs1                   
 gpfs4nsd     09069CD450632D97   /dev/hdisk6    hdisk    gpfs2                   
 gpfs5nsd     09069CD450632D98   /dev/hdisk5    hdisk    gpfs1                   
 gpfs5nsd     09069CD450632D98   /dev/hdisk7    hdisk    gpfs2                   
 gpfs6nsd     09069CD450632D99   /dev/hdisk8    hdisk    gpfs1                   
 gpfs6nsd     09069CD450632D99   /dev/hdisk8    hdisk    gpfs2                   
 gpfs9nsd     09069CD45063334F   /dev/hdisk3    hdisk    gpfs2                    server node

 
STOPPING GPFS


root@AIX2 [/]
# mmshutdown -a
Wed Sep 26 13:40:10 GRNLNDST 2012: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems
Wed Sep 26 13:40:15 GRNLNDST 2012: 6027-1344 mmshutdown: Shutting down GPFS daemons
gpfs2:  Shutting down!
gpfs1:  Shutting down!
gpfs2:  'shutdown' command about to kill process 417992
gpfs1:  'shutdown' command about to kill process 368662
Wed Sep 26 13:40:22 GRNLNDST 2012: 6027-1345 mmshutdown: Finished
root@AIX2 [/]

ADD A DISK AS TIEBREAKER

root@AIX2 [/]
# mmchconfig tiebreakerdisks=gpfs1nsd
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed

START GPFS

# mmstartup


 


 
VALIDATE THE TIEBREAKER DISK

Note: mmlsconfig below shows gpfs6nsd, so the tiebreaker disk was apparently changed again after the command above.

# mmlsconfig
Configuration data for cluster clustergpfs.AIX1:
------------------------------------------------
clusterName clustergpfs.AIX1
clusterId 650379674846882810
autoload yes
minReleaseLevel 3.3.0.2
dmapiFileHandleSize 32
tiebreakerDisks gpfs6nsd
adminMode central


To simulate downtime, run mmshutdown with no parameters on one of the nodes; mmgetstate -a then shows:
 
 Node number  Node name        GPFS state
------------------------------------------
       1      gpfs1            down
       2      gpfs2            active

After starting GPFS again with mmstartup, both nodes return to active:

# mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      gpfs1            active
       2      gpfs2            active




CREATING A FILE SYSTEM ON SHARED LUNs

# mmcrfs gpfs1 "gpfs3nsd;gpfs4nsd" -B 512K -m 2 -r 2 -Q yes -T /gpfs1 -A automount
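
A breakdown of the options used above:

-B 512K        file system block size
-m 2 -r 2      default number of metadata and data replicas
-Q yes         activate quota enforcement
-T /gpfs1      mount point
-A automount   mount the file system on first access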

ON THE LOCAL DISKS

mmcrfs localgpfs2 "gpfs10nsd;gpfs9nsd" -B 64K -T /localgpfs2 -A automount

mmcrfs localgpfs1 "gpfs11nsd;gpfs12nsd" -B 64K  -T /localgpfs1 -A automount



REMOVE A FILE SYSTEM

# mmumount /gpfs1
Wed Sep 26 15:02:01 GRNLNDST 2012: 6027-1674 mmumount: Unmounting file systems ...
root@AIX1 [/localgpfs1]
# man mmdelfs
root@AIX1 [/localgpfs1]
# mmdelfs /dev/gpfs1
GPFS: 6027-573 All data on following disks of gpfs1 will be destroyed:
    gpfs3nsd
    gpfs4nsd
GPFS: 6027-574 Completed deletion of file system /dev/gpfs1.
mmdelfs: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@AIX1 [/localgpfs1]
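
If the NSDs will not be reused, they can be removed as well; a hedged example with the NSD names from the deleted file system:

# mmdelnsd "gpfs3nsd;gpfs4nsd"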

CHANGE THE MOUNT POINT

# mmchfs /dev/localgpfs1 -T /nuevopunto



CHANGE THE NUMBER OF INODES IN A FILE SYSTEM

# mmchfs /dev/localgpfs1 -F 100000
root@AIX1 [/localgpfs1]
# mmchfs /dev/localgpfs2 -F 62000




ADD DISKS
---------

Create the NSD first (mmcrnsd -F <descriptor file>), then add it to the file system. A failure group of -1 indicates the disk has no point of failure in common with any other disk:

# mmadddisk localgpfs2 gpfs13nsd:::dataAndMetadata:-1:::

GPFS: 6027-531 The following disks of localgpfs2 will be formatted on node AIX1:
    gpfs13nsd: size 26214400 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
GPFS: 6027-1503 Completed adding disks to file system localgpfs2.
mmadddisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@AIX1 [/]

# mmdf /dev/localgpfs2
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 50 GB)
gpfs13nsd            26214400       -1 yes      yes        26212096 (100%)           248 ( 0%)
gpfs9nsd              1048576     4002 yes      yes          588288 ( 56%)           488 ( 0%)
gpfs10nsd             5242880     4002 yes      yes         3448832 ( 66%)           504 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         32505856                              30249216 ( 93%)          1240 ( 0%)

                =============                         ==================== ===================
(total)              32505856                              30249216 ( 93%)          1240 ( 0%)





QUOTAS
-------

Unmount the file systems on all nodes:

# mmumount all -a

Enable quotas on the devices:


# mmchfs /dev/localgpfs1 -Q yes
mmchfs: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
root@AIX1 [/]
# mmchfs /dev/localgpfs2 -Q yes
mmchfs: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.


Create a user and group if needed:

# mkgroup gpfs01

# mkuser pgrp=gpfs01 user01

Edit the quotas for users and groups (mmedquota opens the quota entry in an editor):

# mmedquota -g gpfs01
# mmedquota -u user01

Check the quotas, either for all file systems or for one device:

# mmrepquota -a
# mmrepquota /dev/localgpfs1
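
The limits for a single user or group can also be checked with mmlsquota:

# mmlsquota -u user01
# mmlsquota -g gpfs01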


Create a directory for the user:

# mkdir user01
# chown -R user01:gpfs01 user01

Write to the file system to exercise the quota (lmktemp creates a 1 GB file):

# lmktemp file 1024m
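
lmktemp is AIX-specific; an equivalent 1 GB test file can be created with dd (a sketch writing zeros):

# dd if=/dev/zero of=file bs=1024k count=1024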


CREATE SNAPSHOT
-------------------

# mmcrsnapshot /dev/localgpfs2 user01
Writing dirty data to disk
Quiescing all file system operations
Writing dirty data to disk again
Creating snapshot.
Resuming operations.
root@AIX2 [/newmountgpfs2]
# mmlssnapshot /dev/localgpfs2
Snapshots in file system localgpfs2:
Directory                SnapId    Status          Created                 
user01                   1         Valid           Thu Sep 27 13:19:26 2012
root@AIX2 [/newmountgpfs2]
#


RESTORE FROM A SNAPSHOT
------------------------

The file system must be unmounted on all nodes before restoring:

# mmumount /dev/localgpfs2 -a

# mmrestorefs /dev/localgpfs2 user01

Delete the snapshot when it is no longer needed:

# mmdelsnapshot /dev/localgpfs2 user01



Create a FILESET


# mmcrfileset localgpfs2 localgpfs2fs02
Fileset 'localgpfs2fs02' created.
root@AIX2 [/newmountgpfs2]

# mmlinkfileset localgpfs2 localgpfs2fs02 -J /newmountgpfs2/fs02
Fileset 'localgpfs2fs02' linked at '/newmountgpfs2/fs02'.
root@AIX2 [/newmountgpfs2]
# df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           524288    344996   35%    10337    12% /
/dev/hd2          2097152     35976   99%    41382    64% /usr
/dev/hd9var        524288    212616   60%     7333    14% /var
/dev/hd3           262144    240244    9%       83     1% /tmp
/dev/hd1           262144    245852    7%      152     1% /home
/dev/hd11admin      131072    130708    1%        5     1% /admin
/proc                   -         -    -         -     -  /proc
/dev/hd10opt       524288    237788   55%     8450    14% /opt
/dev/livedump      262144    261776    1%        4     1% /var/adm/ras/livedump
9.6.156.220:/export/nim/GPFS    33554432   9197468   73%    35923     2% /mnt
/dev/localgpfs1     2097152    812288   62%     4044     5% /newmountgpfs1
/dev/localgpfs2    32505856  31240960    4%     4049     7% /newmountgpfs2
root@AIX2 [/newmountgpfs2]
# cd /newmountgpfs2
root@AIX2 [/newmountgpfs2]
# ls -ltr
total 2097233
-rw-r--r--    1 root     system            0 Sep 26 15:33 filesnew
-rw-r--r--    1 root     system   1073741824 Sep 27 12:56 neeeee
drwxr-xr-x    2 user01   202            8192 Sep 27 13:18 user01
dr-xr-xr-x    2 root     system         8192 Sep 27 13:28 .snapshots
----------    1 root     system         1152 Sep 27 14:29 user.quota
----------    1 root     system         1664 Sep 27 14:29 group.quota
----------    1 root     system         1664 Sep 27 14:29 fileset.quota
drwx------    2 root     system         8192 Sep 27 14:29 fs02


 

