Wednesday, March 16, 2016

File Placement Optimizer (FPO) setup with Spectrum Scale 4.2

What is File Placement Optimizer (FPO) ?

 

         GPFS File Placement Optimizer (FPO) is a set of features that allows GPFS to operate efficiently in a shared-nothing architecture. It is particularly useful for "big data" applications that process massive amounts of data.
         

Why this post ?

 

          The Spectrum Scale 4.2 installer toolkit does not support enabling FPO at installation time. This post provides a step-by-step guide for configuring an FPO setup alongside the Spectrum Scale installer toolkit.

Where to start ?

 

               Let's start by extracting and configuring the Spectrum Scale installer toolkit, just as for a regular setup. Here are the details of my setup as I gave them to the installer toolkit. If you are looking for more help with the installer toolkit, you will find it here - Overview of the spectrumscale installation toolkit

[root@viknode1 installer]# ./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 10.0.100.71
[ INFO  ]
[ INFO  ] [Cluster Name]
[ INFO  ] vwnode.gpfscluster
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Enabled
[ INFO  ] SMB : Enabled
[ INFO  ] NFS : Enabled
[ INFO  ]
[ INFO  ] GPFS Node Admin  Quorum  Manager  NSD Server  Protocol  GUI Server
[ INFO  ] viknode1   X       X                  X
[ INFO  ] viknode2           X                  
[ INFO  ] viknode3           X                  
[ INFO  ] viknode4                   X                     X
[ INFO  ] viknode5                   X                     X
[ INFO  ]
[ INFO  ] [Export IP address]
[ INFO  ] 10.0.100.76 (pool)
[ INFO  ] 10.0.100.77 (pool)
[root@viknode1 installer]# ./spectrumscale nsd list
[ INFO  ] Name FS            Size(GB) Usage           FG Pool    Device        Servers
[ INFO  ] nsd1 cesSharedRoot unknown  dataAndMetadata 1  Default /dev/dm-2     [viknode1]

Here I have added one NSD, which will be required by the cesSharedRoot file system.
The CES shared root (cesSharedRoot) is needed for storing CES shared configuration data, for protocol recovery, and for other protocol-specific purposes.
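For reference, a configuration like the one above is entered with the toolkit's node and nsd sub-commands. The sketch below is only an outline from my notes, and the node roles are illustrative; flag names can vary between toolkit versions, so check ./spectrumscale node add -h and ./spectrumscale nsd add -h before using it.

[root@viknode1 installer]# ./spectrumscale setup -s 10.0.100.71
[root@viknode1 installer]# ./spectrumscale node add viknode1 -a -q -n   # admin, quorum, NSD server
[root@viknode1 installer]# ./spectrumscale node add viknode4 -p         # protocol node
[root@viknode1 installer]# ./spectrumscale config protocols -e 10.0.100.76,10.0.100.77
[root@viknode1 installer]# ./spectrumscale nsd add -p viknode1 -fs cesSharedRoot "/dev/dm-2"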
Here is a high-level diagram for this setup -


Let's run the install command to install the base GPFS packages and create the GPFS cluster.

[root@viknode1 installer]# ./spectrumscale install


Configuring NSDs for FPO 

 

               Configuring NSDs is most of the work in an FPO setup. According to IBM's official documentation, it is recommended that a GPFS FPO configuration have two storage pools: a system pool for metadata only and a data pool. On my setup I will create three storage pools: a fast pool, a slow pool, and the system pool. The fast pool, say, holds all SSDs and other fast disks; the slow pool holds all HDDs and other slow disks; and the pool named 'system' stores metadata.
Storage pool:
 
Storage pool stanzas are used to specify the type of layout map and write affinity depth, and to enable write affinity, for each storage pool.
Storage pool stanzas have the following format:

%pool: 
  pool=StoragePoolName  # name of the storage pool.
  blockSize=BlockSize  # the block size of the disks in the storage pool.
  usage={dataOnly | metadataOnly | dataAndMetadata}  # the type of data to be stored in the storage pool.
  layoutMap={scatter | cluster}  # The block allocation map type cannot be changed after the storage pool has been created.
  allowWriteAffinity={yes | no}  # Indicates whether the IBM Spectrum Scale File Placement Optimizer (FPO) feature is to be enabled for the storage pool.
  writeAffinityDepth={0 | 1 | 2}  # Specifies the allocation policy to be used by the node writing the data. It is also used for FPO-enabled pools.
  blockGroupFactor=BlockGroupFactor  # Specifies how many file system blocks are laid out sequentially on disk to behave like a single large block. This option only works on FPO enabled pools, where --allow-write-affinity is set for the data pool. 
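A quick note on how blockSize and blockGroupFactor combine: the effective FPO chunk laid out contiguously on one disk is blockSize multiplied by blockGroupFactor, which is where the 128 MB chunks used later in this post come from.

# Effective FPO chunk size = blockSize x blockGroupFactor
#   1024 KB x 128 = 131072 KB = 128 MB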

For more details check Planning for IBM Spectrum Scale FPO
NSD:

          Every local disk to be used by GPFS must have a matching entry in the NSD stanza file.
          NSD stanzas have the following format:

%nsd:
  device=DiskName  # device name that appears in /dev
  nsd=NsdName  # name of the NSD to be created
  servers=ServerList  # comma-separated list of NSD server nodes
  usage={dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}  # disk usage
  failureGroup=FailureGroup  # the failure group to which this disk belongs
  pool=StoragePool  # the name of the storage pool to which the NSD is assigned

On my setup I have three virtual disks, one on each of three machines, which I'll use as NSD devices.

[root@viknode1 ~]# ls /dev/dm-3
/dev/dm-3
[root@viknode2 ~]# ls /dev/dm-4
/dev/dm-4
[root@viknode3 ~]# ls /dev/dm-5
/dev/dm-5

Now let's create a new stanza file in /tmp:

[root@viknode1 ~]# cat /tmp/newStanzaFile
%pool:
pool=fast
layoutMap=cluster
blocksize=1024K
allowWriteAffinity=yes  # this option enables FPO feature
writeAffinityDepth=1  # place 1st copy on disks local to the node writing data
blockGroupFactor=128  # Defines chunk size of 128MB

%pool:
pool=slow
layoutMap=cluster
blocksize=1024K
allowWriteAffinity=yes  # this option enables FPO feature
writeAffinityDepth=1  # place 1st copy on disks local to the node writing data
blockGroupFactor=128  # Defines chunk size of 128MB

# Disks in system pool are defined for metadata
%nsd:
nsd=nsd2
device=/dev/dm-3
servers=viknode1
usage=metadataOnly
failureGroup=101
pool=system

# Disks in fast pool
%nsd:
nsd=nsd3
device=/dev/dm-4
servers=viknode2
usage=dataOnly
failureGroup=102
pool=fast

# Disk(s) in slow pool
%nsd:
nsd=nsd4
device=/dev/dm-5
servers=viknode3
usage=dataOnly
failureGroup=103
pool=slow

Here, I have three pools -
1) System pool - created by default by the installer toolkit. I will use it to store metadata.
2) Fast pool - for fast disks; used to store data.
3) Slow pool - for slow disks; used to store data.

Let's create these NSDs:

[root@viknode1 ~]# mmcrnsd -F /tmp/newStanzaFile

Creating NSDs is an asynchronous process.
After the NSDs are created, you can check them using the mmlsnsd command.

[root@viknode1 ~]# mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 (free disk)   nsd1         viknode1
 (free disk)   nsd2         viknode1
 (free disk)   nsd3         viknode2
 (free disk)   nsd4         viknode3

Now we are going to create a GPFS file system on these NSDs. I am going with all default parameters, but you can tune them to your requirements. Here is the guide to the mmcrfs command.

[root@viknode1 ~]#  mmcrfs gpfs0 -F /tmp/newStanzaFile -T /ibm/gpfs0

Once the file system is created, you can check it with the mmlsfs command.

[root@viknode1 installer]# mmlsfs all

File system attributes for /dev/gpfs0:
======================================
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 8192                     Minimum fragment size in bytes (system pool)
                    32768                    Minimum fragment size in bytes (other pools)
 -i                 4096                     Inode size in bytes
 -I                 16384                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 nfs4                     ACL semantics in effect
 -n                 32                       Estimated number of nodes that will mount file system
 -B                 262144                   Block size (system pool)
                    1048576                  Block size (other pools)
 -Q                 none                     Quotas accounting enabled
                    none                     Quotas enforced
                    none                     Default quotas enabled
 --perfileset-quota No                       Per-fileset quota enforcement
 --filesetdf        No                       Fileset df enabled?
 -V                 15.01 (4.2.0.0)          File system version
 --create-time      Thu Apr  7 08:06:30 2016 File system creation time
 -z                 No                       Is DMAPI enabled?
 -L                 4194304                  Logfile size
 -E                 Yes                      Exact mtime mount option
 -S                 No                       Suppress atime mount option
 -K                 whenpossible             Strict replica allocation option
 --fastea           Yes                      Fast external attributes enabled?
 --encryption       No                       Encryption enabled?
 --inode-limit      65792                    Maximum number of inodes
 --log-replicas     0                        Number of log replicas
 --is4KAligned      Yes                      is4KAligned?
 --rapid-repair     Yes                      rapidRepair enabled?
 --write-cache-threshold 0                   HAWC Threshold (max 65536)
 -P                 system;fast;slow         Disk storage pools in file system
 -d                 nsd2;nsd3;nsd4           Disks in file system
 -A                 yes                      Automatic mount option
 -o                 none                     Additional mount options
 -T                 /ibm/gpfs0               Default mount point
 --mount-priority   0                        Mount priority
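You can also confirm which NSD ended up in which storage pool and failure group with the mmlsdisk command; its per-disk output includes the failure group, the metadata/data flags, status, availability, and the storage pool (output omitted here):

[root@viknode1 installer]# mmlsdisk gpfs0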

You can check the storage pools with the mmlspool command.

[root@viknode1 installer]# mmlspool gpfs0 all -L
Pool:
  name                   = system
  poolID                 = 0
  blockSize              = 256 KB
  usage                  = metadataOnly
  maxDiskSize            = 98 GB
  layoutMap              = cluster
  allowWriteAffinity     = no
  writeAffinityDepth     = 0
  blockGroupFactor       = 1

Pool:
  name                   = fast
  poolID                 = 65537
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 64 GB
  layoutMap              = cluster
  allowWriteAffinity     = yes
  writeAffinityDepth     = 1
  blockGroupFactor       = 128

Pool:
  name                   = slow
  poolID                 = 65538
  blockSize              = 1024 KB
  usage                  = dataOnly
  maxDiskSize            = 64 GB
  layoutMap              = cluster
  allowWriteAffinity     = yes
  writeAffinityDepth     = 1
  blockGroupFactor       = 128

'allowWriteAffinity = yes' in the above output shows that the disks in the pool are enabled for FPO.
Let's mount this file system on all nodes.

[root@viknode1 ~]# mmmount gpfs0 -a
Wed Mar 16 10:40:42 EDT 2016: mmmount: Mounting file systems ...
[root@viknode1 ~]# mmlsmount gpfs0
File system gpfs0 is mounted on 5 nodes.
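One note before moving on: because the system pool here is metadataOnly, GPFS has nowhere to place file data until a file placement policy directs it to one of the data pools. Here is a minimal sketch of such a policy (the file name /tmp/placementPolicy.txt is just an example, and the rule simply sends all new files to the 'fast' pool; adjust it to your needs):

[root@viknode1 ~]# cat /tmp/placementPolicy.txt
RULE 'default' SET POOL 'fast'
[root@viknode1 ~]# mmchpolicy gpfs0 /tmp/placementPolicy.txt
[root@viknode1 ~]# mmlspolicy gpfs0 -L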

Enable protocols as per your requirements.
Don't forget to specify the correct file system and mount point when configuring the protocols for deployment. Here is my final toolkit configuration before deploying:

[root@viknode1 installer]# ./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 10.0.100.71
[ INFO  ]
[ INFO  ] [Cluster Name]
[ INFO  ] vwnode.gpfscluster
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Enabled
[ INFO  ] SMB : Enabled
[ INFO  ] NFS : Enabled
[ INFO  ]
[ INFO  ] GPFS Node Admin  Quorum  Manager  NSD Server  Protocol  GUI Server
[ INFO  ] viknode1   X       X                  X
[ INFO  ] viknode2           X                  X
[ INFO  ] viknode3           X                  X
[ INFO  ] viknode4                   X                     X
[ INFO  ] viknode5                   X                     X
[ INFO  ]
[ INFO  ] [Export IP address]
[ INFO  ] 10.0.100.76 (pool)
[ INFO  ] 10.0.100.77 (pool)
[root@viknode1 installer]# ./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
[root@viknode1 installer]# ./spectrumscale config object -f gpfs0 -m /ibm/gpfs0

Now you can deploy the protocols, and your setup will be ready with FPO.

[root@viknode1 installer]# ./spectrumscale deploy
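Once the deployment finishes, you can quickly verify that the protocol services came up on the protocol nodes, for example:

[root@viknode1 installer]# mmces service list --all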

For more details, here are some recommended videos -
Spectrum Scale (GPFS) for Hadoop Technical Introduction (Part 1 of 2)
Spectrum Scale (GPFS) for Hadoop Technical Introduction (Part 2 of 2)

Tuesday, December 22, 2015

File - Object Unified Access


What is unified-access ?


Unified file and object access enables use cases where you can access data through both the object and file interfaces. For example, if a user ingests a file through the SMB interface, users with valid access rights can access that file through the object interface. Conversely, if a user ingests an object through the object interface, users with valid access rights can access that file through the file interface.
 

Why this post ?

  1. Configuration of Spectrum Scale for unified access
  2. Demo of unified access.
Prerequisite: Spectrum Scale 4.2+ should be installed.

Details of the cluster I'll be using for the demo:

[root@vwnode4 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         vwnode.gpfscluster
  GPFS cluster id:           XXXX548474453088585
  GPFS UID domain:           vwnode.gpfscluster
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   vwnode0           XX.XX.100.110  vwnode0          quorum-perfmon
   2   vwnode1           XX.XX.100.111  vwnode1          quorum-perfmon
   3   vwnode2           XX.XX.100.112  vwnode2          quorum-perfmon
   4   vwnode3           XX.XX.100.113  vwnode3          manager-perfmon
   5   vwnode4           XX.XX.100.114  vwnode4          manager-perfmon

User authentication details

[root@vwnode4 ~]# mmuserauth service list
FILE access configuration : LDAP
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_SERVER_TLS        false
ENABLE_KERBEROS          false
USER_NAME                cn=manager,dc=example,dc=com
SERVERS                  XX.XX.46.17
NETBIOS_NAME             st001
BASE_DN                  dc=example,dc=com
USER_DN                  none
GROUP_DN                 none
NETGROUP_DN              none
USER_OBJECTCLASS         posixAccount
GROUP_OBJECTCLASS        posixGroup
USER_NAME_ATTRIB         cn
USER_ID_ATTRIB           uid
KERBEROS_SERVER          none
KERBEROS_REALM           none

OBJECT access configuration : LDAP
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_ANONYMOUS_BIND    false
ENABLE_SERVER_TLS        false
ENABLE_KS_SSL            false
USER_NAME                cn=manager,dc=example,dc=com
SERVERS                  XX.XX.46.17
BASE_DN                  dc=example,dc=com
USER_DN                  ou=people,dc=example,dc=com
USER_OBJECTCLASS         posixAccount
USER_NAME_ATTRIB         cn
USER_ID_ATTRIB           uid
USER_MAIL_ATTRIB         mail
USER_FILTER              none
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            ldapuser3


Configuration of Unified Access


Step 1: Enable the file-access object capability from any protocol node

[root@vwnode4 ~]# mmobj config change --ccrfile spectrum-scale-object.conf --section capabilities --property file-access-enabled --value true

To validate whether unified access is enabled, you can check the status of the ibmobjectizer service.
If unified access is enabled, ibmobjectizer must be running on exactly one node.

[root@vwnode4 ~]# mmces service list -v --all | grep ibmobjectizer
vwnode3:        OBJ:ibmobjectizer                            is running

Step 2: For this demo, I am using unified_mode for identity management.
In unified_mode, object and file users are expected to be common and to come from the same directory service (note that I have LDAP user authentication configured for both object and file).
Check this for more information.

[root@vwnode4 ~]# mmobj config change --ccrfile object-server-sof.conf --section DEFAULT --property id_mgmt --value unified_mode
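If you want to confirm the setting took effect, you can read the property back; to the best of my knowledge, mmobj config list accepts the same --ccrfile/--section/--property options:

[root@vwnode4 ~]# mmobj config list --ccrfile object-server-sof.conf --section DEFAULT --property id_mgmt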

Step 3: Create a policy for unified access.
The following command creates a policy named 'swiftOnFile' with unified access enabled.

[root@vwnode4 ~]# mmobj policy create swiftOnFile --enable-file-access
[I] Getting latest configuration from ccr
[I] Creating fileset /dev/cesSharedRoot:obj_swiftOnFile
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration


Let's check our freshly created policy for unified access.

[root@vwnode4 ~]# mmobj policy list

Index       Name         Default Deprecated Fileset           Functions
-------------------------------------------------------------------------------------
0           SwiftDefault yes                my_object_fileset
56921512210 swiftOnFile                     obj_swiftOnFile   file-and-object-access

You can optionally make this policy the default.

[root@vwnode4 ~]# mmobj policy change swiftOnFile --default


Demo of Unified Access


Now let's create a container and add a file to it.
I am going to use Swift Explorer for this.
If you are new to Swift Explorer, please check my previous post on configuring it -
Accessing Spectrum Scale Object Store using Swift Explorer

Create a container :

 

Upload a file :

  




Let's check where this file was placed on the server.

[root@vwnode4 ~]# ls -l /ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access
total 0
-rwxr-xr-x. 1 ldapuser3 ldapuser3 11 Dec 21 09:37 file1.txt

Explanation of the path:

/ibm/cesSharedRoot      -- Mount point for the GPFS file system
obj_swiftOnFile         -- The policy create command creates a fileset/directory named after your policy
s56921512210z1device1   -- 's' followed by the policy index, followed by the fixed suffix 'z1device1'
AUTH_2de13f0dae4747b484ed06bc31b29835 -- Unique ID for a tenant, with the fixed prefix 'AUTH_'
unified_access          -- Name of the container
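To tie this back to the object interface, here is roughly how an object URL maps onto that path; the host is whichever CES IP serves the object endpoint on port 8080:

# Object URL:
#   http://<CES IP>:8080/v1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access/file1.txt
# corresponding file system path:
#   /ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access/file1.txt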
Let's export this container over NFS and check this file from the file interface.

[root@vwnode4 ~]# mmnfs export add /ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access/ -c "*(Access_Type=RW,SecType=sys,Squash=NoIdSquash,Protocols=3:4)"
[root@vwnode4 ~]# mmnfs export list
Path                                                                                                          Delegations Clients
----------------------------------------------------------------------------------------------------------------------------------
/ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access none        * 
 


Let's mount it on some other machine:

[root@localhost ~]# mount -t nfs -o vers=3 viknode:/ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access /mnt/

Let's check our 'file1.txt'

[root@localhost ~]# ls -ln /mnt/
total 0
-rwxr-xr-x. 1 1002 1002 29 Dec 22 11:41 file1.txt
[root@localhost ~]# id ldapuser3
uid=1002(ldapuser3) gid=1002(ldapuser3) groups=1002(ldapuser3)

Now let's ingest a file from NFS and try to get it from the object interface.

[root@localhost ~]# su ldapuser3
bash-4.2$ echo "NFS Create File" > /mnt/nfs_file.txt
bash-4.2$ ls /mnt/
file1.txt  nfs_file.txt

Let's check this new file from the object interface.
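If you prefer the command line to Swift Explorer, the python-swiftclient can do the same check; the auth URL host, tenant, and password below are placeholders for your environment, and the listing should show both file1.txt and nfs_file.txt:

$ swift --os-auth-url http://<protocol-node>:5000/v2.0 --os-tenant-name <tenant> \
        --os-username ldapuser3 --os-password <password> list unified_access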


You can get more information about unified access here.

Upgrading from 4.1.1 to 4.2

Why this post ?


Upgrading Spectrum Scale is as easy as pie. In this post I demonstrate how to upgrade Spectrum Scale from 4.1.1 to 4.2.0.

Where to start ?

Let's start with a Spectrum Scale 4.1.1 cluster.

Here are my 4.1.1 cluster details -

[root@vwnode1 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         vwnode.gpfscluster
  GPFS cluster id:           1175018713363073732
  GPFS UID domain:           vwnode.gpfscluster
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   vwnode1           XXX.XXX.100.111  vwnode1          quorum
   2   vwnode2           XXX.XXX.100.112  vwnode2          quorum
   3   vwnode3           XXX.XXX.100.113  vwnode3          quorum-manager
   4   vwnode4           XXX.XXX.100.114  vwnode4          manager


Let's upgrade


Extract the new Spectrum Scale package (4.2.0.1 in my case):


[root@vwnode1 ~]# ./Spectrum_Scale_install-4.2.0.1_x86_64 --silent > /dev/null



You can either fill in the entire cluster definition again using the 'spectrumscale' command, or simply copy the old cluster definition file if nothing in the cluster has changed since installation.


[root@vwnode1 images]# cp /usr/lpp/mmfs/4.1.1/installer/configuration/clusterdefinition.txt /usr/lpp/mmfs/4.2.0.1/installer/configuration/clusterdefinition.txt
cp: overwrite '/usr/lpp/mmfs/4.2.0.1/installer/configuration/clusterdefinition.txt'? y
[root@vwnode1 images]#

You will see lots of warnings if you check the object parameters:

[root@vwnode1 images]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config object
[ INFO  ] No changes made. Current settings are as follows:
[ INFO  ] Object File System Name is cesSharedRoot
[ INFO  ] Object File System Mountpoint is /ibm/cesSharedRoot
[ INFO  ] Endpoint Hostname is viknode
[ WARN  ] No value for GPFS Object Base in clusterdefinition file.
[ WARN  ] No value for GPFS Fileset inode allocation in clusterdefinition file.
[ WARN  ] No value for Admin Token in clusterdefinition file.
[ WARN  ] No value for Admin User in clusterdefinition file.
[ WARN  ] No value for Admin Password in clusterdefinition file.
[ WARN  ] No value for Swift User in clusterdefinition file.
[ WARN  ] No value for Swift Password in clusterdefinition file.
[ WARN  ] No value for Database Password in clusterdefinition file.
[ INFO  ] Enable S3 is off
[ WARN  ] No value for Multi-region Data File Path in clusterdefinition file.
[ WARN  ] No value for Region Number for Multi-region in clusterdefinition file.



We need to specify the following parameters in order to configure the upgrade:

-ap  -- Specify the password for the Admin User
-dp  -- Specify the password for the object database
-i   -- Specify the GPFS fileset inode allocation to be used
-o   -- Specify the GPFS fileset to be created/used as the object base


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config object -o my_object_fileset -ap Passw0rd -dp Passw0rd -i 10000
[ INFO  ] Setting GPFS Object Base to my_object_fileset
[ INFO  ] Setting GPFS Fileset inode allocation to 10000
Enter the secret encryption key:
Repeat the secret encryption key:
[ INFO  ] Setting Admin Password
[ INFO  ] Setting Database Password


We also need to configure performance monitoring.
You can either enable or disable performance monitoring as per your requirement.


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config perfmon -r on
[ INFO  ] Setting Enable Performance Monitoring reconfiguration to on


We should run the pre-check before upgrading.


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale upgrade -pr


If the upgrade pre-check is successful, we are ready to upgrade.
Just fire the following command, which performs the upgrade -


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale upgrade

Done !

Let's confirm using 'mmfsadm'

[root@vwnode4 ~]# mmfsadm dump version | head -1
Build branch "4.2.0.1 ".
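One optional follow-up worth mentioning: once all nodes are running the new level, the cluster and file system versions are typically raised so that new 4.2 features become available (only do this once you are sure you will not roll back):

[root@vwnode1 ~]# mmchconfig release=LATEST
[root@vwnode1 ~]# mmchfs cesSharedRoot -V full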


Wednesday, July 22, 2015

Accessing Spectrum Scale Object Store using Swift Explorer

What this post is about ?



   As the title indicates, this post is about configuring Swift Explorer to work with the Spectrum Scale object store.

What we will need ?


1) A cluster with Spectrum Scale installed
2) The object service provided by Spectrum Scale enabled and configured with Keystone
3) A client on which you can install Swift Explorer
4) The cluster reachable (pingable) from the client


How to use Swift Explorer ?


1. Download Swift Explorer.
2. Install application.
3. Connect it to object store with proper authentication.

And that's it !! You are ready to go..

Let me show you how I configured Swift Explorer for IBM Spectrum Scale -

My Spectrum Scale cluster configuration -


[root@viknode3 ~]# mmlscluster --ces

GPFS cluster information
========================
  GPFS cluster name:         viknode.gpfscluster
  GPFS cluster id:           18080921923631760149

Cluster Export Services global parameters
-----------------------------------------
  Shared root directory:                /ibm/cesSharedRoot/ces
  Enabled Services:                     SMB NFS OBJ
  Log level:                            0
  Address distribution policy:          even-coverage

 Node  Daemon node name            IP address       CES IP address list
-----------------------------------------------------------------------
   2   viknode3                    XX.XX.100.73      XX.XX.100.76(object_database_node,object_singleton_node)
   3   viknode4                    XX.XX.100.74      XX.XX.100.75


All you need to access the Spectrum Scale object store are Keystone v2.0 endpoints.

If you don't have them already, you can create them using the following commands:


# openstack endpoint create --region regionOne identity internal http://viknode3:35357/v2.0
# openstack endpoint create --region regionOne identity public http://viknode3:5000/v2.0
# openstack endpoint create --region regionOne identity admin http://viknode3:35357/v2.0

The endpoints for Spectrum Scale on my machine look like this:



[root@viknode3 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+
| 0413ec4c2bed446bb0235dd19167b55a | None      | keystone     | identity     | True    | public    | http://viknode3:5000/v3                    |
| e33971bc36ee43529cf86a06e07d1ef8 | None      | keystone     | identity     | True    | internal  | http://viknode3:35357/v3                   |
| 2dcdc8a16eea4e49884309f3b4f133ce | None      | keystone     | identity     | True    | admin     | http://viknode3:35357/v3                   |
| 8c7751fb59e6402a88609ce221bd155f | regionOne | keystone     | identity     | True    | internal  | http://viknode3:35357/v2.0                 |
| fc49659bff0343baa96f02f157c36609 | regionOne | keystone     | identity     | True    | public    | http://viknode3:5000/v2.0                  |
| c933034ee2344292bc9e67adb4713f69 | regionOne | keystone     | identity     | True    | admin     | http://viknode3:35357/v2.0                 |
| 0439ebec239747e9ae008c8030ab8ec1 | regionOne | swift        | object-store | True    | public    | http://viknode3:8080/v1/AUTH_%(tenant_id)s |
| 8bfd1aba25ff460b84fb610b00013a3c | regionOne | swift        | object-store | True    | internal  | http://viknode3:8080/v1/AUTH_%(tenant_id)s |
| 169414a767834dd3a9c44527bbcab5c9 | regionOne | swift        | object-store | True    | admin     | http://viknode3:8080                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+
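Before configuring Swift Explorer, you can sanity-check the v2.0 endpoint with a plain token request; the tenant, user name, and password below are placeholders for your Keystone credentials:

$ curl -s -X POST http://viknode3:5000/v2.0/tokens -H "Content-Type: application/json" \
    -d '{"auth": {"tenantName": "<tenant>", "passwordCredentials": {"username": "<user>", "password": "<password>"}}}'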

Configuration for Swift Explorer -

You can download Swift Explorer from here, according to your client machine's requirements.



Install the application

(I am using a Windows client)


Open the application once you are done with installation.



Click on "Account" and select "Keystone Login"
It will popup a form for keystone credentials as shown in image.
Fill the credentials for keystone and click on "Ok"


If your credentials are correct, the explorer will open, and you can see your containers and the objects inside each container.