Tuesday, December 22, 2015

File - Object Unified Access


What is unified access ?


Unified file and object access enables use cases where you can access the same data through both object and file interfaces. For example: if a user ingests a file through the SMB interface, then users with valid access rights can access that file through the object interface. Conversely, if a user ingests an object through the object interface, then users with valid access rights can access it through the file interface.
 

Why this post ?

In this post I will cover:

  1. Configuration of Spectrum Scale for unified access
  2. Demo of unified access.
Prerequisite: Spectrum Scale 4.2+ should be installed.
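
You can quickly confirm the installed level, for example with the same command used at the end of this post:

[root@vwnode4 ~]# mmfsadm dump version | head -1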

Details of the cluster I'll be using for the demo:

[root@vwnode4 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         vwnode.gpfscluster
  GPFS cluster id:           XXXX548474453088585
  GPFS UID domain:           vwnode.gpfscluster
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   vwnode0           XX.XX.100.110  vwnode0          quorum-perfmon
   2   vwnode1           XX.XX.100.111  vwnode1          quorum-perfmon
   3   vwnode2           XX.XX.100.112  vwnode2          quorum-perfmon
   4   vwnode3           XX.XX.100.113  vwnode3          manager-perfmon
   5   vwnode4           XX.XX.100.114  vwnode4          manager-perfmon

User authentication details

[root@vwnode4 ~]# mmuserauth service list
FILE access configuration : LDAP
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_SERVER_TLS        false
ENABLE_KERBEROS          false
USER_NAME                cn=manager,dc=example,dc=com
SERVERS                  XX.XX.46.17
NETBIOS_NAME             st001
BASE_DN                  dc=example,dc=com
USER_DN                  none
GROUP_DN                 none
NETGROUP_DN              none
USER_OBJECTCLASS         posixAccount
GROUP_OBJECTCLASS        posixGroup
USER_NAME_ATTRIB         cn
USER_ID_ATTRIB           uid
KERBEROS_SERVER          none
KERBEROS_REALM           none

OBJECT access configuration : LDAP
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_ANONYMOUS_BIND    false
ENABLE_SERVER_TLS        false
ENABLE_KS_SSL            false
USER_NAME                cn=manager,dc=example,dc=com
SERVERS                  XX.XX.46.17
BASE_DN                  dc=example,dc=com
USER_DN                  ou=people,dc=example,dc=com
USER_OBJECTCLASS         posixAccount
USER_NAME_ATTRIB         cn
USER_ID_ATTRIB           uid
USER_MAIL_ATTRIB         mail
USER_FILTER              none
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            ldapuser3


Configuration of Unified Access


Step 1: Enable the file-access object capability from any protocol node

[root@vwnode4 ~]# mmobj config change --ccrfile spectrum-scale-object.conf --section capabilities --property file-access-enabled --value true

To validate whether unified access is enabled, you can check the status of the ibmobjectizer service.
If unified access is enabled, ibmobjectizer will be running on exactly one node.

[root@vwnode4 ~]# mmces service list -v --all | grep ibmobjectizer
vwnode3:        OBJ:ibmobjectizer                            is running
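
You can also read the capability back from the object configuration. To the best of my knowledge, mmobj config list accepts the same selectors as mmobj config change, so something like the following should print the current value:

[root@vwnode4 ~]# mmobj config list --ccrfile spectrum-scale-object.conf --section capabilities --property file-access-enabled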

Step 2: For this demo, I am using unified_mode for identity management.
In unified_mode, object and file users are expected to be common and to come from the same directory service (note that I have LDAP user authentication configured for both object and file).
Check this for more information.

[root@vwnode4 ~]# mmobj config change --ccrfile object-server-sof.conf --section DEFAULT --property id_mgmt --value unified_mode
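
As with step 1, you can read the value back to confirm the change (again assuming mmobj config list accepts the same selectors):

[root@vwnode4 ~]# mmobj config list --ccrfile object-server-sof.conf --section DEFAULT --property id_mgmt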

Step 3: Create a policy for unified access.
The following command creates a policy named 'swiftOnFile' with unified file and object access enabled.

[root@vwnode4 ~]# mmobj policy create swiftOnFile --enable-file-access
[I] Getting latest configuration from ccr
[I] Creating fileset /dev/cesSharedRoot:obj_swiftOnFile
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration


Let's check our freshly created policy for unified access.

[root@vwnode4 ~]# mmobj policy list

Index       Name         Default Deprecated Fileset           Functions
-------------------------------------------------------------------------------------
0           SwiftDefault yes                my_object_fileset
56921512210 swiftOnFile                     obj_swiftOnFile   file-and-object-access

You can make this policy the default, though that is optional.

[root@vwnode4 ~]# mmobj policy change swiftOnFile --default


Demo of Unified Access


Now let's create a container and add a file to it.
I am going to use Swift Explorer for this.
If you are new to Swift Explorer, please check my previous post on configuring it:
Accessing Spectrum Scale Object Store using Swift Explorer
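
Screenshots of both steps follow. If you prefer the command line, roughly the same thing can be done with the python-swiftclient; the sketch below is only an illustration, and the auth URL, tenant and password are placeholders you will need to replace with your own values:

# create the container
swift --os-auth-url http://<keystone-host>:5000/v2.0 --os-tenant-name <tenant> \
      --os-username ldapuser3 --os-password <password> post unified_access

# upload a local file1.txt into the container
swift --os-auth-url http://<keystone-host>:5000/v2.0 --os-tenant-name <tenant> \
      --os-username ldapuser3 --os-password <password> upload unified_access file1.txt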

Create a container :

 

Upload a file :

  




Let's check where this file was stored on the server.

[root@vwnode4 ~]# ls -l /ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access
total 0
-rwxr-xr-x. 1 ldapuser3 ldapuser3 11 Dec 21 09:37 file1.txt

Explanation of the path :

/ibm/cesSharedRoot      -- Mount point of the GPFS file system
obj_swiftOnFile         -- The policy create command creates a fileset/directory named 'obj_' followed by your policy name
s56921512210z1device1   -- 's' followed by the policy index, followed by the fixed suffix 'z1device1'
AUTH_2de13f0dae4747b484ed06bc31b29835 -- Unique ID of the tenant, with the fixed prefix 'AUTH_'
unified_access          -- Name of the container

Let's export this container over NFS and check this file from the file interface.

[root@vwnode4 ~]# mmnfs export add /ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access/ -c "*(Access_Type=RW,SecType=sys,Squash=NoIdSquash,Protocols=3:4)"
[root@vwnode4 ~]# mmnfs export list
Path                                                                                                          Delegations Clients
----------------------------------------------------------------------------------------------------------------------------------
/ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access none        * 
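
Before mounting, you can optionally check from the client that the export is visible. This relies on the NFSv3 mount protocol, which is available here since the export allows Protocols=3:4; replace <ces-ip> with one of your CES addresses:

[root@localhost ~]# showmount -e <ces-ip>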
 


Let's mount it on another machine --

[root@localhost ~]# mount -t nfs -o vers=3 viknode:/ibm/cesSharedRoot/obj_swiftOnFile/s56921512210z1device1/AUTH_2de13f0dae4747b484ed06bc31b29835/unified_access /mnt/

Let's check our 'file1.txt'

[root@localhost ~]# ls -ln /mnt/
total 0
-rwxr-xr-x. 1 1002 1002 29 Dec 22 11:41 file1.txt
[root@localhost ~]# id ldapuser3
uid=1002(ldapuser3) gid=1002(ldapuser3) groups=1002(ldapuser3)

Now let's ingest a file from NFS and then get it from the object interface.

[root@localhost ~]# su ldapuser3
bash-4.2$ echo "NFS Create File" > /mnt/nfs_file.txt
bash-4.2$ ls /mnt/
file1.txt  nfs_file.txt

Let's check this new file from the object interface.
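
As a command-line alternative to checking in Swift Explorer, here is a sketch using the same placeholder credentials as before. Keep in mind that files ingested from the file interface become visible to the object interface only after the ibmobjectizer service has objectized them, which happens periodically rather than immediately:

swift --os-auth-url http://<keystone-host>:5000/v2.0 --os-tenant-name <tenant> \
      --os-username ldapuser3 --os-password <password> list unified_access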


You can get more information about unified access here.

Upgrading from 4.1.1 to 4.2

Why this post ?


Upgrading Spectrum Scale is as easy as pie. In this post I demonstrate how to upgrade Spectrum Scale from 4.1.1 to 4.2.0.

Where to start ?

Let's start with a Spectrum Scale 4.1.1 cluster.

Here are my 4.1.1 cluster details -

[root@vwnode1 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         vwnode.gpfscluster
  GPFS cluster id:           1175018713363073732
  GPFS UID domain:           vwnode.gpfscluster
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   vwnode1           XXX.XXX.100.111  vwnode1          quorum
   2   vwnode2           XXX.XXX.100.112  vwnode2          quorum
   3   vwnode3           XXX.XXX.100.113  vwnode3          quorum-manager
   4   vwnode4           XXX.XXX.100.114  vwnode4          manager


Let's upgrade


Extract the new Spectrum Scale package (4.2.0.1 in my case)


[root@vwnode1 ~]# ./Spectrum_Scale_install-4.2.0.1_x86_64 --silent > /dev/null



You can either fill in the whole cluster definition again using the 'spectrumscale' command, or just copy the old cluster definition file if nothing has changed in the cluster since installation.


[root@vwnode1 images]# cp /usr/lpp/mmfs/4.1.1/installer/configuration/clusterdefinition.txt /usr/lpp/mmfs/4.2.0.1/installer/configuration/clusterdefinition.txt
cp: overwrite '/usr/lpp/mmfs/4.2.0.1/installer/configuration/clusterdefinition.txt'? y
[root@vwnode1 images]#

You will see a number of warnings if you check the object parameters:

[root@vwnode1 images]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config object
[ INFO  ] No changes made. Current settings are as follows:
[ INFO  ] Object File System Name is cesSharedRoot
[ INFO  ] Object File System Mountpoint is /ibm/cesSharedRoot
[ INFO  ] Endpoint Hostname is viknode
[ WARN  ] No value for GPFS Object Base in clusterdefinition file.
[ WARN  ] No value for GPFS Fileset inode allocation in clusterdefinition file.
[ WARN  ] No value for Admin Token in clusterdefinition file.
[ WARN  ] No value for Admin User in clusterdefinition file.
[ WARN  ] No value for Admin Password in clusterdefinition file.
[ WARN  ] No value for Swift User in clusterdefinition file.
[ WARN  ] No value for Swift Password in clusterdefinition file.
[ WARN  ] No value for Database Password in clusterdefinition file.
[ INFO  ] Enable S3 is off
[ WARN  ] No value for Multi-region Data File Path in clusterdefinition file.
[ WARN  ] No value for Region Number for Multi-region in clusterdefinition file.



We need to specify the following parameters in order to configure the upgrade.

-ap  -- Specify the password for the Admin User
-dp  -- Specify the password for the object database
-i   -- Specify the GPFS fileset inode allocation to be used
-o   -- Specify the GPFS fileset to be created/used as the object base


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config object -o my_object_fileset -ap Passw0rd -dp Passw0rd -i 10000
[ INFO  ] Setting GPFS Object Base to my_object_fileset
[ INFO  ] Setting GPFS Fileset inode allocation to 10000
Enter the secret encryption key:
Repeat the secret encryption key:
[ INFO  ] Setting Admin Password
[ INFO  ] Setting Database Password
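
Re-running the object configuration query from earlier should now show values for the object base and inode allocation instead of the warnings:

[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config object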


We also need to configure performance monitoring.
You can either enable or disable it as per your requirements.


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale config perfmon -r on
[ INFO  ] Setting Enable Performance Monitoring reconfiguration to on


We should run a pre-check before the upgrade.


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale upgrade -pr


If the upgrade pre-check is successful, we are ready to upgrade.
Just fire the following command, which will perform the upgrade -


[root@vwnode1 ~]# /usr/lpp/mmfs/4.2.0.1/installer/spectrumscale upgrade

Done !

Let's confirm using 'mmfsadm'

[root@vwnode4 ~]# mmfsadm dump version | head -1
Build branch "4.2.0.1 ".
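
A couple of additional sanity checks, if you want them (the exact package list will vary with which components you have installed):

[root@vwnode4 ~]# mmdiag --version
[root@vwnode4 ~]# rpm -qa | grep gpfs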


Wednesday, July 22, 2015

Accessing Spectrum Scale Object Store using Swift Explorer

What is this post about ?



As the title indicates, this post is about configuring Swift Explorer with the Spectrum Scale object store.

What will we need ?


1) A cluster with Spectrum Scale installed
2) The object service provided by Spectrum Scale should be enabled and configured with keystone (a quick check is shown after this list)
3) A client on which you can install Swift Explorer.
4) Your cluster should be reachable from the client.
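
For item 2, a quick way to confirm that the object service is enabled and running on the protocol nodes is the CES service listing:

[root@viknode3 ~]# mmces service list --all | grep OBJ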


How to use Swift Explorer ?


1. Download Swift Explorer.
2. Install the application.
3. Connect it to the object store with proper authentication.

And that's it !! You are ready to go..

Let me show you how I configured Swift Explorer for IBM Spectrum Scale -

My Spectrum Scale cluster configuration -


[root@viknode3 ~]# mmlscluster --ces

GPFS cluster information
========================
  GPFS cluster name:         viknode.gpfscluster
  GPFS cluster id:           18080921923631760149

Cluster Export Services global parameters
-----------------------------------------
  Shared root directory:                /ibm/cesSharedRoot/ces
  Enabled Services:                     SMB NFS OBJ
  Log level:                            0
  Address distribution policy:          even-coverage

 Node  Daemon node name            IP address       CES IP address list
-----------------------------------------------------------------------
   2   viknode3                    XX.XX.100.73      XX.XX.100.76(object_database_node,object_singleton_node)
   3   viknode4                    XX.XX.100.74      XX.XX.100.75


All you need to access the Spectrum Scale object store from Swift Explorer are keystone v2.0 endpoints.

If you don't have them already, you can create them using the following commands :


# openstack endpoint create --region regionOne identity internal http://viknode3:35357/v2.0
# openstack endpoint create --region regionOne identity public http://viknode3:5000/v2.0
# openstack endpoint create --region regionOne identity admin http://viknode3:35357/v2.0

The endpoints for Spectrum Scale on my machine look like this -



[root@viknode3 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+
| 0413ec4c2bed446bb0235dd19167b55a | None      | keystone     | identity     | True    | public    | http://viknode3:5000/v3                    |
| e33971bc36ee43529cf86a06e07d1ef8 | None      | keystone     | identity     | True    | internal  | http://viknode3:35357/v3                   |
| 2dcdc8a16eea4e49884309f3b4f133ce | None      | keystone     | identity     | True    | admin     | http://viknode3:35357/v3                   |
| 8c7751fb59e6402a88609ce221bd155f | regionOne | keystone     | identity     | True    | internal  | http://viknode3:35357/v2.0                 |
| fc49659bff0343baa96f02f157c36609 | regionOne | keystone     | identity     | True    | public    | http://viknode3:5000/v2.0                  |
| c933034ee2344292bc9e67adb4713f69 | regionOne | keystone     | identity     | True    | admin     | http://viknode3:35357/v2.0                 |
| 0439ebec239747e9ae008c8030ab8ec1 | regionOne | swift        | object-store | True    | public    | http://viknode3:8080/v1/AUTH_%(tenant_id)s |
| 8bfd1aba25ff460b84fb610b00013a3c | regionOne | swift        | object-store | True    | internal  | http://viknode3:8080/v1/AUTH_%(tenant_id)s |
| 169414a767834dd3a9c44527bbcab5c9 | regionOne | swift        | object-store | True    | admin     | http://viknode3:8080                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------------------+

Configuration for Swift Explorer -

You can download Swift Explorer from here, as per your client machine's requirements.



Install the application

(I am using a Windows client)


Open the application once you are done with installation.



Click on "Account" and select "Keystone Login"
It will popup a form for keystone credentials as shown in image.
Fill the credentials for keystone and click on "Ok"


If your credentials are correct, the explorer will open and you will be able to see your containers and the objects in them.
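
If Swift Explorer reports an authentication failure, it can help to test the same credentials directly against the keystone v2.0 tokens API. A minimal sketch with curl; the tenant, user and password below are examples that you should replace with your own:

curl -s -X POST http://viknode3:5000/v2.0/tokens \
     -H "Content-Type: application/json" \
     -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "Passw0rd"}}}'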