OneFS Password Security Policy

Among the slew of security enhancements introduced in OneFS 9.5 is the ability to mandate a more stringent password policy. This is required in order to comply with security requirements such as the US military STIG, which stipulates:

Requirement Description
Length An OS or network device must enforce a minimum 15-character password length.
Percentage An OS must require the change of at least 50% of the total number of characters when passwords are changed.
Position A network device must require that when a password is changed, the characters are changed in at least eight of the positions within the password.
Temporary password The OS must allow the use of a temporary password for system logons with an immediate change to a permanent password.

The OneFS password security architecture can be summarized as follows:

Within the OneFS security subsystem, authentication is handled by LSASSD, the daemon that services authentication requests for lwiod.

Component Description
LSASSD The local security authority subsystem service (LSASS) handles authentication and identity management as users connect to the cluster.
File provider The file provider includes users from /etc/passwd and groups from /etc/group.
Local provider The local provider includes local cluster accounts like ‘anonymous’, ‘guest’, etc.
SSHD OpenSSH Daemon which provides secure encrypted communications between a client and a cluster node over an insecure network.
pAPI The OneFS Platform API (PAPI), which provides programmatic interfaces to OneFS configuration and management via a RESTful HTTPS service.

In OneFS AIMA, there are several kinds of backend provider: local, file, Active Directory (AD), NIS, and so on. Each provider is responsible for managing the users and groups within it. For OneFS password policy enforcement, the local and file providers are the focus.

The local provider is based on a SamDB-style file stored under the prefix path /ifs/.ifsvar, and its provider settings can be viewed with the following CLI syntax:

# isi auth local view System

On the other hand, the file provider is based on the FreeBSD spwd.db file, and its configuration can be viewed by the following CLI command:

# isi auth file view System

Each provider stores and manages its own users. For the local provider, the ‘isi auth users create’ CLI command creates a user inside the provider by default. However, the file provider has no corresponding command; instead, the ‘pw’ CLI command can be used to create a new file provider user.

After the user is created, the ‘isi auth users modify <USER>’ CLI command can be used to change the user’s attributes for both the file and local providers. However, not all attributes are supported by both providers. For example, the file provider does not support password expiry.

 

The fundamental password policy CLI changes introduced in OneFS 9.5 are as follows:

Operation OneFS 9.5 Change Details
change-password Modified The old password must now be provided, so that OneFS can calculate how many characters, and what percentage, have changed
reset-password Added Generates a temporary password, meeting the current password policy, for the user to log in with
set-password Deprecated Did not require the old password to be provided

A user’s password can now be set, changed, and reset by either ‘root’ or ‘admin’, via the new ‘isi auth users change-password’ and ‘isi auth users reset-password’ CLI commands. The latter, for example, returns a temporary password and requires the user to change it on next login. After logging in with the temporary (albeit secure) password, OneFS immediately forces the user to change it:

# whoami
admin

# isi auth users reset-password user1
4$_x\d\Q6V9E:sH

# ssh user1@localhost
(user1@localhost) Password:
(user1@localhost) Your password has expired.
You are required to immediately change your password.
Changing password for user1
New password:
(user1@localhost) Re-enter password:
Last login: Wed May 17 08:02:47 from 127.0.0.1
PowerScale OneFS 9.5.0.0

# whoami
user1
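Under the hood, ‘reset-password’ must generate a random string that satisfies the active policy. OneFS’s actual generator is internal, but the general technique can be sketched as follows (illustrative only):

```python
import secrets
import string

def temp_password(length: int = 15) -> str:
    """Generate a random password containing at least one lowercase,
    uppercase, digit, and symbol character (illustrative sketch)."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    alphabet = "".join(classes)
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if all(any(ch in cls for ch in candidate) for cls in classes):
            return candidate
```

Rejection sampling keeps the selection uniform over compliant passwords, at the cost of an occasional retry.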

Also in OneFS 9.5 and later, the CLI ‘isi auth local view system’ command sees the addition of four new fields:

  • Password Chars Changed
  • Password Percent Changed
  • Password Hash Type
  • Max Inactivity Days

For example:

# isi auth local view system
                    Name: System
                  Status: active
          Authentication: Yes
   Create Home Directory: Yes
 Home Directory Template: /ifs/home/%U
        Lockout Duration: Now
       Lockout Threshold: 0
          Lockout Window: Now
             Login Shell: /bin/zsh
            Machine Name:
        Min Password Age: Now
        Max Password Age: 4W
     Min Password Length: 0
    Password Prompt Time: 2W
     Password Complexity: -
 Password History Length: 0
  Password Chars Changed: 0
Password Percent Changed: 0
      Password Hash Type: NTHash
     Max Inactivity Days: 0

The following CLI command syntax configures OneFS to require a minimum password length of 15 characters, a 50% or greater change, and 8 or more characters to be altered for a successful password reset:

# isi auth local modify system --min-password-length 15 --password-chars-changed 8 --password-percent-changed 50

Next, a command is issued to create a new user, ‘user2’, with a 10-character password:

# isi auth users create user2 --password 0123456789

Failed to add user user2: The specified password does not meet the configured password complexity or history requirements

This attempt fails because the password does not meet the configured password criteria (15 chars, 50% change, 8 chars to be altered).

Instead, the password for the new account, ‘user2’, is set to an appropriate value, i.e. ‘0123456789abcdef’. Additionally, the ‘--prompt-password-change’ flag is included to force the user to change their password on next login.

# isi auth users create user2 --password 0123456789abcdef --prompt-password-change 1

On logging in to the ‘user2’ account, OneFS immediately prompts for a new password. In the example below, the following non-compliant password (‘012345678zyxw’) is entered.

  • 0123456789abcdef -> 012345678zyxw = Failure

This change attempt fails, since the new password does not meet the 15-character minimum:

# su user2
New password:
Re-enter password:
The specified password does not meet the configured password complexity requirements.
Your password must meet the following requirements:
  * Must contain at least 15 characters.
  * Must change at least 8 characters.
  * Must change at least 50% of characters.
New password:

Instead, a compliant password and successful change could be:

  • 0123456789abcdef -> 0123456zyxwvuts = Success
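The arithmetic behind these checks can be sketched as follows. Note that this is an illustrative approximation; the exact comparison OneFS performs is internal:

```python
def chars_changed(old: str, new: str) -> int:
    """Count positions where the character differs, plus any length delta."""
    differing = sum(1 for o, n in zip(old, new) if o != n)
    return differing + abs(len(old) - len(new))

def meets_policy(old: str, new: str, min_length: int = 15,
                 min_chars: int = 8, min_percent: int = 50) -> bool:
    """Check a candidate password change against the configured policy."""
    changed = chars_changed(old, new)
    percent = 100 * changed / max(len(new), 1)
    return (len(new) >= min_length
            and changed >= min_chars
            and percent >= min_percent)
```

With the policy configured above, ‘0123456zyxwvuts’ passes all three checks, while ‘012345678zyxw’ fails on length.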

The following command can also be used to change the password for a user. For example, to update user2’s password:

# isi auth users change-password user2
Current password (hit enter if none):
New password:
Confirm new password:

If a non-compliant password is entered, the following error is returned:

Password change failed: The specified password does not meet the configured password complexity or history requirements

When employed, OneFS hardening automatically enforces security-based configurations. The hardening engine is profile-based, and its STIG security profile is predicated on security mandates specified in the US Department of Defense (DoD) Security Requirements Guides (SRGs) and Security Technical Implementation Guides (STIGs).

On applying the STIG hardening security profile to a cluster (‘isi hardening apply --profile=STIG’), the password policy settings are automatically reconfigured to the following values:

Field Normal Value STIG Hardened
Lockout Duration Now Now
Lockout Threshold 0 3
Lockout Window Now 15m
Min Password Age Now 1D
Max Password Age 4W 8W4D
Min Password Length 0 15
Password Prompt Time 2W 2W
Password Complexity - lowercase, numeric, repeat, symbol, uppercase
Password History Length 0 5
Password Chars Changed 0 8
Password Percent Changed 0 50
Password Hash Type NTHash SHA512
Max Inactivity Days 0 35

For example:

# uname -or
Isilon OneFS 9.5.0.0

# isi hardening list
Name  Description                       Status
---------------------------------------------------
STIG  Enable all STIG security settings Applied
---------------------------------------------------
Total: 1

# isi auth local view system
                    Name: System
                  Status: active
          Authentication: Yes
   Create Home Directory: Yes
 Home Directory Template: /ifs/home/%U
        Lockout Duration: Now
       Lockout Threshold: 3
          Lockout Window: 15m
             Login Shell: /bin/zsh
            Machine Name:
        Min Password Age: 1D
        Max Password Age: 8W4D
     Min Password Length: 15
    Password Prompt Time: 2W
     Password Complexity: lowercase, numeric, repeat, symbol, uppercase
 Password History Length: 5
  Password Chars Changed: 8
Password Percent Changed: 50
      Password Hash Type: SHA512
     Max Inactivity Days: 35

Note that the ‘Password Hash Type’ is changed from the default ‘NTHash’ to the more secure ‘SHA512’ encoding, in addition to setting the various password criteria.
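The hardened lockout settings (threshold 3, window 15m) can be modeled with a simple failure tracker. This is an illustrative sketch, not OneFS code, and it assumes a 15-minute lockout duration for demonstration purposes:

```python
class LockoutTracker:
    """Lock an account after `threshold` failed logins inside a sliding
    `window` (seconds); illustrative model of the STIG lockout settings."""

    def __init__(self, threshold: int = 3, window: int = 900,
                 duration: int = 900):
        self.threshold = threshold
        self.window = window
        self.duration = duration
        self.failures = {}      # user -> list of failure timestamps
        self.locked_until = {}  # user -> unlock timestamp

    def record_failure(self, user: str, now: float) -> None:
        # Keep only failures inside the sliding window, then add this one.
        recent = [t for t in self.failures.get(user, [])
                  if now - t < self.window]
        recent.append(now)
        self.failures[user] = recent
        if len(recent) >= self.threshold:
            self.locked_until[user] = now + self.duration

    def is_locked(self, user: str, now: float) -> bool:
        return self.locked_until.get(user, 0) > now
```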

The OneFS 9.5 WebUI also sees several additions and alterations to the Password Policy page. These include:

Operation OneFS 9.5 Change Details
Policy page Added New Password Policy page under Access > Membership and Roles
reset-password Added Generates a random password, meeting the current password policy, for the user to log in with

The most obvious change is the transfer of the policy configuration elements from the local provider page to a new dedicated Password Policy page.

Here’s the OneFS 9.4 ‘View a local provider’ page, under Access > Authentication providers > Local providers > System:

The above is replaced and augmented in the OneFS 9.5 WebUI with the following page, located under Access > Membership and Roles > Password Policy:

New password policy configuration options are included to require upper-case, lower-case, numeric, or special characters, and to limit the number of contiguous repeats of a character.

When it comes to changing a password, only a permitted user can change their own password. This can be performed from a couple of locations in the WebUI. First, the user options on the task bar at the top of each screen now provide a ‘Change password’ option:

A pop-up warning message will also be displayed by the WebUI, informing when password expiration is imminent. This warning provides a ‘Change Password’ link:

Clicking on the above link displays the following page:

A new password complexity tool-tip message is also displayed, guiding the user on safe password selection.

Note that re-login is required after a password change.

On the ‘Users’ page under Access > Membership and roles > Users, the ‘Action’ drop-down list on the ‘Users’ page now also contains a ‘Reset Password’ option:

The successful reset confirmation pop-up offers both a ‘show’ and ‘copy’ option, while informing the cluster administrator to share the new password with the user, and for them to change their password during their next login:

The ‘Create user’ page now provides an additional field that requires password confirmation. Additionally, the password complexity tool-tip message is also displayed:

The redesigned ‘Edit user details’ page no longer provides a field to edit the password directly:

Instead, the ‘Action’ drop-down list on the ‘Users’ page now contains a ‘Reset Password’ option.

OneFS Key Manager Rekey Support

The OneFS key manager is a backend service which orchestrates the storage of sensitive information for PowerScale clusters. In order to satisfy Dell’s Secure Infrastructure Ready requirements and other public and private sector security mandates, the manager provides the ability to replace, or rekey, cryptographic keys.

The quintessential consumer of OneFS key management is data-at-rest encryption (DARE). Protecting sensitive data stored on the cluster with cryptography ensures that it is guarded against theft in the event that drives or nodes are removed from a PowerScale cluster. DARE is a requirement of federal and industry regulations that mandate data be encrypted when stored. OneFS has provided DARE solutions for many years through self-encrypting drives (SEDs) and the OneFS key management system.

A 256-bit Master Key (MK) encrypts the Key Manager Database (KMDB) for SED and cluster domains. In OneFS 9.2 and later, the MK for SEDs can either be stored off-cluster on a KMIP server, or locally on a node (the legacy behavior).

However, there are a variety of other consumers of the OneFS key manager, in addition to DARE. These include services and protocols such as:

Service Description
CELOG Cluster event log.
CloudPools Cluster tier to cloud service.
Email Electronic mail.
FTP File transfer protocol.
IPMI Intelligent platform management interface for remote cluster console access.
JWT JSON web tokens.
NDMP Network data management protocol for cluster backups and DR.
Pstore Active directory and Kerberos password store.
S3 S3 object protocol.
SyncIQ Cluster replication service.
SmartSync OneFS push-and-pull cluster and cloud replication service.
SNMP Simple network management protocol.
SRS Legacy Dell Support remote cluster connectivity.
SSO Single sign-on.
SupportAssist Remote cluster connectivity to Dell Support.

OneFS 9.5 introduces a number of enhancements to the venerable key manager, including:

  • The ability to rekey keystores: a rekey operation generates a new MK and re-encrypts all stored entries with the new key.
  • New CLI commands and WebUI options to perform a rekey operation or schedule key rotation on a time interval.
  • New commands to monitor the progress and status of a rekey operation.

As such, OneFS 9.5 now provides the ability to rekey the MK, irrespective of where it is stored.

Note that, when upgrading from an earlier OneFS release, the new rekey functionality is only available once the OneFS 9.5 upgrade has been committed.

Under the hood, each provider store in the key manager consists of secure backend storage and an MK. Entries are kept in an SQLite database or key-value store. A provider datastore uses its MK to encrypt all the entries within the store.

During the rekey process, the old MK is only deleted after a successful re-encryption with the new MK. If, for any reason, the process fails, the old MK is available and remains as the current MK. The rekey daemon retries the rekey every 15 minutes if the process fails.

The OneFS rekey process is as follows:

  1. A new master key (MK) is generated, and internal configuration is updated.
  2. Any entries in the provider store are decrypted and encrypted with the new MK.
  3. If the prior steps are successful, the previous MK is deleted.

To support the rekey process, the MK in OneFS 9.5 now has an ID associated with it. All entries have a new field referencing the master key ID.

During the rekey operation, there are two MK values with different IDs, and each entry in the database records which key it is encrypted with.
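The per-entry key ID is what makes the delete-old-key-only-on-success flow possible. The following toy sketch mirrors that flow; it uses a trivial XOR ‘cipher’ purely for illustration, whereas the real keystore uses proper cryptography:

```python
import secrets

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric 'cipher' for illustration only; NOT real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def rekey(store: dict, keys: dict, current_id: str) -> str:
    """Generate a new MK, re-encrypt every entry under it, and only then
    delete the old MK (mirroring the rekey flow described above)."""
    new_id = secrets.token_hex(4)
    keys[new_id] = secrets.token_bytes(32)
    reencrypted = {}
    for name, (key_id, blob) in store.items():
        plaintext = xor_crypt(blob, keys[key_id])
        reencrypted[name] = (new_id, xor_crypt(plaintext, keys[new_id]))
    # Commit point: every entry re-encrypted successfully, so the old
    # MK can now be safely deleted.
    store.update(reencrypted)
    del keys[current_id]
    return new_id
```

If any entry failed to re-encrypt before the commit point, the old MK would remain in place as the current key, which is exactly the failure behavior the rekey daemon relies on when it retries.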

In OneFS 9.5, the rekey configuration and management is split between the cluster keys and the SED keys:

Rekey Component Detail
SED • The SED provider keystore is stored locally on each node.
• The SED provider domain already had existing CLI commands for handling KMIP settings in prior releases.
Cluster • Controls all cluster-wide keystore domains.
• Status shows information for all cluster provider domains.

SED keys rekey

The SED key manager rekey operation can be managed through a DARE cluster’s CLI or WebUI, and can either be automatically scheduled or run manually on-demand. The following CLI syntax can be used to manually initiate a rekey:

# isi keymanager sed rekey start

Alternatively, a rekey operation can be scheduled. For example, to configure a key rotation every two months:

# isi keymanager sed rekey modify --key-rotation 2M

The key manager status for SEDs can be viewed as follows:

# isi keymanager sed status
 Node Status  Location  Remote Key ID  Key Creation Date   Error Info(if any)
-----------------------------------------------------------------------------
1   LOCAL   Local                    1970-01-01T00:00:00
-----------------------------------------------------------------------------
Total: 1

Alternatively, from the WebUI, navigate to Access > Key Management > SED/Cluster Rekey, check the ‘Automatic rekey for SED keys’ checkbox, and configure the rekey frequency:

Note that for SED rekey operations, if a migration from local cluster key management to a KMIP server is in progress, the rekey process will begin once the migration has completed.

Cluster keys rekey

As mentioned previously, OneFS 9.5 also supports the rekey of cluster keystore domains. This cluster rekey operation is available through the CLI and the WebUI and may either be scheduled or run on-demand. The available cluster domains can be queried by running the following CLI syntax:

# isi keymanager cluster status
Domain     Status  Key Creation Date   Error Info(if any)
----------------------------------------------------------
CELOG      ACTIVE  2023-04-06T09:19:16
CERTSTORE  ACTIVE  2023-04-06T09:19:16
CLOUDPOOLS ACTIVE  2023-04-06T09:19:16
EMAIL      ACTIVE  2023-04-06T09:19:16
FTP        ACTIVE  2023-04-06T09:19:16
IPMI_MGMT  IN_PROGRESS  2023-04-06T09:19:16
JWT        ACTIVE  2023-04-06T09:19:16
LHOTSE     ACTIVE  2023-04-06T09:19:11
NDMP       ACTIVE  2023-04-06T09:19:16
NETWORK    ACTIVE  2023-04-06T09:19:16
PSTORE     ACTIVE  2023-04-06T09:19:16
RICE       ACTIVE  2023-04-06T09:19:16
S3         ACTIVE  2023-04-06T09:19:16
SIQ        ACTIVE  2023-04-06T09:19:16
SNMP       ACTIVE  2023-04-06T09:19:16
SRS        ACTIVE  2023-04-06T09:19:16
SSO        ACTIVE  2023-04-06T09:19:16
----------------------------------------------------------
Total: 17

The rekey process generates a new key and re-encrypts the entries for the domain. The old key is then deleted.

Performance-wise, the rekey process does consume cluster resources (CPU and disk) during the re-encryption phase, which is fairly write-intensive. As such, a good practice is to perform rekey operations outside of core business hours, or during scheduled cluster maintenance windows.

During the rekey process, the old MK is only deleted once a successful re-encryption with the new MK has been confirmed. In the event of a rekey process failure, the old MK remains available as the current MK.

A rekey may be requested immediately or may be scheduled with a cadence. The rekey operation is available through the CLI and the WebUI. In the WebUI, navigate to Access > Key Management > SED/Cluster Rekey.

To start a rekey of the cluster domains immediately, from the CLI run the following syntax:

# isi keymanager cluster rekey start
Are you sure you want to rekey the master passphrase? (yes/[no]):yes

Alternatively, from the WebUI, navigate to Access > Key Management > SED/Cluster Rekey and click the ‘Rekey Now’ button next to ‘Cluster keys’:

A scheduled rekey of the cluster keys (excluding the SED keys) can be configured from the CLI with the following syntax:

# isi keymanager cluster rekey modify --key-rotation [YMWDhms]

Specify the key rotation frequency as an integer plus a unit suffix, using Y for years, M for months, W for weeks, D for days, h for hours, m for minutes, and s for seconds. For example, the following commands schedule the cluster rekey operation to execute every six weeks:

# isi keymanager cluster rekey view
Rekey Time: 1970-01-01T00:00:00
Key Rotation: Never
# isi keymanager cluster rekey modify --key-rotation 6W
# isi keymanager cluster rekey view
Rekey Time: 2023-04-28T18:38:45
Key Rotation: 6W
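A parser for this duration grammar can be sketched as follows. Note the assumptions: 30-day months and 365-day years are used purely for illustration, and the exact calendar arithmetic OneFS applies may differ:

```python
import re
from datetime import timedelta

# Seconds per unit; M(onths) and Y(ears) are approximated.
UNITS = {"Y": 365 * 86400, "M": 30 * 86400, "W": 7 * 86400,
         "D": 86400, "h": 3600, "m": 60, "s": 1}

def parse_rotation(spec: str) -> timedelta:
    """Parse a '[YMWDhms]' duration such as '6W' or '8W4D' into a timedelta."""
    if spec == "Never":
        return timedelta(0)
    tokens = re.findall(r"(\d+)([YMWDhms])", spec)
    # Reject inputs with stray characters the pattern did not consume.
    if not tokens or "".join(num + unit for num, unit in tokens) != spec:
        raise ValueError(f"malformed duration: {spec!r}")
    return timedelta(seconds=sum(int(num) * UNITS[unit]
                                 for num, unit in tokens))
```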

The rekey configuration can be easily reverted back to on-demand from a schedule as follows:

# isi keymanager cluster rekey modify --key-rotation Never
# isi keymanager cluster rekey view
Rekey Time: 2023-04-28T18:38:45
Key Rotation: Never

Alternatively, from the WebUI, under the SED/Cluster Rekey tab, select the Automatic rekey for Cluster keys checkbox and specify the rekey frequency. For example:

In the event of a rekey failure, a CELOG ‘KeyManagerRekeyFailed’ or ‘KeyManagerSedsRekeyFailed’ event is created. Since SED rekey is a node-local operation, the ‘KeyManagerSedsRekeyFailed’ event information also includes which node experienced the failure.

Additionally, current cluster rekey status can also be queried with the following CLI command:

# isi keymanager cluster status
Domain     Status  Key Creation Date   Error Info(if any)
----------------------------------------------------------
CELOG      ACTIVE  2023-04-06T09:19:16
CERTSTORE  ACTIVE  2023-04-06T09:19:16
CLOUDPOOLS ACTIVE  2023-04-06T09:19:16
EMAIL      ACTIVE  2023-04-06T09:19:16
FTP        ACTIVE  2023-04-06T09:19:16
IPMI_MGMT  ACTIVE  2023-04-06T09:19:16
JWT        ACTIVE  2023-04-06T09:19:16
LHOTSE     ACTIVE  2023-04-06T09:19:11
NDMP       ACTIVE  2023-04-06T09:19:16
NETWORK    ACTIVE  2023-04-06T09:19:16
PSTORE     ACTIVE  2023-04-06T09:19:16
RICE       ACTIVE  2023-04-06T09:19:16
S3         ACTIVE  2023-04-06T09:19:16
SIQ        ACTIVE  2023-04-06T09:19:16
SNMP       ACTIVE  2023-04-06T09:19:16
SRS        ACTIVE  2023-04-06T09:19:16
SSO        ACTIVE  2023-04-06T09:19:16
----------------------------------------------------------
Total: 17

Or, for SEDs rekey status:

# isi keymanager sed status
 Node Status  Location  Remote Key ID  Key Creation Date   Error Info(if any)
-----------------------------------------------------------------------------
1   LOCAL   Local                    1970-01-01T00:00:00
2   LOCAL   Local                    1970-01-01T00:00:00
3   LOCAL   Local                    1970-01-01T00:00:00
4   LOCAL   Local                    1970-01-01T00:00:00
-----------------------------------------------------------------------------
Total: 4

The rekey process also outputs to the /var/log/isi_km_d.log file, which is a useful source for additional troubleshooting.

If an error occurs during rekey, the previous MK is not deleted, so entries in the provider store can still be created and read as normal. The key manager daemon re-attempts the rekey operation in the background every fifteen minutes until it succeeds.

OneFS Restricted Shell – Log Viewing and Recovery

Complementary to the restricted shell itself, which was covered in the previous article in this series, OneFS 9.5 also sees the addition of a new log viewer, plus a recovery shell option.

The new isi_log_access CLI utility enables a secure shell user to read, page, and query the logfiles in the /var/log directory. The ability to run this tool is governed by the user’s role being granted the ‘ISI_PRIV_SYS_SUPPORT’ role-based access control (RBAC) privilege.

OneFS RBAC is used to explicitly limit who has access to the range of cluster configurations and operations. This granular control allows administrative roles to be crafted which can create and manage the various OneFS core components and data services, isolating each to specific security roles or to admin only, etc.

In this case, a cluster security administrator selects the desired access zone, creates a zone-aware role within it, assigns the ‘ISI_PRIV_SYS_SUPPORT’ privileges, for isi_log_access use, and then assigns users to the role.

Note that the built-in OneFS ‘AuditAdmin’ RBAC role does not contain the ‘ISI_PRIV_SYS_SUPPORT’ privilege by default. Also, the built-in RBAC roles cannot be reconfigured:

# isi auth roles modify AuditAdmin --add-priv=ISI_PRIV_SYS_SUPPORT

The privileges of built-in role AuditAdmin cannot be modified

Therefore, the ‘ISI_PRIV_SYS_SUPPORT’ privilege needs to be added to a custom role.

For example, the following CLI syntax will add the user ‘usr_admin_restricted’ to the ‘rl_ssh’ role, and add the privilege ‘ISI_PRIV_SYS_SUPPORT’ to the ‘rl_ssh’ role:

# isi auth roles modify rl_ssh --add-user=usr_admin_restricted

# isi auth roles modify rl_ssh --add-priv=ISI_PRIV_SYS_SUPPORT

# isi auth roles view rl_ssh
       Name: rl_ssh
Description: -
    Members: u_ssh_restricted
             u_admin_restricted
 Privileges
             ID: ISI_PRIV_LOGIN_SSH
     Permission: r

             ID: ISI_PRIV_SYS_SUPPORT
     Permission: r

The ‘usr_admin_restricted’ user can also be added to the ‘AuditAdmin’ role, if desired:

# isi auth roles modify AuditAdmin --add-user=usr_admin_restricted

# isi auth roles view AuditAdmin | grep -i member
    Members: usr_admin_restricted
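In miniature, the role-to-privilege resolution illustrated above behaves like the following sketch. The role and user names are taken from the examples; the privilege set under ‘AuditAdmin’ is a placeholder subset, and none of this is the OneFS implementation:

```python
# user -> roles -> privileges, resolved as a union across role memberships.
ROLES = {
    "rl_ssh": {
        "members": {"usr_admin_restricted"},
        "privs": {"ISI_PRIV_LOGIN_SSH", "ISI_PRIV_SYS_SUPPORT"},
    },
    "AuditAdmin": {
        "members": {"usr_admin_restricted"},
        "privs": {"ISI_PRIV_LOGIN_PAPI"},  # placeholder subset
    },
}

def has_priv(user: str, priv: str) -> bool:
    """True if any role containing `user` grants `priv`."""
    return any(user in role["members"] and priv in role["privs"]
               for role in ROLES.values())
```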

The isi_log_access tool itself supports the following command options and arguments:

Option Description
--grep Match a pattern against the file and display matches on stdout.
--help Display the command description and usage message.
--list List all the files in the /var/log tree.
--less Display the file on stdout with a pager in secure_mode.
--more Display the file on stdout with a pager in secure_mode.
--view Display the file on stdout.
--watch Display the end of the file and new content as it is written.
--zgrep Match a pattern against the unzipped file contents and display matches on stdout.
--zview Display an unzipped version of the file on stdout.
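A tool like isi_log_access must confine every requested file to the /var/log tree. The following is a minimal sketch of that general containment technique, an assumption about the approach rather than the utility’s actual code:

```python
import os

LOG_ROOT = "/var/log"

def safe_log_path(name: str) -> str:
    """Resolve a user-supplied log name, refusing any path that would
    escape LOG_ROOT (e.g. via '..' components or absolute paths)."""
    root = os.path.realpath(LOG_ROOT)
    candidate = os.path.realpath(os.path.join(LOG_ROOT, name))
    if not candidate.startswith(root + os.sep):
        raise PermissionError(f"{name!r} is outside {LOG_ROOT}")
    return candidate
```

Resolving with realpath before the prefix check is the key design point: it defeats both ‘..’ traversal and symlink escapes.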

 

Here the ‘u_admin_restricted’ user logs in to the secure shell and runs the isi_log_access utility to list the /var/log/messages logfile:

# ssh u_admin_restricted@10.246.178.121
(u_admin_restricted@10.246.178.121) Password:
Last login: Wed May  3 18:02:18 2023 from 10.246.159.107
Copyright (c) 2001-2023 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

PowerScale OneFS 9.5.0.0

Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
# isi_log_access --list
LAST MODIFICATION TIME         SIZE       FILE
Mon Apr 10 14:22:18 2023       56         alert.log
Fri May  5 00:30:00 2023       62         all.log
Fri May  5 00:30:00 2023       99         all.log.0.gz
Fri May  5 00:00:00 2023       106        all.log.1.gz
Thu May  4 00:30:00 2023       100        all.log.2.gz
Thu May  4 00:00:00 2023       107        all.log.3.gz
Wed May  3 00:30:00 2023       99         all.log.4.gz
Wed May  3 00:00:00 2023       107        all.log.5.gz
Tue May  2 00:30:00 2023       100        all.log.6.gz
Mon Apr 10 14:22:18 2023       56         audit_config.log
Mon Apr 10 14:22:18 2023       56         audit_protocol.log
Fri May  5 17:23:53 2023       82064      auth.log
Sat Apr 22 12:09:31 2023       10750      auth.log.0.gz
Mon Apr 10 15:31:36 2023       0          bam.log
Mon Apr 10 14:22:18 2023       56         boxend.log
Mon Apr 10 14:22:18 2023       56         bwt.log
Mon Apr 10 14:22:18 2023       56         cloud_interface.log
Mon Apr 10 14:22:18 2023       56         console.log
Fri May  5 18:20:32 2023       23769      cron
Fri May  5 15:30:00 2023       8803       cron.0.gz
Fri May  5 03:10:00 2023       9013       cron.1.gz
Thu May  4 15:00:00 2023       8847       cron.2.gz
Fri May  5 03:01:02 2023       3012       daily.log
Fri May  5 00:30:00 2023       101        daily.log.0.gz
Fri May  5 00:00:00 2023       1201       daily.log.1.gz
Thu May  4 00:30:00 2023       102        daily.log.2.gz
Thu May  4 00:00:00 2023       1637       daily.log.3.gz
Wed May  3 00:30:00 2023       101        daily.log.4.gz
Wed May  3 00:00:00 2023       1200       daily.log.5.gz
Tue May  2 00:30:00 2023       102        daily.log.6.gz
Mon Apr 10 14:22:18 2023       56         debug.log
Tue Apr 11 12:29:37 2023       3694       diskpools.log
Fri May  5 03:01:00 2023       244566     dmesg.today
Thu May  4 03:01:00 2023       244662     dmesg.yesterday
Tue Apr 11 11:49:32 2023       788        drive_purposing.log
Mon Apr 10 14:22:18 2023       56         ethmixer.log
Mon Apr 10 14:22:18 2023       56         gssd.log
Fri May  5 00:00:35 2023       41641      hardening.log
Mon Apr 10 15:31:05 2023       17996      hardening_engine.log
Mon Apr 10 14:22:18 2023       56         hdfs.log
Fri May  5 15:51:28 2023       31359      hw_ata.log
Fri May  5 15:51:28 2023       56527      hw_da.log
Mon Apr 10 14:22:18 2023       56         hw_nvd.log
Mon Apr 10 14:22:18 2023       56         idi.log

In addition to paging through an entire logfile with the ‘--more’ and ‘--less’ flags, the isi_log_access utility can also be used to watch (i.e. ‘tail’) a log. For example, the /var/log/messages logfile:

% isi_log_access --watch messages
2023-05-03T18:00:12.233916-04:00 <1.5> h7001-2(id2) limited[68236]: Called ['/usr/bin/isi_log_access', 'messages'], which returned 2.
2023-05-03T18:00:23.759198-04:00 <1.5> h7001-2(id2) limited[68236]: Calling ['/usr/bin/isi_log_access'].
2023-05-03T18:00:23.797928-04:00 <1.5> h7001-2(id2) limited[68236]: Called ['/usr/bin/isi_log_access'], which returned 0.
2023-05-03T18:00:36.077093-04:00 <1.5> h7001-2(id2) limited[68236]: Calling ['/usr/bin/isi_log_access', '--help'].
2023-05-03T18:00:36.119688-04:00 <1.5> h7001-2(id2) limited[68236]: Called ['/usr/bin/isi_log_access', '--help'], which returned 0.
2023-05-03T18:02:14.545070-04:00 <1.5> h7001-2(id2) limited[68236]: Command not in list of allowed commands.
2023-05-03T18:02:50.384665-04:00 <1.5> h7001-2(id2) limited[68594]: Calling ['/usr/bin/isi_log_access', '--list'].
2023-05-03T18:02:50.440518-04:00 <1.5> h7001-2(id2) limited[68594]: Called ['/usr/bin/isi_log_access', '--list'], which returned 0.
2023-05-03T18:03:13.362411-04:00 <1.5> h7001-2(id2) limited[68594]: Command not in list of allowed commands.
2023-05-03T18:03:52.107538-04:00 <1.5> h7001-2(id2) limited[68738]: Calling ['/usr/bin/isi_log_access', '--watch', 'messages'].

As expected, the last few lines of the messages logfile are displayed. These log entries include the command audit entries for the ‘u_admin_restricted’ user running the ‘isi_log_access’ utility with the ‘--help’, ‘--list’, and ‘--watch’ arguments.

The ‘isi_log_access’ utility also allows zipped logfiles to be read (--zview) or searched (--zgrep) without uncompressing them. For example, to find all the ‘usr_admin’ entries in the zipped vmlog.0.gz file:

# isi_log_access --zgrep usr_admin vmlog.0.gz
   0.0 64468 usr_admin_restricted /usr/local/bin/zsh
   0.0 64346 usr_admin_restricted python /usr/local/restricted_shell/bin/restricted_shell.py (python3.8)
   0.0 64468 usr_admin_restricted /usr/local/bin/zsh
   0.0 64346 usr_admin_restricted python /usr/local/restricted_shell/bin/restricted_shell.py (python3.8)
   0.0 64342 usr_admin_restricted sshd: usr_admin_restricted@pts/3 (sshd)
   0.0 64331 root               sshd: usr_admin_restricted [priv] (sshd)
   0.0 64468 usr_admin_restricted /usr/local/bin/zsh
   0.0 64346 usr_admin_restricted python /usr/local/restricted_shell/bin/restricted_shell.py (python3.8)
   0.0 64342 usr_admin_restricted sshd: usr_admin_restricted@pts/3 (sshd)
   0.0 64331 root               sshd: usr_admin_restricted [priv] (sshd)
   0.0 64468 usr_admin_restricted /usr/local/bin/zsh
   0.0 64346 usr_admin_restricted python /usr/local/restricted_shell/bin/restricted_shell.py (python3.8)
   0.0 64342 usr_admin_restricted sshd: usr_admin_restricted@pts/3 (sshd)
   0.0 64331 root               sshd: usr_admin_restricted [priv] (sshd)
   0.0 64468 usr_admin_restricted /usr/local/bin/zsh
   0.0 64346 usr_admin_restricted python /usr/local/restricted_shell/bin/restricted_shell.py (python3.8)
   0.0 64342 usr_admin_restricted sshd: u_admin_restricted@pts/3 (sshd)
   0.0 64331 root               sshd: usr_admin_restricted [priv] (sshd)
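Conceptually, ‘--zgrep’ streams the compressed file through a decompressor and pattern-matches line by line, never writing an uncompressed copy to disk. A minimal equivalent sketch (illustrative, not the utility’s actual code):

```python
import gzip
import re

def zgrep(pattern: str, path: str) -> list:
    """Return the lines of a gzipped file matching `pattern`,
    decompressing in-stream rather than on disk."""
    rx = re.compile(pattern)
    with gzip.open(path, "rt", errors="replace") as fh:
        return [line.rstrip("\n") for line in fh if rx.search(line)]
```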

OneFS Recovery shell

The purpose of the recovery shell is to allow a restricted shell user to access a regular UNIX shell, and its associated command set, if needed. As such, the recovery shell is primarily intended for reactive cluster recovery operations and other unforeseen support issues. Note that the ‘isi_recovery_shell’ CLI command can only be run, and the recovery shell entered, from within the restricted shell.

The ‘ISI_PRIV_RECOVERY_SHELL’ privilege is required in order for a user to elevate their shell from restricted to recovery. The following syntax can be used to add this privilege to a role, in this case the ‘rl_ssh’ role:

% isi auth roles modify rl_ssh --add-priv=ISI_PRIV_RECOVERY_SHELL

% isi auth roles view rl_ssh
       Name: rl_ssh
Description: -
    Members: usr_ssh_restricted
             usr_admin_restricted
 Privileges
             ID: ISI_PRIV_LOGIN_SSH
     Permission: r

             ID: ISI_PRIV_SYS_SUPPORT
     Permission: r

             ID: ISI_PRIV_RECOVERY_SHELL
     Permission: r

However, note that the ‘--restricted-shell-enabled’ security parameter must be set to ‘true’ before a user with the ISI_PRIV_RECOVERY_SHELL privilege can actually enter the recovery shell. For example:

% isi security settings view | grep -i restr

Restricted shell Enabled: No

% isi security settings modify --restricted-shell-enabled=true

% isi security settings view | grep -i restr

Restricted shell Enabled: Yes

The restricted shell user will need to enter the cluster’s root password in order to successfully enter the recovery shell. For example:

% isi_recovery_shell -h
Description:
        This command is used to enter the Recovery shell i.e. normal zsh shell from the PowerScale Restricted shell. This command is supported only in the PowerScale Restricted shell.

Required Privilege:
        ISI_PRIV_RECOVERY_SHELL

Usage:
        isi_recovery_shell
           [{--help | -h}]

If the root password is entered incorrectly, the following error is displayed:

% isi_recovery_shell
Enter 'root' credentials to enter the Recovery shell
Password:
Invalid credentials.
isi_recovery_shell: PAM Auth Failed

A successful recovery shell launch is as follows:

$ ssh u_admin_restricted@10.246.178.121
(u_admin_restricted@10.246.178.121) Password:
Last login: Thu May  4 17:26:10 2023 from 10.246.159.107
Copyright (c) 2001-2023 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

PowerScale OneFS 9.5.0.0

Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout

% isi_recovery_shell
Enter 'root' credentials to enter the Recovery shell
Password:
%

At this point, regular shell/UNIX commands (including the ‘vi’ editor) are available again:

% whoami
u_admin_restricted

% pwd
/ifs/home/u_admin_restricted
% top | head -n 10
last pid: 65044;  load averages:  0.12,  0.24,  0.29  up 24+04:17:23    18:38:39
118 processes: 1 running, 117 sleeping
CPU:  0.1% user,  0.0% nice,  0.9% system,  0.1% interrupt, 98.9% idle
Mem: 233M Active, 19G Inact, 2152K Laundry, 137G Wired, 60G Buf, 13G Free
Swap:
  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 3955 root          1 -22  r30    50M    14M select  24 142:28   0.54% isi_drive_d
 5715 root         20  20    0   231M    69M kqread   5  55:53   0.15% isi_stats_d
 3864 root         14  20    0    81M    21M kqread  16 133:02   0.10% isi_mcp

The specifics of the recovery shell (ZSH) for the u_admin_restricted user are reported as follows:

% printenv
_=/usr/bin/printenv
PAGER=less
SAVEHIST=2000
HISTFILE=/ifs/home/u_admin_restricted/.zsh_history
HISTSIZE=1000
OLDPWD=/ifs/home/u_admin_restricted
PWD=/ifs/home/u_admin_restricted
SHLVL=1
LOGNAME=u_admin_restricted
HOME=/ifs/home/u_admin_restricted
RECOVERY_SHELL=TRUE
TERM=xterm
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/bin

Shell logic conditions and scripts can be run. For example:

% while true; do uptime; sleep 5; done
 5:47PM  up 24 days,  3:26, 5 users, load averages: 0.44, 0.38, 0.34
 5:47PM  up 24 days,  3:26, 5 users, load averages: 0.41, 0.38, 0.34

ISI commands can be run and cluster management tasks performed.

% isi hardening list
Name  Description                       Status
---------------------------------------------------
STIG  Enable all STIG security settings Not Applied
---------------------------------------------------
Total: 1

For example, creating and deleting a snapshot:

% isi snap snap list
ID Name Path
------------
------------
Total: 0


% isi snap snap create /ifs/data

% isi snap snap list
ID   Name  Path
--------------------
2    s2    /ifs/data
--------------------
Total: 1

% isi snap snap delete 2
Are you sure? (yes/[no]): yes

Sysctls can be read and managed:

% sysctl efs.gmp.group

efs.gmp.group: <10539754> (4) :{ 1:0-14, 2:0-12,14,17, 3-4:0-14, smb: 1-4, nfs: 1-4, all_enabled_protocols: 1-4, isi_cbind_d: 1-4, lsass: 1-4, external_connectivity: 1-4 }

The restricted shell can be disabled:

% isi security settings modify --restricted-shell-enabled=false

% isi security settings view | grep -i restr
Restricted shell Enabled: No

However, the ‘isi underscore’ (isi_*) commands, such as isi_for_array, are still not permitted to run:

% /usr/bin/isi_for_array -s uptime
zsh: permission denied: /usr/bin/isi_for_array

% isi_gather_info
zsh: permission denied: isi_gather_info

% isi_cstats
isi_cstats: Syscall ifs_prefetch_lin() failed: Operation not permitted

When finished, the user can either end the session entirely with the ‘logout’ command, or quit the recovery shell via ‘exit’ and return to the restricted shell:

% exit

Allowed commands are

        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
%

OneFS Restricted Shell

In contrast to many other storage appliances, PowerScale has always enjoyed an extensive, rich, and capable command line, drawing on its FreeBSD heritage. As such, it incorporates a choice of full UNIX shells (e.g. ZSH), the ability to script in a variety of languages (Perl, Python, etc.), full data access, a variety of system and network management and monitoring tools, plus the comprehensive OneFS ‘isi’ command set. However, what is a bonus for usability can, on the flip side, also present a risk from a security point of view.

With this in mind, among the bevy of security features that debut in the OneFS 9.5 release is the addition of a restricted shell for the CLI. This shell heavily curtails access to cluster command line utilities, eliminating areas where commands and scripts could be run, and files modified, maliciously and unaudited.

The new restricted shell can help both public and private sector organizations to meet a variety of regulatory compliance and audit requirements, in addition to reducing the security threat surface when administering OneFS.

Written in Python, the restricted shell constrains users to a tight subset of the commands available in the regular OneFS command line shells, plus a couple of additional utilities. These include:

CLI Utility Description
ISI commands The ‘isi’ or ‘isi space’ commands. These include commands such as ‘isi status’, etc. The full set of isi commands can be listed via ‘isi --help’.
Shell commands The supported shell commands include ‘clear’, ‘exit’, ‘logout’, and ‘CTRL+D’.
Log access The ‘isi_log_access’ tool can be used if the user possesses the ISI_PRIV_SYS_SUPPORT privilege.
Recovery shell The recovery shell ‘isi_recovery_shell’ can be used if the user possesses the ISI_PRIV_RECOVERY_SHELL privilege and the security setting ‘Restricted shell Enabled’ is configured to ‘true’.
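At its core, a restricted shell of this kind is an allow-list dispatcher wrapped around a read-eval loop: any input whose command is not on the list is rejected and the menu of allowed commands re-printed. A simplified, illustrative Python sketch (not the actual restricted_shell.py):

```python
# Commands permitted by the restricted shell, per the table above.
ALLOWED = {"clear", "isi", "isi_recovery_shell", "isi_log_access", "exit", "logout"}

def is_allowed(cmdline):
    """Permit a command line only if its first token is on the allow-list."""
    tokens = cmdline.strip().split()
    return bool(tokens) and tokens[0] in ALLOWED
```

Note that matching on the first token alone is why ‘isi status’ passes while ‘/usr/bin/isi_for_array’ or ‘pwd’ are rejected outright.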

In order for a OneFS CLI command to be audited, its handler needs to call through the platform API (pAPI). This occurs with the regular ‘isi’ commands, but not necessarily with the ‘isi underscore’ commands, such as ‘isi_for_array’, etc. While some of these ‘isi_*’ commands write to log files, there is no uniform or consistent auditing or logging.

On the data access side, /ifs file system auditing works through the various OneFS protocol heads (NFS, SMB, S3, etc). So if the CLI is used with an unrestricted shell to directly access and modify /ifs, any access and changes are unrecorded and unaudited.

In OneFS 9.5, the new restricted shell is included in the permitted shells list (/etc/shells):

# grep -i restr /etc/shells

/usr/local/restricted_shell/bin/restricted_shell.py

It can be easily set for a user via the CLI. For example, to configure the ‘admin’ account to use the restricted shell, instead of its default of ZSH:

# isi auth users view admin | grep -i shell

                   Shell: /usr/local/bin/zsh

# isi auth users modify admin --shell=/usr/local/restricted_shell/bin/restricted_shell.py

# isi auth users view admin | grep -i shell

                   Shell: /usr/local/restricted_shell/bin/restricted_shell.py

OneFS can also be configured to limit non-root users to just the restricted shell, too:

# isi security settings view | grep -i restr

  Restricted shell Enabled: No

# isi security settings modify --restricted-shell-enabled=true

# isi security settings view | grep -i restr

  Restricted shell Enabled: Yes

The underlying configuration changes to support this include allowing only non-root users with approved shells in /etc/shells to log in via the console or SSH, and listing just /usr/local/restricted_shell/bin/restricted_shell.py in the /etc/shells config file.
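The /etc/shells gating described above amounts to a simple membership check on the user's login shell, with root exempt. A sketch of that logic, assuming the standard one-path-per-line format with ‘#’ comments:

```python
def approved_shells(etc_shells_text):
    """Parse /etc/shells-style content: one shell path per line, '#' comments ignored."""
    return {
        line.strip()
        for line in etc_shells_text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }

def may_login(shell, etc_shells_text, is_root=False):
    """Non-root users may only log in with a shell listed in /etc/shells."""
    return is_root or shell in approved_shells(etc_shells_text)
```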

Note that no users’ shells are changed when the configuration commands above are enacted. If users are intended to have shell access, their login shell will need to be changed before they can log in. Users will also require the ‘ISI_PRIV_LOGIN_SSH’ and/or ‘ISI_PRIV_LOGIN_CONSOLE’ privileges in order to log in via SSH or the console, respectively.

While the WebUI in OneFS 9.5 does not provide a restricted shell configuration page, the restricted shell can be enabled from the platform API, in addition to the CLI. The pAPI security settings now include a ‘restricted_shell_enabled’ key, which can be enabled by setting its value to ‘1’ from the default of ‘0’.
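Programmatically, that amounts to a PUT of the security-settings resource with the key set to 1. The sketch below only constructs the request tuple rather than sending it, and the endpoint path shown is an assumption for illustration; consult the cluster's pAPI documentation for the exact versioned path:

```python
import json

def build_security_settings_request(restricted_shell_enabled):
    """Construct (method, path, body) for a pAPI security-settings update.

    The path here is a hypothetical placeholder, not a documented endpoint.
    """
    path = "/platform/security/settings"
    body = json.dumps({"restricted_shell_enabled": 1 if restricted_shell_enabled else 0})
    return ("PUT", path, body)
```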

Be aware that, on configuring a OneFS 9.5 cluster to run in hardened mode with the STIG profile (i.e. ‘isi hardening enable STIG’), the ‘restricted-shell-enabled’ security setting is automatically set to ‘true’. This means that only root, and users with the ‘ISI_PRIV_LOGIN_SSH’ and/or ‘ISI_PRIV_LOGIN_CONSOLE’ privileges and the restricted shell as their shell, will be permitted to log in to the cluster. We will focus on OneFS security hardening in a future article.

So let’s take a look at some examples of the restricted shell’s configuration and operation. But note that a cluster’s default ‘admin’ user uses role-based access control (RBAC), whereas ‘root’ does not. As such, the ‘root’ account should be used as infrequently as possible, and ideally be considered solely the account of last resort.

First, we log in as the ‘admin’ user and modify the ‘file’ and ‘local’ auth provider password hash types to the more secure ‘SHA512’ from their default value of ‘NTHash’:

# ssh 10.244.34.34 -l admin

# isi auth file view System | grep -i hash

     Password Hash Type: NTHash

# isi auth local view System | grep -i hash

      Password Hash Type: NTHash

# isi auth file modify System --password-hash-type=SHA512

# isi auth local modify System --password-hash-type=SHA512


Next, the ‘admin’ and ‘root’ passwords are changed in order to generate new passwords using the SHA512 hash:

# isi auth users change-password root

# isi auth users change-password admin

An ‘rl_ssh’ role is created and the SSH access privilege is added to it:

# isi auth roles create rl_ssh

# isi auth roles modify rl_ssh --add-priv=ISI_PRIV_LOGIN_SSH

Then a regular user (usr_ssh_restricted) and an admin user (usr_admin_restricted) are created with restricted shell privileges:

# isi auth users create usr_ssh_restricted --shell=/usr/local/restricted_shell/bin/restricted_shell.py --set-password

# isi auth users create usr_admin_restricted --shell=/usr/local/restricted_shell/bin/restricted_shell.py --set-password

We then assign the desired roles to the new users. For the restricted SSH user, we add to our newly created ‘rl_ssh’ role:

# isi auth roles modify rl_ssh --add-user=usr_ssh_restricted

The admin user is then added to the security admin and the system admin roles:

# isi auth roles modify SecurityAdmin --add-user=usr_admin_restricted

# isi auth roles modify SystemAdmin --add-user=usr_admin_restricted

Next, we connect to the cluster via SSH and authenticate as the ‘usr_ssh_restricted’ user:

$ ssh usr_ssh_restricted@10.246.178.121
(usr_ssh_restricted@10.246.178.121) Password:
Copyright (c) 2001-2023 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
PowerScale OneFS 9.5.0.0

Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
%

This account has no cluster RBAC privileges beyond SSH access, so it cannot run the various ‘isi’ commands. For example, attempting to run ‘isi status’ returns no data, instead warning of the need for event, job engine, and statistics privileges:

% isi status
Cluster Name: h7001
 
*** Capacity and health information require ***
***   the privilege: ISI_PRIV_STATISTICS.   ***

Critical Events:
*** Requires the privilege: ISI_PRIV_EVENT. ***

Cluster Job Status:
 
*** Requires the privilege: ISI_PRIV_JOB_ENGINE. ***

Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
%

Similarly, standard UNIX shell commands, such as ‘pwd’ and ‘whoami’, are also prohibited:

% pwd
Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
% whoami
Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout

Indeed, without additional OneFS RBAC privileges, the only commands the ‘usr_ssh_restricted’ user can actually run in the restricted shell are ‘clear’, ‘exit’, and ‘logout’.

Note that the restricted shell automatically logs out a session after a short period of inactivity.

Next, we log in with the ‘usr_admin_restricted’ account:

$ ssh usr_admin_restricted@10.246.178.121
(usr_admin_restricted@10.246.178.121) Password:
Copyright (c) 2001-2023 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

PowerScale OneFS 9.5.0.0

Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
%

The ‘isi’ commands now work, since the user has the ‘SecurityAdmin’ and ‘SystemAdmin’ roles and privileges:

% isi auth roles list
Name
---------------
AuditAdmin
BackupAdmin
BasicUserRole
SecurityAdmin
StatisticsAdmin
SystemAdmin
VMwareAdmin
rl_console
rl_ssh
---------------
Total: 9
Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout

% isi auth users view usr_admin_restricted
                    Name: usr_admin_restricted
                      DN: CN=usr_admin_restricted,CN=Users,DC=H7001
              DNS Domain: -
                  Domain: H7001
                Provider: lsa-local-provider:System
        Sam Account Name: usr_admin_restricted
                     UID: 2003
                     SID: S-1-5-21-3745626141-289409179-1286507423-1003
                 Enabled: Yes
                 Expired: No
                  Expiry: -
                  Locked: No
                   Email: -
                   GECOS: -
           Generated GID: No
           Generated UID: No
           Generated UPN: Yes
           Primary Group
                          ID: GID:1800
                        Name: Isilon Users
          Home Directory: /ifs/home/usr_admin_restricted
        Max Password Age: 4W
        Password Expired: No
         Password Expiry: 2023-05-30T17:16:53
       Password Last Set: 2023-05-02T17:16:53
        Password Expires: Yes
              Last Logon: -
                   Shell: /usr/local/restricted_shell/bin/restricted_shell.py
                     UPN: usr_admin_restricted@H7001
User Can Change Password: Yes
   Disable When Inactive: No
Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout
%

However, the OneFS ‘isi underscore’ commands are not supported under the restricted shell. For example, attempting to use the ‘isi_for_array’ command:

% isi_for_array -s uname -a
Allowed commands are
        clear ...
        isi ...
        isi_recovery_shell ...
        isi_log_access ...
        exit
        logout

Note that, by default, the ‘SecurityAdmin’ and ‘SystemAdmin’ roles do not grant the ‘usr_admin_restricted’ user the privileges needed to execute the new ‘isi_log_access’ and ‘isi_recovery_shell’ commands.

In the next article in this series, we’ll take a look at these associated ‘isi_log_access’ and ‘isi_recovery_shell’ utilities that are also introduced in OneFS 9.5.

PowerScale OneFS 9.6 and APEX File Storage for AWS

Dropping in time for its unveiling at Dell Technologies World 2023, the new PowerScale OneFS 9.6 release is a cloud-only version, supporting the freshly launched APEX File Storage for AWS solution.

OneFS 9.6 delivers Dell’s first software-defined unstructured data solution, simplifying the journey to the cloud with seamless data mobility, operational consistency between on-prem and cloud, and the file storage and data services PowerScale customers know and trust.

With the addition of Dell APEX File Storage for AWS, PowerScale clusters can now be deployed anywhere your data is.

OneFS 9.6 extends the PowerScale hybrid cloud to AWS, providing the same OneFS software platform on-prem and in the cloud, and customer-managed for full control.

PowerScale’s scale-out architecture can now be deployed on customer-managed AWS EC2 and EBS infrastructure, providing the scale and performance needed to run a variety of unstructured workflows in the public cloud. Once in the cloud, existing PowerScale investments can be further leveraged by accessing and orchestrating your data through the platform’s multi-protocol access and APIs.

This includes the common OneFS control plane (CLI, WebUI, and platform API), and the same enterprise features:

Feature Description
CloudPools Cloud tiering to a choice of providers
Data reduction Data compression and deduplication, reducing storage costs
ISV ecosystem 250+ ISVs for OneFS
Multi-protocol access Global permissions structure shared across users and protocols
SmartConnect Policy-based client failover load balancing
SmartQuotas Quota management and thin provisioning
SnapshotIQ Fast, efficient data backup and recovery
SyncIQ Asynchronous replication for DR

The challenges and complexity of data locality are reduced by OneFS SyncIQ and SmartSync native replication between on-prem, cloud-adjacent, and cloud-based clusters. APEX File Storage for AWS enables workloads in the cloud with a clustered architecture providing linear capacity and performance scaling up to six SSD nodes and 1PiB per namespace/cluster, and up to 10GB/s reads and 4GB/s writes per cluster. As such, it can be a solid fit for traditional file shares and home directories, vertical workloads like M&E, healthcare, life sciences, and finserv, and next-gen AI, ML, and analytics applications.

Hybrid cloud

APEX File Storage for AWS is ideal for moving IT workloads to the cloud to support archive, backup, file shares, home directories, etc.:

  • Use AWS for off-prem DR
  • File workflows can be migrated to AWS without requiring changes to storage architecture
  • Consistent user experience with on-prem PowerScale
  • Use OneFS features like SnapshotIQ and SyncIQ to natively replicate to the cloud
  • Use the same multi-protocol data services in AWS that you use on-prem

Cloud bursting

When workloads run short of computing on-prem resources, burst the extra demands to AWS cloud services:

  • Support compute-intensive workloads such as M&E, manufacturing, life sciences, analytics, and more
  • Use cloud for burst performance to power workload resource spikes
  • Native data replication services to move data to the cloud
  • Proven scale-out architecture provides leading file performance
  • Leverage AWS services to accelerate outcomes and control costs

Cluster licensing is capacity-based, sold in 1TiB increments, and flexible to cover HDD and SSD deployments. Support is included with the general license, as are all the supported OneFS data management and protection services. Subscription terms include 1-year or 3-year options, and existing AWS customers can apply their AWS infrastructure credits towards APEX File Storage. Plus, licensing is also available in a TLA.

So, in summary, the key features of this new OneFS cloud offering include:

  • Native replication
  • OneFS enterprise features
  • Customer-managed solution
  • Same user experience as on-prem
  • Scalability up to 1 PiB in a single namespace
  • Up to six SSD nodes
  • Leading file performance

We’ll take a look at the underlying technology behind this new APEX File Storage on AWS cloud-based PowerScale solution in more detail in a future article.

OneFS Firewall Management and Troubleshooting

In the final article in this series, we’ll focus on step five of the OneFS firewall provisioning process and turn our attention to some of the management and monitoring considerations and troubleshooting tools associated with the firewall.

Management and monitoring of the firewall in OneFS 9.5 can be performed via the CLI, platform API, or WebUI. Since data security threats come from inside an environment as well as outside, such as from a rogue IT employee, a good practice is to constrain the use of the all-powerful ‘root’, ‘administrator’, and ‘sudo’ accounts as much as possible. Instead of granting cluster admins full rights, a preferred approach is to use OneFS’ comprehensive authentication, authorization, and accounting framework.

OneFS role-based access control (RBAC) can be used to explicitly limit who has access to configure and monitor the firewall. A cluster security administrator selects the desired access zone, creates a zone-aware role within it, assigns privileges, and then assigns members. For example, from the WebUI under Access > Membership and roles > Roles:

When these members login to the cluster via a configuration interface (WebUI, Platform API, or CLI) they inherit their assigned privileges.

Accessing the firewall from the WebUI and CLI in OneFS 9.5 requires the new ISI_PRIV_FIREWALL administration privilege.

# isi auth privileges -v | grep -i -A 2 firewall

         ID: ISI_PRIV_FIREWALL

Description: Configure network firewall

       Name: Firewall

   Category: Configuration

 Permission: w

This privilege can be assigned one of four permission levels for a role:

Permission Indicator Description
- No permission.
R Read-only permission.
X Execute permission.
W Write permission.

By default, the built-in ‘SystemAdmin’ role is granted write privileges to administer the firewall, while the built-in ‘AuditAdmin’ role has read permission to view the firewall configuration and logs.

With OneFS RBAC, an enhanced security approach for a site could be to create two additional roles on a cluster, each with an increasing realm of trust. For example:

  1. An IT ops/helpdesk role with ‘read’ access to the firewall attributes would permit monitoring and troubleshooting the firewall, but no changes:
RBAC Role Firewall Privilege Permission
IT_Ops ISI_PRIV_FIREWALL Read

For example:

# isi auth roles create IT_Ops

# isi auth roles modify IT_Ops --add-priv-read ISI_PRIV_FIREWALL

# isi auth roles view IT_Ops | grep -A2 -i firewall

             ID: ISI_PRIV_FIREWALL

     Permission: r

2. A Firewall Admin role would provide full firewall configuration and management rights:

RBAC Role Firewall Privilege Permission
FirewallAdmin ISI_PRIV_FIREWALL Write

For example:

# isi auth roles create FirewallAdmin

# isi auth roles modify FirewallAdmin --add-priv-write ISI_PRIV_FIREWALL

# isi auth roles view FirewallAdmin | grep -A2 -i firewall

ID: ISI_PRIV_FIREWALL

Permission: w

Note that when configuring OneFS RBAC, remember to remove the ‘ISI_PRIV_AUTH’ and ‘ISI_PRIV_ROLE’ privileges from all but the most trusted administrators.

Additionally, enterprise security management tools such as CyberArk can also be incorporated to manage authentication and access control holistically across an environment. These can be configured to frequently change passwords on trusted accounts (e.g. every hour or so), require multi-level approvals prior to retrieving passwords, and track and audit password requests and trends.

OneFS Firewall Limits

When working with the OneFS Firewall, there are some upper bounds to the configurable attributes to keep in mind. These include:

Name Value Description
MAX_INTERFACES 500 Maximum number of L2 interfaces including Ethernet, VLAN, LAGG interfaces on a node.
MAX_SUBNETS 100 Maximum number of subnets within a OneFS cluster
MAX_POOLS 100 Maximum number of network pools within a OneFS cluster
DEFAULT_MAX_RULES 100 Default value of maximum rules within a firewall policy
MAX_RULES 200 Upper limit of maximum rules within a firewall policy
MAX_ACTIVE_RULES 5000 Upper limit of total active rules across the whole cluster
MAX_INACTIVE_POLICIES 200 Maximum number of policies which are not applied to any network subnet or pool. They will not be written into the ipfw table.
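These bounds lend themselves to a simple pre-flight check when generating firewall policies programmatically. The constants below mirror the table above; the checking functions themselves are an illustrative sketch, not OneFS code:

```python
# Limits mirror the OneFS firewall limits table above.
DEFAULT_MAX_RULES = 100
MAX_RULES = 200
MAX_ACTIVE_RULES = 5000

def policy_rule_count_ok(rule_count, max_rules=DEFAULT_MAX_RULES):
    """A policy's rule count must stay within its cap, which itself cannot exceed 200."""
    return rule_count <= min(max_rules, MAX_RULES)

def cluster_active_rules_ok(rule_counts):
    """Total active rules across all applied policies must not exceed 5000."""
    return sum(rule_counts) <= MAX_ACTIVE_RULES
```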

Firewall performance

Be aware that, while the OneFS firewall can greatly enhance the network security of a cluster, by nature of its packet inspection and filtering activity, it does come with a slight performance penalty (generally less than 5%).

Firewall and hardening mode

If OneFS STIG Hardening (ie. via ‘isi hardening apply’) is applied to a cluster with the OneFS Firewall disabled, the firewall will be automatically activated. On the other hand, if the firewall is already enabled, then there will be no change and it will remain active.

Firewall and user-configurable ports

Some OneFS services allow the TCP/UDP ports on which the daemon listens to be changed. These include:

Service CLI Command Default Port
NDMP isi ndmp settings global modify --port 10000
S3 isi s3 settings global modify --https-port 9020, 9021
SSH isi ssh settings modify --port 22

The default ports for these services are already configured in the associated global policy rules. For example, for the S3 protocol:

# isi network firewall rules list | grep s3

default_pools_policy.rule_s3                  55     Firewall rule on s3 service                                                             allow

# isi network firewall rules view default_pools_policy.rule_s3

          ID: default_pools_policy.rule_s3

        Name: rule_s3

       Index: 55

 Description: Firewall rule on s3 service

    Protocol: TCP

   Dst Ports: 9020, 9021

Src Networks: -

   Src Ports: -

      Action: allow

Note that the global policies, or any custom policies, do not auto-update if these ports are reconfigured. This means that the firewall policies must be manually updated when changing ports. For example, if the NDMP port is changed from 10000 to 10001:

# isi ndmp settings global view

                       Service: False

                          Port: 10000

                           DMA: generic

          Bre Max Num Contexts: 64

MSB Context Retention Duration: 300

MSR Context Retention Duration: 600

        Stub File Open Timeout: 15

             Enable Redirector: False

              Enable Throttler: False

       Throttler CPU Threshold: 50

# isi ndmp settings global modify --port 10001

# isi ndmp settings global view | grep -i port

                          Port: 10001

The firewall’s NDMP rule port configuration must also be reset to 10001:

# isi network firewall rule list | grep ndmp

default_pools_policy.rule_ndmp                44     Firewall rule on ndmp service                                                           allow

# isi network firewall rule modify default_pools_policy.rule_ndmp --dst-ports 10001 --live

# isi network firewall rule view default_pools_policy.rule_ndmp | grep -i dst

   Dst Ports: 10001

Note that the ‘--live’ flag is specified to enact this port change immediately.
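Since firewall rules do not track service port changes automatically, a periodic consistency check can catch this kind of drift before clients are blocked. A sketch, assuming the two port mappings have already been collected by parsing the CLI output shown above:

```python
def find_port_drift(service_ports, rule_ports):
    """Flag services whose listening port is absent from the matching firewall rule.

    service_ports: {service_name: configured_port}
    rule_ports:    {service_name: set of dst-ports in the firewall rule}
    """
    drift = []
    for service, port in sorted(service_ports.items()):
        allowed = rule_ports.get(service, set())
        if port not in allowed:
            drift.append((service, port, sorted(allowed)))
    return drift
```

In the NDMP example above, `find_port_drift({"ndmp": 10001}, {"ndmp": {10000}})` would flag the mismatch until the rule is updated.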

Firewall and source-based routing

Under the hood, OneFS source-based routing (SBR) and the OneFS Firewall both leverage ‘ipfw’. As such, SBR and the firewall share the single ipfw table in the kernel. However, the two features use separate ipfw table partitions.

This allows SBR and the firewall to be activated independently of each other. For example, even if the firewall is disabled, SBR can still be enabled and any configured SBR rules displayed as expected (ie. via ‘ipfw set 0 show’).
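The shared-table, separate-partition arrangement can be pictured as one rule table keyed by set number, where each feature owns its own set and can be flushed without disturbing the other. An illustrative model (not actual ipfw internals; the set numbering is an assumption, with set 0 standing in for SBR per the ‘ipfw set 0 show’ example above):

```python
class IpfwTable:
    """A single rule table partitioned into numbered sets, ipfw-style."""

    def __init__(self):
        self.sets = {}

    def add_rule(self, set_id, rule):
        self.sets.setdefault(set_id, []).append(rule)

    def flush_set(self, set_id):
        # Flushing one feature's partition leaves the other feature's rules intact.
        self.sets.pop(set_id, None)

    def show(self, set_id):
        return list(self.sets.get(set_id, []))
```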

Firewall and IPv6

Note that the firewall’s global default policies have a rule allowing ICMP6 by default. For IPv6 enabled networks, ICMP6 is critical for the functioning of NDP (Neighbor Discovery Protocol). As such, when creating custom firewall policies and rules for IPv6-enabled network subnets/pools, be sure to add a rule allowing ICMP6 to support NDP. As discussed in a previous article, an alternative (and potentially easier) approach is to clone a global policy to a new one and just customize its ruleset instead.

Firewall and FTP

The OneFS FTP service can work in two modes: Active and Passive. Passive mode is the default, where FTP data connections are created on top of random ephemeral ports. However, since the OneFS firewall requires fixed ports to operate, it only supports the FTP service in active mode. Attempts to enable the firewall with FTP running in passive mode will generate the following warning:

# isi ftp settings view | grep -i active

          Active Mode: No

# isi network firewall settings modify --enabled yes

FTP service is running in Passive mode. Enabling network firewall will lead to FTP clients having their connections blocked. To avoid this, please enable FTP active mode and ensure clients are configured in active mode before retrying. Are you sure you want to proceed and enable network firewall? (yes/[no]):

In order to activate the OneFS firewall in conjunction with the FTP service, first ensure the FTP service is running in active mode before enabling the firewall. For example:

# isi ftp settings view | grep -i enable

  FTP Service Enabled: Yes

# isi ftp settings view | grep -i active

          Active Mode: No

# isi ftp settings modify --active-mode true

# isi ftp settings view | grep -i active

          Active Mode: Yes

# isi network firewall settings modify --enabled yes

Note: Verify FTP active mode support and/or firewall settings on the client side, too.

Firewall monitoring and troubleshooting

When it comes to monitoring the OneFS firewall, the following logfiles and utilities provide a variety of information and are a good source to start investigating an issue:

Utility Description
/var/log/isi_firewall_d.log Main OneFS firewall log file, which includes information from firewall daemon.
/var/log/isi_papi_d.log Logfile for the platform API, including firewall-related handlers.
isi_gconfig -t firewall CLI command that displays all firewall configuration info.
ipfw show CLI command which displays the ipfw table residing in the FreeBSD kernel.

Note that the above files and command output are automatically included in logsets generated by the ‘isi_gather_info’ data collection tool.

The isi_gconfig command can be run with the ‘-q’ flag to identify any values that are not at their default settings. For example, the stock (default) isi_firewall_d gconfig context will not report any configuration entries:

# isi_gconfig -q -t firewall

[root] {version:1}

The firewall can also be run in the foreground for additional active rule reporting and debug output. For example, first shut down the isi_firewall_d service:

# isi services -a isi_firewall_d disable

The service 'isi_firewall_d' has been disabled.

Next, start up the firewall with the ‘-f’ flag.

# isi_firewall_d -f

Acquiring kevents for flxconfig

Acquiring kevents for nodeinfo

Acquiring kevents for firewall config

Initialize the firewall library

Initialize the ipfw set

ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.

cmd:/sbin/ipfw set enable 0 normal termination, exit code:0

isi_firewall_d is now running

Loaded master FlexNet config (rev:312)

Update the local firewall with changed files: flx_config, Node info, Firewall config

Start to update the firewall rule...

flx_config version changed!                             latest_flx_config_revision: new:312, orig:0

node_info version changed!                              latest_node_info_revision: new:1, orig:0

firewall gconfig version changed!                               latest_fw_gconfig_revision: new:17, orig:0

Start to update the firewall rule for firewall configuration (gconfig)

Start to handle the firewall configure (gconfig)

Handle the firewall policy default_pools_policy

ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.

32043 allow tcp from any to any 10000 in

cmd:/sbin/ipfw add 32043 set 8 allow TCP from any  to any 10000 in  normal termination, exit code:0

ipfw: Rule added by ipfw is for temporary use and will be auto flushed soon. Use isi firewall instead.

32044 allow tcp from any to any 389,636 in

cmd:/sbin/ipfw add 32044 set 8 allow TCP from any  to any 389,636 in  normal termination, exit code:0

Snip...

If the OneFS firewall is enabled and some network traffic is blocked, either this debug output or the ‘ipfw show’ CLI command will often provide the first clues.

Please note that the ‘ipfw’ command should NEVER be used to modify the OneFS firewall table!

For example, say a rule is added to the default pools policy denying traffic on port 9876 from all source networks (0.0.0.0/0):

# isi network firewall rules create default_pools_policy.rule_9876 --index=100 --dst-ports 9876 --src-networks 0.0.0.0/0 --action deny --live

# isi network firewall rules view default_pools_policy.rule_9876

          ID: default_pools_policy.rule_9876

        Name: rule_9876

       Index: 100

 Description:

    Protocol: ALL

   Dst Ports: 9876

Src Networks: 0.0.0.0/0

   Src Ports: -

      Action: deny

Running ‘ipfw show’ and grepping for the port will show this new rule:

# ipfw show | grep 9876

32099            0               0 deny ip from any to any 9876 in

The ‘ipfw show’ command output also reports the statistics of how many IP packets have matched each rule. This can be incredibly useful when investigating firewall issues. For example, a telnet session is initiated to the cluster on port 9876 from a client:

# telnet 10.224.127.8 9876

Trying 10.224.127.8...

telnet: connect to address 10.224.127.8: Operation timed out

telnet: Unable to connect to remote host

The connection attempt will time out since the port 9876 ‘deny’ rule will silently drop the packets. At the same time, the ‘ipfw show’ command will increment its counter to report on the denied packets. For example:

# ipfw show | grep 9876

32099            9             540 deny ip from any to any 9876 in
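Since each counter line of ‘ipfw show’ follows a fixed ‘rule packets bytes action’ layout, spotting deny rules with matched traffic is straightforward to script. The following Python sketch (an illustration, not a OneFS utility) parses sample output like that shown above:

```python
# Minimal sketch: parse 'ipfw show' output lines of the form
# "<rule#> <packets> <bytes> <action...>" and flag deny rules that have
# matched traffic -- a quick way to spot silently dropped packets.
def blocked_rules(ipfw_output: str):
    hits = []
    for line in ipfw_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0].isdigit():
            rule, pkts = int(fields[0]), int(fields[1])
            if fields[3] == "deny" and pkts > 0:
                hits.append((rule, pkts, " ".join(fields[3:])))
    return hits

sample = "32099            9             540 deny ip from any to any 9876 in"
print(blocked_rules(sample))  # -> [(32099, 9, 'deny ip from any to any 9876 in')]
```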

If this behavior is not anticipated or desired, the offending rule can be found by searching the rules list for the port number, in this case port 9876:

# isi network firewall rules list | grep 9876

default_pools_policy.rule_9876                100                                                                deny

The offending rule can then be reverted to ‘allow’ traffic on port 9876:

# isi network firewall rules modify default_pools_policy.rule_9876 --action allow --live

Or easily deleted, if preferred:

# isi network firewall rules delete default_pools_policy.rule_9876 --live

Are you sure you want to delete firewall rule default_pools_policy.rule_9876? (yes/[no]): yes

OneFS Firewall Configuration – Part 2

In the previous article in this OneFS firewall series, we reviewed the upgrade, activation, and policy selection components of the firewall provisioning process.

Now, we turn our attention to the firewall rule configuration step of the process.

As stated previously, role-based access control (RBAC) explicitly limits who has access to manage the OneFS firewall. So ensure that the user account which will be used to enable and configure the OneFS firewall belongs to a role with the ‘ISI_PRIV_FIREWALL’ write privilege.

  1. Configuring Firewall Rules

Once the desired policy is created, the next step is to configure its rules. The first task here is to decide which ports and services need securing or opening, beyond the defaults.

The following CLI syntax will return a list of all the firewall’s default services, plus their respective ports, protocols, and aliases, sorted by ascending port number:

# isi network firewall services list

Service Name     Port  Protocol  Aliases

---------------------------------------------

ftp-data         20    TCP       -

ftp              21    TCP       -

ssh              22    TCP       -

smtp             25    TCP       -

dns              53    TCP       domain

                       UDP

http             80    TCP       www

                                 www-http

kerberos         88    TCP       kerberos-sec

                       UDP

rpcbind          111   TCP       portmapper

                       UDP       sunrpc

                                 rpc.bind

ntp              123   UDP       -

dcerpc           135   TCP       epmap

                       UDP       loc-srv

netbios-ns       137   UDP       -

netbios-dgm      138   UDP       -

netbios-ssn      139   UDP       -

snmp             161   UDP       -

snmptrap         162   UDP       snmp-trap

mountd           300   TCP       nfsmountd

                       UDP

statd            302   TCP       nfsstatd

                       UDP

lockd            304   TCP       nfslockd

                       UDP

nfsrquotad       305   TCP       -

                       UDP

nfsmgmtd         306   TCP       -

                       UDP

ldap             389   TCP       -

                       UDP

https            443   TCP       -

smb              445   TCP       microsoft-ds

hdfs-datanode    585   TCP       -

asf-rmcp         623   TCP       -

                       UDP

ldaps            636   TCP       sldap

asf-secure-rmcp  664   TCP       -

                       UDP

ftps-data        989   TCP       -

ftps             990   TCP       -

nfs              2049  TCP       nfsd

                       UDP

tcp-2097         2097  TCP       -

tcp-2098         2098  TCP       -

tcp-3148         3148  TCP       -

tcp-3149         3149  TCP       -

tcp-3268         3268  TCP       -

tcp-3269         3269  TCP       -

tcp-5667         5667  TCP       -

tcp-5668         5668  TCP       -

isi_ph_rpcd      6557  TCP       -

isi_dm_d         7722  TCP       -

hdfs-namenode    8020  TCP       -

isi_webui        8080  TCP       apache2

webhdfs          8082  TCP       -

tcp-8083         8083  TCP       -

ambari-handshake 8440  TCP       -

ambari-heartbeat 8441  TCP       -

tcp-8443         8443  TCP       -

tcp-8470         8470  TCP       -

s3-http          9020  TCP       -

s3-https         9021  TCP       -

isi_esrs_d       9443  TCP       -

ndmp             10000 TCP       -

cee              12228 TCP       -

nfsrdma          20049 TCP       -

                       UDP

tcp-28080        28080 TCP       -

---------------------------------------------

Total: 55

Similarly, the following CLI command will generate a list of existing rules and their associated policies, sorted in alphabetical order. For example, to show the first 5 rules:

# isi network firewall rules list --limit 5

ID                                            Index  Description                                                                             Action

----------------------------------------------------------------------------------------------------------------------------------------------------

default_pools_policy.rule_ambari_handshake    41     Firewall rule on ambari-handshake service                                               allow

default_pools_policy.rule_ambari_heartbeat    42     Firewall rule on ambari-heartbeat service                                               allow

default_pools_policy.rule_catalog_search_req  50     Firewall rule on service for global catalog search requests                             allow

default_pools_policy.rule_cee                 52     Firewall rule on cee service                                                            allow

default_pools_policy.rule_dcerpc_tcp          18     Firewall rule on dcerpc(TCP) service                                                    allow

----------------------------------------------------------------------------------------------------------------------------------------------------

Total: 5

Both the ‘isi network firewall rules list’ and ‘isi network firewall services list’ commands also have a ‘-v’ verbose option, and can return their output in csv, list, table, or json formats via the ‘--format’ flag.

The detailed info for a given firewall rule, in this case the default SMB rule, can be viewed with the following CLI syntax:

# isi network firewall rules view default_pools_policy.rule_smb

          ID: default_pools_policy.rule_smb

        Name: rule_smb

       Index: 3

 Description: Firewall rule on smb service

    Protocol: TCP

   Dst Ports: smb

Src Networks: -

   Src Ports: -

      Action: allow

Existing rules can be modified and new rules created and added into an existing firewall policy with the ‘isi network firewall rules create’ CLI syntax. Command options include:

Option Description
--action The action to take on matching packets: ‘allow’ (pass the packet), ‘deny’ (silently drop the packet), or ‘reject’ (reply with an ICMP error code).
id Specifies the ID of the new rule to create. The rule must be added to an existing policy. The ID can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces or other punctuation. Specify the rule ID in the format <policy_name>.<rule_name>. The rule name must be unique within the policy.
--index The rule index within the policy. Valid values are between 1 and 99, with lower values having higher priority. If not specified, the rule is automatically assigned the next available index (before the default rule at index 100).
--live Must only be used when creating, modifying, or deleting a rule in an active policy. Such changes take effect immediately on all network subnets and pools associated with this policy. Using the live option on a rule in an inactive policy will be rejected, and an error message will be returned.
--protocol Specifies the protocol matched for inbound packets. Available values are tcp, udp, icmp, and all. If not configured, the default protocol ‘all’ is used.
--dst-ports Specifies the destination port(s) or service name(s) on the storage system to match. The protocol specified by --protocol is applied to these destination ports.
--src-networks Specifies one or more IP addresses with corresponding netmasks to be matched by this rule. The correct format for this parameter is address/netmask, similar to “192.0.2.128/25”. Multiple address/netmask pairs should be separated with commas. Use the value 0.0.0.0/0 for “any”.
--src-ports Specifies the source port(s) to match. The protocol specified by --protocol is applied to these source ports.

Note that, unlike for firewall policies, there is no provision for cloning individual rules.
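The address/netmask matching semantics of ‘--src-networks’ can be illustrated with Python’s standard ipaddress module (a conceptual sketch, not OneFS code): 0.0.0.0/0 matches any source address, while a narrower prefix such as 192.0.2.128/25 only matches the upper half of that /24.

```python
import ipaddress

# Illustration of CIDR source-network matching as used by --src-networks.
def src_matches(src_ip: str, networks: list[str]) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(net) for net in networks)

print(src_matches("192.0.2.200", ["192.0.2.128/25"]))  # True
print(src_matches("192.0.2.10", ["192.0.2.128/25"]))   # False
print(src_matches("10.20.30.40", ["0.0.0.0/0"]))       # True
```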

The following CLI syntax can be used to create new firewall rules. For example, to add ‘allow’ rules for the HTTP and SSH protocols, plus a ‘deny’ rule for port TCP 9876, into firewall policy fw_test1:

# isi network firewall rules create  fw_test1.rule_http  --index 1 --dst-ports http --src-networks 10.20.30.0/24,20.30.40.0/24 --action allow

# isi network firewall rules create fw_test1.rule_ssh --index 2 --dst-ports ssh --src-networks 10.20.30.0/24,20.30.40.0/24 --action allow

# isi network firewall rules create fw_test1.rule_tcp_9876 --index 3 --protocol tcp --dst-ports 9876 --src-networks 10.20.30.0/24,20.30.40.0/24 --action deny

When a new rule is created in a policy, if the index value is not specified, it will automatically be assigned the next available index (i.e. index=4 in this case):

# isi network firewall rules create fw_test1.rule_2049 --protocol udp --dst-ports 2049 --src-networks 30.1.0.0/16 --action deny
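The index auto-assignment behavior can be sketched as follows (an illustration of the described behavior, not OneFS code), assuming the new rule takes the lowest unused index ahead of the default rule at index 100:

```python
# Sketch: pick the next available rule index in a policy. The default
# action rule always occupies index 100, so user rules live in 1-99.
def next_index(used: set, default_index: int = 100) -> int:
    for i in range(1, default_index):
        if i not in used:
            return i
    raise ValueError("policy is full")

# With rules already at indices 1, 2 and 3, the new rule lands at 4.
print(next_index({1, 2, 3}))  # -> 4
```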

For a more draconian approach, a ‘deny’ rule could be created using the match-everything ‘*’ wildcard for destination ports and a 0.0.0.0/0 network and mask, which would silently drop all traffic:

# isi network firewall rules create fw_test1.rule_1234 --index=100 --dst-ports * --src-networks 0.0.0.0/0 --action deny

When modifying existing firewall rules, the following CLI syntax can be used, in this case to change the source network of an HTTP allow rule (index 1) in firewall policy fw_test1:

# isi network firewall rules modify fw_test1.rule_http --index 1 --protocol all --dst-ports http --src-networks 10.1.0.0/16 --action allow

Or to modify an SSH rule (index 2) in firewall policy fw_test1, changing the action from ‘allow’ to ‘deny’:

# isi network firewall rules modify fw_test1.rule_ssh --index 2 --protocol tcp --dst-ports ssh --src-networks 10.1.0.0/16,20.2.0.0/16 --action deny

Also, to re-order the custom TCP 9876 rule from the earlier example from index 3 to index 7 in firewall policy fw_test1:

# isi network firewall rules modify fw_test1.rule_tcp_9876 --index 7

Note that all rules at index 7 or higher will have their index values incremented by one.
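The re-indexing behavior described above can be sketched in Python (an illustration only, using the hypothetical rule names from the earlier examples):

```python
# Sketch: move a rule to a new index; existing rules at the target index
# or higher are shifted down (incremented) by one, per the behavior above.
def move_rule(rules: dict, name: str, new_index: int) -> dict:
    updated = {}
    for r, idx in rules.items():
        if r == name:
            updated[r] = new_index
        elif idx >= new_index:
            updated[r] = idx + 1
        else:
            updated[r] = idx
    return updated

rules = {"rule_a": 6, "rule_tcp_9876": 3, "rule_b": 7}
print(move_rule(rules, "rule_tcp_9876", 7))
# -> {'rule_a': 6, 'rule_tcp_9876': 7, 'rule_b': 8}
```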

When deleting a rule from a firewall policy, any rule reordering is handled automatically. If the policy has been applied to a network pool, the ‘--live’ option can be used to force the change to take effect immediately. For example, to delete the HTTP rule from the firewall policy ‘fw_test1’:

# isi network firewall rules delete fw_test1.rule_http --live

Firewall rules can also be created, modified and deleted within a policy from the WebUI by navigating to Cluster management > Firewall Configuration > Firewall Policies. For example, to create a rule that permits SupportAssist and Secure Gateway traffic on the 10.219.0.0/16 network:

Once saved, the new rule is then displayed in the Firewall Configuration page:

  1. Firewall management and monitoring

In the next and final article in this series, we’ll turn our attention to managing, monitoring, and troubleshooting the OneFS firewall (step 5).

OneFS Firewall Configuration – Part 1

The new firewall in OneFS 9.5 enhances the security of the cluster and helps prevent unauthorized access to the storage system. When enabled, the default firewall configuration allows remote systems access to a specific set of default services for data, management, and inter-cluster interfaces (network pools).

The basic OneFS firewall provisioning process is as follows:

Note that role-based access control (RBAC) explicitly limits who has access to manage the OneFS firewall. In addition to the ubiquitous ‘root’, the cluster’s built-in SystemAdmin role has write privileges to configure and administer the firewall.

  1. Upgrade to OneFS 9.5

First, the cluster must be running OneFS 9.5 in order to provision the firewall.

If upgrading from an earlier release, the OneFS 9.5 upgrade must be committed before enabling the firewall.

Also, be aware that configuration and management of the firewall in OneFS 9.5 requires the new ISI_PRIV_FIREWALL administration privilege. This can be granted to a role with either  read-only or read-write privileges.

# isi auth privilege | grep -i firewall

ISI_PRIV_FIREWALL                   Configure network firewall

By default, the built-in ‘SystemAdmin’ role is granted write privileges to administer the firewall:

# isi auth roles view SystemAdmin | grep -A2 -i firewall

             ID: ISI_PRIV_FIREWALL

     Permission: w

Additionally, the built-in ‘AuditAdmin’ role has read permission to view the firewall configuration and logs, etc:

# isi auth roles view AuditAdmin | grep -A2 -i firewall

             ID: ISI_PRIV_FIREWALL

     Permission: r

Ensure that the user account which will be used to enable and configure the OneFS firewall belongs to a role with the ‘ISI_PRIV_FIREWALL’ write privilege.

  1. Activate Firewall

As mentioned previously, the OneFS firewall can be either ‘enabled’ or ‘disabled’, with the latter as the default state. The following CLI syntax will display the firewall’s global status – in this case ‘disabled’ (the default):

# isi network firewall settings view

Enabled: False

Firewall activation can be easily performed from the CLI as follows:

# isi network firewall settings modify --enabled true

# isi network firewall settings view

Enabled: True

Or from the WebUI under Cluster management > Firewall Configuration > Settings:

Note that the firewall is automatically enabled when STIG hardening is applied to a cluster.

  1. Pick policies

A cluster’s existing firewall policies can be easily viewed from the CLI with the following command:

# isi network firewall policies list

ID        Pools                    Subnets                   Rules
-----------------------------------------------------------------------------
fw_test1  groupnet0.subnet0.pool0  groupnet0.subnet1         test_rule1
-----------------------------------------------------------------------------
Total: 1

Or from the WebUI under Cluster management > Firewall Configuration > Firewall Policies:

The OneFS firewall offers four main strategies when it comes to selecting a firewall policy. These include:

  1. Retaining the default policy
  2. Reconfiguring the default policy
  3. Cloning the default policy and reconfiguring
  4. Creating a custom firewall policy

We’ll consider each of these strategies in order:

a.  Retaining the default policy

In many cases, the default OneFS firewall policy will provide acceptable protection for a security-conscious organization. In these instances, once the OneFS firewall has been enabled on a cluster, no further configuration is required, and the cluster administrators can move on to the management and monitoring phase.

The firewall policy for all front-end cluster interfaces (network pools) is ‘default’. While the default policy can be modified, be aware that this policy is global: any change against it will impact all network pools using the default policy.

The following table describes the default firewall policies that are assigned to each interface:

Policy Description
Default pools policy Contains rules for the inbound default ports for TCP and UDP services in OneFS.
Default subnets policy Contains rules for:

·         DNS port 53

·         Rule for ICMP

·         Rule for ICMP6

These can be viewed from the CLI as follows:

# isi network firewall policies view default_pools_policy

            ID: default_pools_policy

          Name: default_pools_policy

   Description: Default Firewall Pools Policy

Default Action: deny

     Max Rules: 100

         Pools: groupnet0.subnet0.pool0, groupnet0.subnet0.testpool1, groupnet0.subnet0.testpool2, groupnet0.subnet0.testpool3, groupnet0.subnet0.testpool4, groupnet0.subnet0.poolcava

       Subnets: -

         Rules: rule_ldap_tcp, rule_ldap_udp, rule_reserved_for_hw_tcp, rule_reserved_for_hw_udp, rule_isi_SyncIQ, rule_catalog_search_req, rule_lwswift, rule_session_transfer, rule_s3, rule_nfs_tcp, rule_nfs_udp, rule_smb, rule_hdfs_datanode, rule_nfsrdma_tcp, rule_nfsrdma_udp, rule_ftp_data, rule_ftps_data, rule_ftp, rule_ssh, rule_smtp, rule_http, rule_kerberos_tcp, rule_kerberos_udp, rule_rpcbind_tcp, rule_rpcbind_udp, rule_ntp, rule_dcerpc_tcp, rule_dcerpc_udp, rule_netbios_ns, rule_netbios_dgm, rule_netbios_ssn, rule_snmp, rule_snmptrap, rule_mountd_tcp, rule_mountd_udp, rule_statd_tcp, rule_statd_udp, rule_lockd_tcp, rule_lockd_udp, rule_nfsrquotad_tcp, rule_nfsrquotad_udp, rule_nfsmgmtd_tcp, rule_nfsmgmtd_udp, rule_https, rule_ldaps, rule_ftps, rule_hdfs_namenode, rule_isi_webui, rule_webhdfs, rule_ambari_handshake, rule_ambari_heartbeat, rule_isi_esrs_d, rule_ndmp, rule_isi_ph_rpcd, rule_cee, rule_icmp, rule_icmp6, rule_isi_dm_d




# isi network firewall policies view default_subnets_policy

            ID: default_subnets_policy

          Name: default_subnets_policy

   Description: Default Firewall Subnets Policy

Default Action: deny

     Max Rules: 100

         Pools: -

       Subnets: groupnet0.subnet0

         Rules: rule_subnets_dns_tcp, rule_subnets_dns_udp, rule_icmp, rule_icmp6

Or from the WebUI under Cluster Management > Firewall Configuration > Firewall Policies:

 

b.  Reconfiguring the default policy

Depending on an organization’s threat levels or security mandates, there may be a need to restrict access to certain additional IP addresses and/or management service protocols.

If the default policy is deemed insufficient, reconfiguring the default firewall policy can be a good option if only a small number of rule changes are required. The specifics of creating, modifying, and deleting individual firewall rules is covered later in this article (step 3 below).

Note that if new rule changes behave unexpectedly, or configuring the firewall generally goes awry, OneFS does provide a ‘get out of jail free’ card. In a pinch, the global firewall policy can be quickly and easily restored to its default values. This can be achieved with the following CLI syntax:

# isi network firewall reset-global-policy

This command will reset the global firewall policies to the original system defaults. Are you sure you want to continue? (yes/[no]):

 

Alternatively, the default policy can also be easily reverted from the WebUI too, by clicking the ‘Reset default policies’ button:

c.  Cloning the default policy and reconfiguring

Another option is cloning, which can be useful when batch modification or a large number of changes to the current policy are required. By cloning the default firewall policy, an exact copy of the existing policy and its rules is generated, but with a new policy name. For example:

# isi network firewall policies clone default_pools_policy clone_default_pools_policy

# isi network firewall policies list | grep -i clone

clone_default_pools_policy -

Cloning can also be initiated from the WebUI under Firewall Configuration > Firewall Policies > More Actions > Clone Policy:

Enter the desired name of the clone in the ‘Policy Name’ field in the pop-up window and click ‘Save’:

Once cloned, the policy can then be easily reconfigured to suit. For example, to modify the policy ‘fw_test1’ and change its default action from ‘deny’ to ‘allow’:

# isi network firewall policies modify fw_test1 --default-action allow

When modifying a firewall policy, the ‘--live’ CLI option can be used to force the change to take effect immediately. Note that the ‘--live’ option is only valid when modifying or deleting an active custom policy, or modifying a default policy. Such changes take effect immediately on all network subnets and pools associated with the policy. Using the live option on an inactive policy will be rejected, and an error message returned.

Options for creating or modifying a firewall policy include:

Option Description
--default-action Automatically adds one rule to ‘deny all’ or ‘allow all’ at the bottom of the rule set for the created policy (index = 100).
--max-rule-num By default, each policy can contain a maximum of 100 rules (including the one default rule), so up to 99 rules can be configured. The maximum can be expanded to a specified value, currently limited to 200 (i.e. up to 199 configurable rules).
--add-subnets Specifies the network subnet(s) to add to the policy, separated by commas.
--remove-subnets Specifies the network subnet(s) to remove from the policy, falling back to the global policy.
--add-pools Specifies the network pool(s) to add to the policy, separated by commas.
--remove-pools Specifies the network pool(s) to remove from the policy, falling back to the global policy.

When modifying firewall policies, OneFS prints the following warning to verify the changes and help avoid the risk of a self-induced denial-of-service:

# isi network firewall policies modify --pools groupnet0.subnet0.pool0 fw_test1

Changing the Firewall Policy associated with a subnet or pool may change the networks and/or services allowed to connect to OneFS. Please confirm you have selected the correct Firewall Policy and Subnets/Pools. Are you sure you want to continue? (yes/[no]): yes

Once again, having the following CLI command handy, plus console access to the cluster is always a prudent move:

# isi network firewall reset-global-policy

Note that adding network pools or subnets to a firewall policy will cause the previous policy to be removed from them. Similarly, adding network pools or subnets back to the global default policy will revert any custom policy configuration they might have. For example, to apply the firewall policy fw_test1 to IP pools groupnet0.subnet0.pool0 and groupnet0.subnet0.pool1:

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall

      Firewall Policy: default_pools_policy

# isi network firewall policies modify fw_test1 --add-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall

      Firewall Policy: fw_test1

Or, to apply the firewall policy fw_test1 to both IP pool groupnet0.subnet0.pool0 and subnet groupnet0.subnet0:

# isi network firewall policies modify fw_test1 --add-pools groupnet0.subnet0.pool0 --add-subnets groupnet0.subnet0

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall

 Firewall Policy: fw_test1

# isi network subnets view groupnet0.subnet0 | grep -i firewall

 Firewall Policy: fw_test1

To reapply global policy at any time, either add the pools to the default policy:

# isi network firewall policies modify default_pools_policy --add-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1

# isi network pools view groupnet0.subnet0.pool0 | grep -i firewall

 Firewall Policy: default_pools_policy

# isi network subnets view groupnet0.subnet1 | grep -i firewall

 Firewall Policy: default_subnets_policy

Or remove the pool from the custom policy:

# isi network firewall policies modify fw_test1 --remove-pools groupnet0.subnet0.pool0,groupnet0.subnet0.pool1

Firewall policies can also be managed on the desired network pool in the OneFS WebUI by navigating to Cluster configuration > Network configuration > External network > Edit pool details. For example:

Be aware that cloning is not limited to the default policy: clones can be made of any custom policy, too. For example:

# isi network firewall policies clone clone_default_pools_policy fw_test1

d.  Creating a custom firewall policy

Alternatively, a custom firewall policy can also be created from scratch. This can be accomplished from the CLI using the following syntax, in this case to create a firewall policy named ‘fw_test1’:

# isi network firewall policies create fw_test1 --default-action deny

# isi network firewall policies view fw_test1

            ID: fw_test1

          Name: fw_test1

   Description:

Default Action: deny

     Max Rules: 100

         Pools: -

       Subnets: -

         Rules: -

Note that if a ‘default-action’ is not specified in the CLI command syntax, it will automatically default to deny.

Firewall policies can also be configured via the OneFS WebUI by navigating to Cluster management > Firewall Configuration > Firewall Policies > Create Policy:

However, in contrast to the CLI, if a ‘default-action’ is not specified when creating a policy in the WebUI, it will automatically default to ‘Allow’ instead, since the drop-down list works alphabetically.

If and when a firewall policy is no longer required, it can be swiftly and easily removed. For example, the following CLI syntax will delete the firewall policy ‘fw_test1’, clearing out any rules within this policy container:

# isi network firewall policies delete fw_test1

Are you sure you want to delete firewall policy fw_test1? (yes/[no]): yes

Note that the default global policies cannot be deleted.

# isi network firewall policies delete default_subnets_policy

Are you sure you want to delete firewall policy default_subnets_policy? (yes/[no]): yes

Firewall policy: Cannot delete default policy default_subnets_policy.
  1. Configuring Firewall Rules

In the next article in this series, we’ll turn our attention to configuring the OneFS firewall rule(s) (step 4).

OneFS Firewall

Among the array of security features introduced in OneFS 9.5 is a new host-based firewall. This firewall allows cluster administrators to configure policies and rules on a PowerScale cluster in order to meet the network and application management needs and security mandates of an organization.

The OneFS firewall protects the cluster’s external, or front-end, network and operates as a packet filter for inbound traffic. It is available upon installation or upgrade to OneFS 9.5, but is disabled by default in both cases. In addition to manual activation, the OneFS STIG hardening profile automatically enables the firewall, along with its default policies.

The firewall generally manages IP packet filtering in accordance with the OneFS Security Configuration Guide, particularly with regard to network port usage. Packet control is governed by firewall policies, each of which comprises one or more individual rules.

Item Description Match Action
Firewall Policy Each policy is a set of firewall rules. Rules are matched by index, in ascending order. Each policy has a default action.
Firewall Rule Each rule specifies what kinds of network packets should be matched by the firewall engine, and what action should be taken upon them. Matching criteria include the protocol, source port(s), destination port(s), and source network address. Options are ‘allow’, ‘deny’, or ‘reject’.

A security best practice is to enable the OneFS firewall using the default policies, with any adjustments as required. The recommended configuration process is as follows:

Step Details
1.  Access Ensure that the cluster uses a default SSH or HTTP port before enabling. The default firewall policies block all nondefault ports until you change the policies.
2.  Enable Enable the OneFS firewall.
3.  Compare Compare your cluster network port configurations against the default ports listed in Network port usage.
4.  Configure Edit the default firewall policies to accommodate any non-standard ports in use in the cluster. NOTE: The firewall policies do not automatically update when port configurations are changed.
5.  Constrain Limit access to the OneFS WebUI to specific administrator terminals.

Under the hood, the OneFS firewall is built upon the ubiquitous ‘ipfirewall’, or ‘ipfw’, which is FreeBSD’s native stateful firewall, packet filter and traffic accounting facility.

Firewall configuration and management is available via the CLI, platform API, or WebUI, and OneFS 9.5 introduces a new Firewall Configuration WebUI page to support this. Note that the firewall is only available once a cluster is running OneFS 9.5 and the feature has been manually enabled, activating the isi_firewall_d service. The firewall’s configuration is split between gconfig, which handles the settings and policies, and the ipfw table, which stores the rules themselves.

The firewall gracefully handles any SmartConnect dynamic IP movement between nodes since firewall policies are applied per network pool. Additionally, being network pool based allows the firewall to support OneFS access zones and shared/multitenancy models.

The individual firewall rules, which are essentially simplified wrappers around ipfw rules, work by matching packets via the 5-tuples that uniquely identify an IPv4 UDP or TCP session:

  • Source IP address
  • Source port
  • Destination IP address
  • Destination port
  • Transport protocol

The rules are then organized within a firewall policy, which can be applied to one or more network pools.

Note that each pool can only have a single firewall policy applied to it. If there is no custom firewall policy configured for a network pool, it automatically uses the global default firewall policy.

When enabled, the OneFS firewall operates cluster-wide, and all inbound packets from external interfaces pass through either a custom policy or the default global policy before reaching the protocol handling pathways. Packets passed to the firewall are compared against each of the rules in the policy, in rule-number order. Multiple rules with the same number are permitted, in which case they are processed in order of insertion. When a match is found, the action corresponding to that matching rule is performed. A packet is checked against the active ruleset in multiple places in the protocol stack; the basic flow is as follows:

  1. Get the logical interface for incoming packets
  2. Find all network pools assigned to this interface
  3. Compare these network pools one by one against the destination IP address to find the matching pool and its associated policy (either a custom firewall policy or the default global policy).
  4. Compare each rule in that policy against the service (protocol and destination ports) and source IP address, in ascending index order. If a rule matches, perform the action associated with it.
  5. If no rule matches, apply the final rule (‘deny all’ or ‘allow all’), which is specified upon policy creation.
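As an illustration of this flow, the following Python sketch models steps 2 through 5. The pool and policy structures here are hypothetical simplifications for clarity, not OneFS internals:

```python
import ipaddress

# Conceptual model of the firewall evaluation flow above -- the pool and
# policy structures are hypothetical simplifications, not OneFS internals.

def find_pool(pools, dst_ip):
    """Steps 2-3: match the destination IP against each pool's subnet."""
    for pool in pools:
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(pool["subnet"]):
            return pool
    return None

def evaluate(policy, pkt):
    """Steps 4-5: check rules in ascending index order, else the default action."""
    for rule in sorted(policy["rules"], key=lambda r: r["index"]):
        if (rule["proto"] in ("*", pkt["proto"])
                and rule["dst_port"] in ("*", pkt["dst_port"])
                and ipaddress.ip_address(pkt["src_ip"])
                    in ipaddress.ip_network(rule["src_net"])):
            return rule["action"]
    return policy["default_action"]  # the final 'deny all' / 'allow all' rule

# An example policy allowing only SSH (TCP port 22), denying everything else.
default_policy = {"default_action": "deny", "rules": [
    {"index": 1, "proto": "tcp", "dst_port": 22, "src_net": "0.0.0.0/0", "action": "allow"},
]}
pools = [{"subnet": "192.168.1.0/24", "policy": default_policy}]

pool = find_pool(pools, "192.168.1.10")                      # step 3: pool found
pkt = {"proto": "tcp", "dst_port": 22, "src_ip": "10.1.2.3"}
print(evaluate(pool["policy"], pkt))  # allow
```

An SSH packet matches rule index 1 and is allowed; any other traffic falls through to the policy's ‘deny’ default action.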

The OneFS firewall automatically reserves 20,000 rules in the ipfw table for its custom and default policies and rules. By default, each policy can have a maximum of 100 rules, including one default rule. This translates to an effective maximum of 99 user-defined rules per policy, because the default rule is reserved and cannot be modified. Similarly, a maximum of 198 policies can be applied to pools or subnets, since the default-pools-policy and default-subnets-policy are reserved and cannot be deleted.
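The arithmetic behind these capacities can be sketched as follows. The constant names are illustrative, and the overall 200-policy cap is inferred from the 198-policy figure above:

```python
# Illustrative constants reflecting the limits described above.
DEFAULT_MAX_RULES = 100  # per-policy maximum, including the one reserved default rule
MAX_POLICIES = 200       # assumed overall cap, including the two reserved default policies

def user_rule_capacity() -> int:
    # The default rule is reserved and cannot be modified.
    return DEFAULT_MAX_RULES - 1

def custom_policy_capacity() -> int:
    # default-pools-policy and default-subnets-policy are reserved.
    return MAX_POLICIES - 2

print(user_rule_capacity(), custom_policy_capacity())  # 99 198
```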

Additional firewall bounds and limits to keep in mind include:

Name Value Description
MAX_INTERFACES 500 Maximum number of Layer 2 interfaces per node (including Ethernet, VLAN, LAGG interfaces).
MAX_SUBNETS 100 Maximum number of subnets within a OneFS cluster
MAX_POOLS 100 Maximum number of network pools within a OneFS cluster
DEFAULT_MAX_RULES 100 Default value of maximum rules within a firewall policy
MAX_RULES 200 Upper limit of maximum rules within a firewall policy
MAX_ACTIVE_RULES 5000 Upper limit of total active rules across the whole cluster
MAX_INACTIVE_POLICIES 200 Maximum number of policies which are not applied to any network subnet or pool. These are not written into the ipfw table.

The firewall default global policy is ready to use out of the box and, unless a custom policy has been explicitly configured, all network pools use this global policy. Custom policies can be configured by either cloning and modifying an existing policy or creating one from scratch.

Component Description
Custom policy A user-defined container with a set of rules. A policy can be applied to multiple network pools, but a network pool can only apply one policy.
Firewall rule An ipfw-like rule which can be used to restrict remote access. Each rule has an index which is valid within its policy. Index values range from 1 to 99, with lower numbers having higher priority. Source networks are described by IP address and netmask, and services can be expressed either by port number (e.g. 80) or service name (e.g. http, ssh, smb). The ‘*’ wildcard can also be used to denote all services. Supported actions include ‘allow’, ‘deny’, and ‘reject’.
Default policy A global policy to manage all default services, used to maintain minimum OneFS operation and management. While ‘deny any’ is the policy’s default action, its pre-defined service rules default to allowing remote access. Any packet not matching one of the rules is automatically dropped.

There are two default policies:

  • default-pools-policy
  • default-subnets-policy

Note that these two default policies cannot be deleted, but individual rules within each can be modified.

Default services The firewall’s default pre-defined services include the usual suspects, such as: DNS, FTP, HDFS, HTTP, HTTPS, ICMP, NDMP, NFS, NTP, S3, SMB, SNMP, SSH, etc. A full listing is available via the ‘isi network firewall services list’ CLI command output.

For a given network pool, either the global policy or a custom policy is assigned and takes effect. Additionally, all configuration changes to either policy type are managed by gconfig and are persistent across cluster reboots.

In the next article in this series we’ll take a look at the configuration and management of the OneFS firewall.

OneFS Snapshot Security

In this era of elevated cyber-crime and data security threats, there is increasing demand for immutable, tamper-proof snapshots. Often this need arises as part of a broader security mandate, ideally proactively, but oftentimes as a response to a security incident. OneFS addresses this requirement in the following ways:

On-cluster:

  • Read-only snapshots
  • Snapshot locks
  • Role-based administration

Off-cluster:

  • SyncIQ snapshot replication
  • Cyber-vaulting

  1. Read-only snapshots

At its core, OneFS SnapshotIQ generates read-only, point-in-time, space efficient copies of a defined subset of a cluster’s data.

Only the changed blocks of a file are stored when updating OneFS snapshots, ensuring efficient storage utilization. They are also highly scalable and typically take less than a second to create, while generating little performance overhead. As such, the RPO (recovery point objective) and RTO (recovery time objective) of a OneFS snapshot can be very small and highly flexible, with the use of rich policies and schedules.

OneFS snapshots can be created manually, on a schedule, or generated automatically by OneFS to facilitate system operations. But whatever the generation method, once a snapshot has been taken, its contents cannot be manually altered.

  2. Snapshot Locks

In addition to snapshot contents immutability, for an enhanced level of tamper-proofing, SnapshotIQ also provides the ability to lock snapshots with the ‘isi snapshot locks’ CLI syntax. This prevents snapshots from being accidentally or unintentionally deleted.

For example, a manual snapshot, ‘snaploc1’ is taken of /ifs/test:

# isi snapshot snapshots create /ifs/test --name snaploc1

# isi snapshot snapshots list | grep snaploc1

79188 snaploc1                                     /ifs/test

A lock is then placed on it (in this case lock ID=1):

# isi snapshot locks create snaploc1

# isi snapshot locks list snaploc1

ID

----

1

----

Total: 1

Attempts to delete the snapshot fail because the lock prevents its removal:

# isi snapshot snapshots delete snaploc1

Are you sure? (yes/[no]): yes

Snapshot "snaploc1" can't be deleted because it is locked

The CLI command ‘isi snapshot locks delete <lock_ID>’ can be used to clear existing snapshot locks, if desired. For example, to remove the only lock (ID=1) from snapshot ‘snaploc1’:

# isi snapshot locks list snaploc1

ID

----

1

----

Total: 1

# isi snapshot locks delete snaploc1 1

Are you sure you want to delete snapshot lock 1 from snaploc1? (yes/[no]): yes

# isi snap locks view snaploc1 1

No such lock

Once the lock is removed, the snapshot can then be deleted:

# isi snapshot snapshots delete snaploc1

Are you sure? (yes/[no]): yes

# isi snapshot snapshots list| grep -i snaploc1 | wc -l

       0

Note that a snapshot can have up to a maximum of sixteen locks on it at any time. Also, lock numbers are continually incremented and not recycled upon deletion.

Like snapshot expiry, snapshot locks can also have an expiration time configured. For example, to set a lock on snapshot ‘snaploc1’ that expires at 1am on April 1st, 2024:

# isi snap lock create snaploc1 --expires '2024-04-01T01:00:00'

# isi snap lock list snaploc1

ID

----

36

----

Total: 1

# isi snap lock view snaploc1 36

     ID: 36

Comment:

Expires: 2024-04-01T01:00:00

  Count: 1

Note that if the duration period of a particular snapshot lock expires but others remain, OneFS will not delete that snapshot until all the locks on it have been deleted or expired.
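Pulling together the lock semantics described above — at most sixteen locks per snapshot, monotonically increasing lock IDs, and deletion blocked until every lock is removed or expired — a conceptual Python model (not OneFS code) might look like:

```python
import time

class SnapshotLockError(Exception):
    pass

class Snapshot:
    """Conceptual model of OneFS snapshot lock semantics (illustrative only)."""
    MAX_LOCKS = 16

    def __init__(self, name):
        self.name = name
        self._next_lock_id = 1   # lock IDs increment and are never recycled
        self._locks = {}         # lock_id -> expiry timestamp (None = no expiry)

    def create_lock(self, expires=None):
        if len(self._locks) >= self.MAX_LOCKS:
            raise SnapshotLockError("a snapshot may have at most 16 locks")
        lock_id = self._next_lock_id
        self._next_lock_id += 1
        self._locks[lock_id] = expires
        return lock_id

    def delete_lock(self, lock_id):
        del self._locks[lock_id]

    def can_delete(self, now=None):
        # Deletable only once every lock has been removed or has expired.
        now = now if now is not None else time.time()
        return all(exp is not None and exp <= now for exp in self._locks.values())

snap = Snapshot("snaploc1")
lid = snap.create_lock()
print(snap.can_delete())   # False: an unexpired lock is present
snap.delete_lock(lid)
print(snap.can_delete())   # True
print(snap.create_lock())  # 2: lock IDs are not recycled
```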

The following table provides an example snapshot expiration schedule, with monthly locked snapshots to prevent deletion:

Snapshot Frequency Snapshot Time Snapshot Expiration Max Retained Snapshots
Every other hour Start at 12:00AM, end at 11:59AM 1 day 27
Every day At 12:00AM 1 week
Every week Saturday at 12:00AM 1 month
Every month First Saturday of month at 12:00AM Locked

3. Role-based Access Control

Read-only snapshots plus locks provide secure snapshots on a cluster. However, anyone able to log in to the cluster with sufficiently elevated administrator privileges can still remove locks and/or delete snapshots.

Since data security threats come from inside an environment as well as outside, such as from a disgruntled IT employee or other internal bad actor, another key to a robust security profile is to constrain the use of all-powerful ‘root’, ‘administrator’, and ‘sudo’ accounts as much as possible. Instead of granting cluster admins full rights, a preferred security best practice is to leverage the comprehensive authentication, authorization, and accounting framework that OneFS natively provides.

OneFS role-based access control (RBAC) can be used to explicitly limit who has access to manage and delete snapshots. This granular control allows administrative roles to be crafted which can create and manage snapshot schedules, but prevent their unlocking and/or deletion. Similarly, lock removal and snapshot deletion can be isolated to a specific security role (or to root only).

A cluster security administrator selects the desired access zone, creates a zone-aware role within it, assigns privileges, and then assigns members.

For example, from the WebUI under Access > Membership and roles > Roles:

When these members login to the cluster via a configuration interface (WebUI, Platform API, or CLI) they inherit their assigned privileges.

The specific privileges that can be used to segment OneFS snapshot management include:

Privilege Description
ISI_PRIV_SNAPSHOT_ALIAS Aliasing for snapshots
ISI_PRIV_SNAPSHOT_LOCKS Locking of snapshots from deletion
ISI_PRIV_SNAPSHOT_PENDING Upcoming snapshot based on schedules
ISI_PRIV_SNAPSHOT_RESTORE Restoring directory to a particular snapshot
ISI_PRIV_SNAPSHOT_SCHEDULES Scheduling for periodic snapshots
ISI_PRIV_SNAPSHOT_SETTING Service and access settings
ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT Manual snapshots and locks
ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY Snapshot summary and usage details

Each privilege can be assigned one of four permission levels for a role, including:

Permission Indicator Description
- No permission.
R Read-only permission.
X Execute permission.
W Write permission.

The ability for a user to delete a snapshot is governed by the ‘ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT’ privilege.  Similarly, the ‘ISI_PRIV_SNAPSHOT_LOCKS’ governs lock creation and removal.
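Conceptually, the privilege check behaves like the following sketch. The privilege names follow the table above, but the strict ordering of permission levels (with write strongest) is a modeling simplification, not the PAPI implementation:

```python
# Permission levels, weakest to strongest -- an illustrative simplification.
LEVELS = {"-": 0, "r": 1, "x": 2, "w": 3}

def check_privilege(role, privilege, required):
    """Return True if the role holds 'privilege' at or above the 'required' level."""
    held = role.get(privilege, "-")
    return LEVELS[held] >= LEVELS[required]

snap_role = {"ISI_PRIV_SNAPSHOT_LOCKS": "r"}

# Viewing locks requires read access; deleting them requires write access.
print(check_privilege(snap_role, "ISI_PRIV_SNAPSHOT_LOCKS", "r"))  # True
print(check_privilege(snap_role, "ISI_PRIV_SNAPSHOT_LOCKS", "w"))  # False

snap_role["ISI_PRIV_SNAPSHOT_LOCKS"] = "w"
print(check_privilege(snap_role, "ISI_PRIV_SNAPSHOT_LOCKS", "w"))  # True
```

This mirrors the CLI behavior shown below: a role with only read permission can list locks, but lock deletion fails until write permission is granted.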

In the following example, the ‘snap’ role has ‘read’ rights for the ‘ISI_PRIV_SNAPSHOT_LOCKS’ privilege, allowing a user associated with this role to view snapshot locks:

# isi auth roles view snap | grep -i -A 1 locks

             ID: ISI_PRIV_SNAPSHOT_LOCKS

     Permission: r

--

# isi snapshot locks list snaploc1

ID

----

1

----

Total: 1

However, attempts to remove the lock ‘ID 1’ from the ‘snaploc1’ snapshot fail without write privileges:

# isi snapshot locks delete snaploc1 1

Privilege check failed. The following write privilege is required: Snapshot locks (ISI_PRIV_SNAPSHOT_LOCKS)

Write privileges are added to ‘ISI_PRIV_SNAPSHOT_LOCKS’ in the ‘snap’ role:

# isi auth roles modify snap --add-priv-write ISI_PRIV_SNAPSHOT_LOCKS

# isi auth roles view snap | grep -i -A 1 locks

             ID: ISI_PRIV_SNAPSHOT_LOCKS

     Permission: w

--

This allows the lock ‘ID 1’ to be successfully deleted from the ‘snaploc1’ snapshot:

# isi snapshot locks delete snaploc1 1

Are you sure you want to delete snapshot lock 1 from snaploc1? (yes/[no]): yes

# isi snap locks view snaploc1 1

No such lock

Using OneFS RBAC, an enhanced security approach for a site could be to create three OneFS roles on a cluster, each with an increasing realm of trust:

a.  First, an IT ops/helpdesk role with ‘read’ access to the snapshot attributes would permit monitoring and troubleshooting, but no changes:

Snapshot Privilege Permission
ISI_PRIV_SNAPSHOT_ALIAS Read
ISI_PRIV_SNAPSHOT_LOCKS Read
ISI_PRIV_SNAPSHOT_PENDING Read
ISI_PRIV_SNAPSHOT_RESTORE Read
ISI_PRIV_SNAPSHOT_SCHEDULES Read
ISI_PRIV_SNAPSHOT_SETTING Read
ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT Read
ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY Read

b.  Next, a cluster admin role with ‘read’ privileges for ‘ISI_PRIV_SNAPSHOT_LOCKS’ and ‘ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT’ would prevent snapshot and lock deletion, but provide ‘write’ access for schedule configuration, restores, etc.

Snapshot Privilege Permission
ISI_PRIV_SNAPSHOT_ALIAS Write
ISI_PRIV_SNAPSHOT_LOCKS Read
ISI_PRIV_SNAPSHOT_PENDING Write
ISI_PRIV_SNAPSHOT_RESTORE Write
ISI_PRIV_SNAPSHOT_SCHEDULES Write
ISI_PRIV_SNAPSHOT_SETTING Write
ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT Read
ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY Write

c.  Finally, a cluster security admin role (root equivalence) would provide full snapshot configuration and management, lock control, and deletion rights:

Snapshot Privilege Permission
ISI_PRIV_SNAPSHOT_ALIAS Write
ISI_PRIV_SNAPSHOT_LOCKS Write
ISI_PRIV_SNAPSHOT_PENDING Write
ISI_PRIV_SNAPSHOT_RESTORE Write
ISI_PRIV_SNAPSHOT_SCHEDULES Write
ISI_PRIV_SNAPSHOT_SETTING Write
ISI_PRIV_SNAPSHOT_SNAPSHOTMANAGEMENT Write
ISI_PRIV_SNAPSHOT_SNAPSHOT_SUMMARY Write

Note that when configuring OneFS RBAC, remember to remove the ‘ISI_PRIV_AUTH’ and ‘ISI_PRIV_ROLE’ privileges from all but the most trusted administrators.

Additionally, enterprise security management tools such as CyberArk can also be incorporated to manage authentication and access control holistically across an environment. These can be configured to frequently change passwords on trusted accounts (e.g. every hour), require multi-level approvals prior to retrieving passwords, and track and audit password requests and trends.

While this article focuses exclusively on OneFS snapshots, the expanded use of RBAC granular privileges for enhanced security is germane to most key areas of cluster management and data protection, such as SyncIQ replication, etc.

  4. Snapshot replication

In addition to utilizing snapshots for its own checkpointing system, SyncIQ, the OneFS data replication engine, supports snapshot replication to a target cluster.

OneFS SyncIQ replication policies include an option to trigger replication whenever a snapshot of the source directory is taken. Additionally, when configuring a new policy with the “Whenever a Snapshot of the Source Directory is Taken” option selected, a checkbox appears that enables replication of any existing snapshots in the source directory. More information is available in this SyncIQ paper.

  5. Cyber-vaulting

File data is arguably the most difficult to protect, because:

  • It is the only type of data where potentially all employees have a direct connection to the storage (with other storage types, access is typically via an application).
  • File data is linked (or mounted) to the operating system of the client. This means that it’s sufficient to gain file access to the OS to get access to potentially critical data.
  • Users are the most common breach point.

The Cybersecurity Framework (CSF) from the National Institute of Standards and Technology (NIST) categorizes the security lifecycle, from threat identification through recovery:

Within the ‘Protect’ phase, there are two core aspects:

  • Applying all the core protection features available on the OneFS platform, namely:
Feature Description
Access control Where the core data protection functions are being executed. Assess who actually needs write access.
Immutability Having immutable snapshots, replica versions, etc. Augmenting backup strategy with an archiving strategy with SmartLock WORM.
Encryption Encrypting both data in-flight and data at rest.
Anti-virus Integrating with anti-virus/anti-malware protection that does content inspection.
Security advisories Dell Security Advisories (DSA) inform about fixes to common vulnerabilities and exposures.

 

  • Data isolation provides a last resort copy of business critical data, and can be achieved by using an air gap to isolate the cyber vault copy of the data. The vault copy is logically separated from the production copy of the data. Data syncing happens only intermittently by closing the airgap after ensuring there are no known issues.

The combination of OneFS snapshots and SyncIQ replication allows for granular data recovery. This means that only the affected files are recovered, while the most recent changes are preserved for the unaffected data. While an on-prem air-gapped cyber vault can still provide secure network isolation, in the event of an attack, the ability to failover to a fully operational ‘clean slate’ remote site provides additional security and peace of mind.

We’ll explore PowerScale cyber protection and recovery in more depth in a future article.