OneFS Customizable CELOG Alerts

Another feature enhancement introduced in the new OneFS 9.1 release is customizable CELOG event thresholds. This new functionality allows cluster administrators to customize the alerting thresholds for several filesystem capacity-based events. These configurable events and their default threshold values include:

SYS_DISK_VARFULL – /var partition near capacity – info 75%, warn 85%, crit 90%
SYS_DISK_VARCRASHFULL – /var/crash partition near capacity – warn 90%
SYS_DISK_ROOTFULL – /(root) partition near capacity – warn 90%, crit 95%
SYS_DISK_POOLFULL – nodepool near capacity – info 70%, warn 80%, crit 90%, emerg 97%
SYS_DISK_SSDFULL – SSD drive near capacity – info 75%, warn 85%, crit 90%
SNAP_RESERVE_FULL – snapshot reserve near capacity – warn 90%, crit 99%
FILESYS_FDUSAGE – open file descriptors near capacity – info 85%, warn 90%, crit 95%

These event thresholds can be easily set from the OneFS WebUI, CLI, or platform API. For configuration via the WebUI, browse to Cluster Management > Events and Alerts > Thresholds, as follows:

The desired event can be configured from the OneFS WebUI by clicking on the associated ‘Edit Thresholds’ button. For example, to lower the thresholds for the FILESYS_FDUSAGE event critical threshold from 95 to 92%:
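For reference, the same threshold change can also be made from the CLI using the ‘isi event thresholds modify’ syntax covered later in this article. A quick sketch, using the FILESYS_FDUSAGE event ID (800010006):

# isi event thresholds modify 800010006 --crit 92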

Note that an event’s thresholds cannot be set to equal values. Additionally, the informational threshold must be lower than the warning threshold, and the critical threshold must be higher than the warning threshold. For example:

Alternatively, event threshold configuration can also be performed via the OneFS CLI ‘isi event thresholds’ command set. For example:

The list of configurable CELOG events can be displayed with the following CLI command:

# isi event thresholds list
ID ID Name
-------------------------------
100010001 SYS_DISK_VARFULL
100010002 SYS_DISK_VARCRASHFULL
100010003 SYS_DISK_ROOTFULL
100010015 SYS_DISK_POOLFULL
100010018 SYS_DISK_SSDFULL
600010005 SNAP_RESERVE_FULL
800010006 FILESYS_FDUSAGE
-------------------------------

Full details, including the thresholds, are shown with the addition of the ‘-v’ verbose flag:

# isi event thresholds list -v
ID: 100010001
ID Name: SYS_DISK_VARFULL
Description: Percentage at which /var partition is near capacity
Defaults: info (75%), warn (85%), crit (90%)
Thresholds: info (75%), warn (85%), crit (90%)
--------------------------------------------------------------------------------
ID: 100010002
ID Name: SYS_DISK_VARCRASHFULL
Description: Percentage at which /var/crash partition is near capacity
Defaults: warn (90%)
Thresholds: warn (90%)
--------------------------------------------------------------------------------
ID: 100010003
ID Name: SYS_DISK_ROOTFULL
Description: Percentage at which /(root) partition is near capacity
Defaults: warn (90%), crit (95%)
Thresholds: warn (90%), crit (95%)
--------------------------------------------------------------------------------
ID: 100010015
ID Name: SYS_DISK_POOLFULL
Description: Percentage at which a nodepool is near capacity
Defaults: info (70%), warn (80%), crit (90%), emerg (97%)
Thresholds: info (70%), warn (80%), crit (90%), emerg (97%)
--------------------------------------------------------------------------------
ID: 100010018
ID Name: SYS_DISK_SSDFULL
Description: Percentage at which an SSD drive is near capacity
Defaults: info (75%), warn (85%), crit (90%)
Thresholds: info (75%), warn (85%), crit (90%)
--------------------------------------------------------------------------------
ID: 600010005
ID Name: SNAP_RESERVE_FULL
Description: Percentage at which snapshot reserve space is near capacity
Defaults: warn (90%), crit (99%)
Thresholds: warn (90%), crit (99%)
--------------------------------------------------------------------------------
ID: 800010006
ID Name: FILESYS_FDUSAGE
Description: Percentage at which the system is near capacity for open file descriptors
Defaults: info (85%), warn (90%), crit (95%)
Thresholds: info (85%), warn (90%), crit (95%)

Similarly, the following CLI syntax can be used to display the existing thresholds for a particular event – in this case the SYS_DISK_VARFULL /var partition full alert:

# isi event thresholds view 100010001

         ID: 100010001

    ID Name: SYS_DISK_VARFULL

Description: Percentage at which /var partition is near capacity

   Defaults: info (75%), warn (85%), crit (90%)

 Thresholds: info (75%), warn (85%), crit (90%)

The following command will reconfigure the thresholds from the defaults of 75%|85%|90% to 70%|75%|85%:

# isi event thresholds modify 100010001 --info 70 --warn 75 --crit 85

# isi event thresholds view 100010001

         ID: 100010001

    ID Name: SYS_DISK_VARFULL

Description: Percentage at which /var partition is near capacity

   Defaults: info (75%), warn (85%), crit (90%)

 Thresholds: info (70%), warn (75%), crit (85%)

And finally, to reset the thresholds back to their default values:

#  isi event thresholds reset 100010001

Are you sure you want to reset info, warn, crit from event 100010001?? (yes/[no]): yes

# isi event thresholds view 100010001

         ID: 100010001

    ID Name: SYS_DISK_VARFULL

Description: Percentage at which /var partition is near capacity

   Defaults: info (75%), warn (85%), crit (90%)

 Thresholds: info (75%), warn (85%), crit (90%)

Configuring OneFS SyncIQ Encryption

Unlike previous OneFS versions, SyncIQ is disabled by default in OneFS 9.1 and later. Once SyncIQ has been enabled by the cluster admin, a global encryption flag is automatically set, requiring all SyncIQ policies to be encrypted. Similarly, when upgrading a PowerScale cluster to OneFS 9.1, the global encryption flag is also set. However, be aware that the flag is not enabled upon upgrade to OneFS 9.1 or later on clusters that already have existing SyncIQ policies configured.
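For reference, the SyncIQ service itself can be enabled from the CLI via its global service setting. A minimal sketch, assuming the ‘--service’ option of the ‘isi sync settings’ command set:

# isi sync settings modify --service=on

# isi sync settings view | grep -i service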

The following procedure can be used to configure SyncIQ encryption from the OneFS CLI:

  1. Ensure both source and target clusters are running OneFS 8.2 or later.
  2. Next, create X.509 certificates, one for each of the source and target clusters, signed by a certificate authority:
Certificate Type             Abbreviation
Certificate Authority        <ca_cert_id>
Source Cluster Certificate   <src_cert_id>
Target Cluster Certificate   <tgt_cert_id>

These can be generated using publicly available tools, such as OpenSSL: http://slproweb.com/products/Win32OpenSSL.html.
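As an illustration, a self-signed CA and a signed source cluster certificate could be generated with OpenSSL along the following lines. This is a sketch only: the file names, subject names, and validity periods are purely illustrative, and the same signing steps would be repeated for the target cluster certificate:

# openssl genrsa -out ca.key 4096

# openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=SyncIQ-CA"

# openssl genrsa -out source.key 2048

# openssl req -new -key source.key -out source.csr -subj "/CN=source-cluster"

# openssl x509 -req -in source.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -days 365 -out source.crt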

  3. Add the newly created certificates to the appropriate source cluster stores. Each cluster gets the certificate authority’s certificate, its own certificate, and its peer’s certificate:
# isi sync certificates server import <src_cert_id> <src_key>

# isi sync certificates peer import <tgt_cert_id>

# isi cert authority import <ca_cert_id>
  4. On the source cluster, set the SyncIQ cluster certificate:
# isi sync settings modify --cluster-certificate-id=<src_cert_id>
  5. Add the certificates to the appropriate target cluster stores:
# isi sync certificates server import <tgt_cert_id> <tgt_key>

# isi sync certificates peer import <src_cert_id>

# isi cert authority import <ca_cert_id>
  6. On the target cluster, set the SyncIQ cluster certificate:
# isi sync settings modify --cluster-certificate-id=<tgt_cert_id>
  7. A global option is available in OneFS 9.1 that requires all incoming and outgoing SyncIQ policies to be encrypted. Be aware that executing this command impacts any existing SyncIQ policies that do not have encryption enabled: those policies will fail. Only execute this command once all existing policies have encryption enabled. To enable it, run the following:
# isi sync settings modify --encryption-required=True
  8. On the source cluster, create an encrypted SyncIQ policy:
# isi sync policies create <pol_name> sync <src_dir> <target_ip> <tgt_dir> --target-certificate-id=<tgt_cert_id>

Or modify an existing policy on the source cluster:

# isi sync policies modify <pol_name> --target-certificate-id=<tgt_cert_id>
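Either way, the policy can be sanity checked by viewing it and confirming that a target certificate ID is now associated with it. For example (a sketch):

# isi sync policies view <pol_name> | grep -i certificate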

OneFS 9.1 also facilitates SyncIQ encryption configuration via the OneFS WebUI, in addition to the CLI. For the source, server certificates can be added and managed by navigating to Data Protection > SyncIQ > Settings and clicking on the ‘add certificate’ button:

And certificates can be imported onto the target cluster by browsing to Data Protection > SyncIQ > Certificates and clicking on the ‘add certificate’ button. For example:

So that’s what’s required to get encryption configured across a pair of clusters. There are several additional optional encryption configuration parameters available. These include:

  • Updating the policy to use a specified SSL cipher suite:
# isi sync policies modify <pol_name> --encryption-cipher-list=<suite>
  • Configuring the target cluster to check the revocation status of incoming certificates:
# isi sync settings modify --ocsp-address=<address> --ocsp-issuer-certificate-id=<ca_cert_id>
  • Modifying how frequently encrypted connections are renegotiated on a cluster:
# isi sync settings modify --renegotiation-period=24H
  • Requiring that all incoming and outgoing SyncIQ policies are encrypted:
# isi sync settings modify --encryption-required=True

To troubleshoot SyncIQ encryption, first check the reports for the SyncIQ policy in question. The reason for the failure should be indicated in the report. If the issue was due to a TLS authentication failure, then the error message from the TLS library will also be provided in the report. Also, more detailed information can often be found in /var/log/messages on the source and target clusters, including:

  • ID of the certificate that caused the failure.
  • Subject name of the certificate that caused the failure.
  • Depth at which the failure occurred in the certificate chain.
  • Error code and reason for the failure.
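For example, the relevant reports and log entries can be pulled up with something along the following lines (the policy name and job ID are placeholders):

# isi sync reports list

# isi sync reports view <pol_name> <job_id>

# grep -i certificate /var/log/messages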

Before enabling SyncIQ encryption, be aware of the potential performance implications. While encryption only adds minimal overhead to the transmission, it may still negatively impact a production workflow. Be sure to test encrypted replication in a lab environment that emulates the production environment before deploying it in production.

Note that both the source and target cluster must be upgraded and committed to OneFS 8.2 or later, prior to configuring SyncIQ encryption.

In the event that SyncIQ encryption needs to be disabled, be aware that this can only be performed via the CLI and not the WebUI:

# isi sync settings modify --encryption-required=false

If encryption is disabled under OneFS 9.1, the following warnings will be displayed on creating a SyncIQ policy.

From the WebUI:

And via the CLI:

# isi sync policies create pol2 sync /ifs/data 192.168.1.2 /ifs/data/pol1

********************************************

WARNING: Creating a policy without encryption is dangerous.

Are you sure you want to create a SyncIQ policy without setting encryption?

Your data could be vulnerable without encrypted protection.

Type ‘confirm create policy’ to proceed.  Press enter to cancel:

OneFS SyncIQ and Encrypted Replication

Introduced in OneFS 9.1, SyncIQ encryption is integral in protecting data in-flight during inter-cluster replication over the WAN. This helps prevent man-in-the-middle attacks, mitigating remote replication security concerns and risks.

SyncIQ encryption helps to secure data transfer between OneFS clusters, benefiting customers who undergo regular security audits and/or government regulations.

  • SyncIQ policies support end-to-end encryption for cross-cluster communications.
  • Certificates are easy to manage with the SyncIQ certificate store.
  • Certificate revocation is supported through the use of an external OCSP responder.
  • Clusters can now require that all incoming and outgoing SyncIQ policies be encrypted, via a simple configuration change in the SyncIQ global settings.

SyncIQ encryption relies on cryptography, using a public and private key pair to encrypt and decrypt replication sessions. These keys are mathematically related: data encrypted with one key can only be decrypted with the other key, confirming the identity of each cluster. SyncIQ uses the common X.509 Public Key Infrastructure (PKI) standard, which defines certificate requirements.

A Certificate Authority (CA) serves as a trusted third party that issues and revokes certificates. Each cluster’s certificate store holds the CA certificate, its own certificate, and the peer’s certificate, establishing a trusted ‘passport’ mechanism.

A SyncIQ job can attempt either an encrypted or unencrypted handshake:

Under the hood, SyncIQ utilizes TLS protocol version 1.2 and OpenSSL version 1.0.2o. Customers are responsible for creating their own X.509 certificates, and SyncIQ peers must store each other’s end entity certificates. A TLS authentication failure will cause the corresponding SyncIQ job to fail immediately, and a CELOG event notifies the user of a SyncIQ encryption failure.

On the source cluster, the SyncIQ job’s coordinator process passes the target cluster’s public cert to its primary worker (pworker) process. The target monitor and sworker threads receive a list of approved source cluster certs. The pworkers can then establish secure connections with their corresponding sworkers (secondary workers).

SyncIQ traffic encryption is enabled on a per-policy basis. The CLI includes the ‘isi certificates’ and ‘isi sync certificates’ commands for the configuration of TLS certificates:

# isi cert -h

Description:

    Configure cluster TLS certificates.

Required Privileges:

    ISI_PRIV_CERTIFICATE

Usage:

    isi certificate <subcommand>

        [--timeout <integer>]

        [{--help | -h}]

Subcommands:

  Certificate Management:

    authority    Configure cluster TLS certificate authorities.

    server       Configure cluster TLS server certificates.

    settings     Configure cluster TLS certificate settings.

The following policy configuration fields are included:

Config Field Detail
--target-certificate-id <string> The ID of the target cluster certificate being used for encryption.
--ocsp-issuer-certificate-id <string> The ID of the certificate authority that issued the certificate whose revocation status is being checked.
--ocsp-address <string> The address of the OCSP responder to which to connect.
--encryption-cipher-list <string> The cipher list being used with encryption. For SyncIQ targets, this list serves as the set of supported ciphers. For SyncIQ sources, the ciphers are attempted in the order listed.

In order to configure a policy for encryption, the ‘--target-certificate-id’ must be specified. The user inputs the ID of the desired certificate, as defined in the certificate manager. If self-signed certificates are being utilized, they must first have been manually copied to the peer cluster’s certificate store.
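The certificate IDs themselves can be retrieved by listing the contents of the relevant certificate stores. A sketch, assuming the ‘list’ subcommands of the certificate command sets shown above:

# isi sync certificates peer list

# isi certificate authority list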

For authentication, there is a strict comparison of the public certs to the expected values. If a cert chain (that has been signed by the CA) is selected to authenticate the connection, the chain of certificates will need to be added to the cluster’s certificate authority store. Both methods use the ‘SSL_VERIFY_FAIL_IF_NO_PEER_CERT’ option when establishing the SSL context. Note that once encryption is enabled (by setting the appropriate policy fields), modification of the certificate IDs is allowed. However, removal and reverting to unencrypted syncs will prompt for confirmation before proceeding.

We’ll take a look at the SyncIQ encryption configuration procedures and options in the second article of this series.

OneFS Fast Reboots

As part of engineering’s on-going PowerScale ‘always-on’ initiative, OneFS offers a fast reboot service that focuses on decreasing the duration, and lessening the impact, of planned node reboots on clients. It does this by automatically reducing the size of the lock cache on all nodes before a group change event.

By shortening group change window times, this new faster reboot service is extremely advantageous for cluster upgrades and planned shutdowns, helping to reduce the window of unavailability for clients connected to a rebooting node.

The fast reboot service is automatically enabled on installation or upgrade to OneFS 9.1, and it requires no further configuration. However, be aware that for upgrades it will only begin to apply when moving from OneFS 9.1 to a future release.

Under the hood, this feature works by proactively de-staging all the lock management work, and removing it from the client latency path. This means that the time taken during group change activity – handling the locks, negotiating which coordinator has which lock, etc – is moved to an earlier window of time in the process. So, for example, for a planned cluster reboot or shutdown, instead of doing a lock dance during the group change window, the lazy lock queue is proactively drained for a period of up to 5 minutes, in order to move that activity to earlier in the process. This directly benefits OneFS upgrades, by shrinking the time for the actual group change. For a typical size cluster, this is reduced to approximately 1 second – down from around 17 seconds in prior releases. And engineering have been testing this feature with up to 5 million locks per domain.

There are several useful new and updated sysctls that indicate the status of the reboot service.

Firstly, efs.gmp.group has been enhanced to include both reboot and draining fields, that confirm which node(s) the reboot service is active on, and whether locks are being drained:

# sysctl efs.gmp.group
efs.gmp.group: <35baa7> (3) :{ 1-3:0-5, nfs: 3, isi_cbind_d: 1-3, lsass: 1-3, drain: 1, reboot: 1 }

To complement this, the lki_draining sysctl confirms whether draining is still occurring:

# sysctl efs.lk.lki_draining

efs.lk.lki_draining: 1

OneFS has around 20 different lock domains, each with its own queue. These queues each contain lazy locks, which are locks that are not currently in use, but are just being held by the node in case it needs to use them again.

The stats from the various lock domain queues are aggregated and displayed as a current total by the lazy_queue_size sysctl:

# sysctl efs.lk.lazy_queue_size

efs.lk.lazy_queue_size: 460658

And finally, the lazy_queue_above_reboot sysctl indicates whether any of the lazy queues are above their reboot threshold:

# sysctl efs.lk.lazy_queue_above_reboot

efs.lk.lazy_queue_above_reboot: 0

In addition to the sysctls, and to aid with troubleshooting and debugging, the reboot service writes its status information about the locks being drained, etc, to /var/log/isi_shutdown.log.

As we can see in the first example, the node has activated the reboot service and is waiting for the lazy queues to be drained. And these messages are printed every 60 seconds until complete.

Once done, a log message is then written confirming that the lazy queues have been drained, and that the node is about to reboot or shutdown.
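The log can also simply be tailed during a planned reboot or shutdown to follow the lock draining activity in real time. For example:

# tail -f /var/log/isi_shutdown.log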

So there you have it – the new faster reboot service and low-impact group changes, completing the next milestone in the OneFS ‘always on’ journey.

Introducing OneFS 9.1

Dell PowerScale OneFS version 9.1 has been released and is now generally available for download and cluster installation and upgrade.

This new OneFS 9.1 release embraces the PowerScale tenets of simplified management, increased performance, and extended flexibility, and introduces the following new features:

  • CAVA-based anti-virus support
  • Granular configuration of node and cluster-level events and alerting
  • Improved restart of backups for better RTO and RPO
  • Faster performance for access to CloudPools tiered files
  • Faster detection and resolution of node or resource unavailability
  • Flexible audit configuration for compliance and business needs
  • Encryption of replication traffic for increased security
  • Simplified in-product license activation for clusters connected via SRS

We’ll be looking more closely at this new OneFS 9.1 functionality in forthcoming blog articles.

OneFS SmartDedupe – Assessment & Estimation

To complement the actual SmartDedupe job, a dry-run Dedupe Assessment job is also provided to help estimate the amount of space savings that will be seen by running deduplication on a particular directory or set of directories. The dedupe assessment job reports a total potential space savings. The assessment does not differentiate between a fresh run and a run against a directory where a previous dedupe job has already shared some blocks, nor does it report incremental differences between instances of the job. It is therefore recommended to run the assessment job once on a specific directory prior to starting an actual dedupe job on that directory.

The assessment job runs similarly to the actual dedupe job, but uses a separate configuration. It also does not require a product license and can be run prior to purchasing SmartDedupe in order to determine whether deduplication is appropriate for a particular data set or environment. This can be configured from the WebUI by browsing to File System > Deduplication > Settings and adding the desired directory path(s) in the ‘Assess Deduplication’ section.


Alternatively, the following CLI syntax will achieve the same result:

# isi dedupe settings modify --add-assess-paths /ifs/data

Once the assessment paths are configured, the job can be run from either the CLI or WebUI. For example:

Or, from the CLI:

# isi job types list | grep -i assess

DedupeAssessment   Yes      LOW  

# isi job jobs start DedupeAssessment

Once the job is running, its progress can be viewed by first listing the job to determine its job ID:

# isi job jobs list

ID   Type             State   Impact  Pri  Phase  Running Time

---------------------------------------------------------------

919  DedupeAssessment Running Low     6    1/1    -

---------------------------------------------------------------

Total: 1

And then viewing the job ID as follows:

# isi job jobs view 919

               ID: 919

             Type: DedupeAssessment

            State: Running

           Impact: Low

           Policy: LOW

              Pri: 6

            Phase: 1/1

       Start Time: 2019-01-21T21:59:26

     Running Time: 35s

     Participants: 1, 2, 3

         Progress: Iteration 1, scanning files, scanned 61 files, 9 directories, 4343277 blocks, skipped 304 files, sampled 271976 blocks, deduped 0 blocks, with 0 errors and 0 unsuccessful dedupe attempts

Waiting on job ID: -

      Description: /ifs/data

The running job can also be controlled and monitored from the WebUI:

Under the hood, the dedupe assessment job uses a separate index table from the actual dedupe process. Plus, for the sake of efficiency, the assessment job also samples fewer candidate blocks than the main dedupe job, and obviously does not actually perform deduplication. This means that, often, the assessment will provide a slightly conservative estimate of the actual deduplication efficiency that’s likely to be achieved.

Using the sampling and consolidation statistics, the assessment job provides a report which estimates the total dedupe space savings in bytes. This can be viewed from the CLI using the following syntax:

# isi dedupe reports view 919

    Time: 2020-09-21T22:02:18

  Job ID: 919

Job Type: DedupeAssessment

 Reports

        Time: 2020-09-21T22:02:18

     Results:

Dedupe job report:{

    Start time = 2020-Sep-21:21:59:26

    End time = 2020-Sep-21:22:02:15

    Iteration count = 2

    Scanned blocks = 9567123

    Sampled blocks = 383998

    Deduped blocks = 2662717

    Dedupe percent = 27.832

    Created dedupe requests = 134004

    Successful dedupe requests = 134004

    Unsuccessful dedupe requests = 0

    Skipped files = 328

    Index entries = 249992

    Index lookup attempts = 249993

    Index lookup hits = 1

}

Elapsed time:                      169 seconds

Aborts:                              0

Errors:                              0

Scanned files:                      69

Directories:                        12

1 path:

/ifs/data

CPU usage:                         max 81% (dev 1), min 0% (dev 2), avg 17%

Virtual memory size:               max 341652K (dev 1), min 297968K (dev 2), avg 312344K

Resident memory size:              max 45552K (dev 1), min 21932K (dev 3), avg 27519K

Read:                              0 ops, 0 bytes (0.0M)

Write:                             4006510 ops, 32752225280 bytes (31235.0M)

Other jobs read:                   0 ops, 0 bytes (0.0M)

Other jobs write:                  41325 ops, 199626240 bytes (190.4M)

Non-JE read:                       1 ops, 8192 bytes (0.0M)

Non-JE write:                      22175 ops, 174069760 bytes (166.0M)

Or from the WebUI, by browsing to Cluster Management > Job Operations > Job Types:

As shown above, the assessment report for job 919 in this case indicates a potential data saving of 27.8% from deduplication.

Note that the SmartDedupe dry-run estimation job can be run without any licensing requirements, allowing an assessment of the potential space savings that a dataset might yield before making the decision to purchase the full product.

OneFS SmartDedupe – Performance Considerations

As with many things in life, deduplication is a compromise. In order to gain increased levels of storage efficiency, additional cluster resources (CPU, memory and disk IO) are utilized to find and execute the sharing of common data blocks.

Another important performance impact consideration with dedupe is the potential for data fragmentation. After deduplication, files that previously enjoyed contiguous on-disk layout will often have chunks spread across less optimal file system regions. This can lead to slightly increased latencies when accessing these files directly from disk, rather than from cache.

To help reduce this risk, SmartDedupe will not share blocks across node pools or data tiers, and will not attempt to deduplicate files smaller than 32KB in size. On the other end of the spectrum, the largest contiguous region that will be matched is 4MB.

Because deduplication is a data efficiency product rather than a performance-enhancing tool, in most cases the consideration will be around managing cluster impact. This applies both to client data access performance, since, by design, multiple files will be sharing common data blocks, and to the dedupe job execution itself, as additional cluster resources are consumed to detect and share commonality.

The first deduplication job run will often take a substantial amount of time to run, since it must scan all files under the specified directories to generate the initial index and then create the appropriate shadow stores. However, deduplication job performance will typically improve significantly on the second and subsequent job runs (incrementals), once the initial index and the bulk of the shadow stores have already been created.

If incremental deduplication jobs do take a long time to complete, this is most likely indicative of a data set with a high rate of change. If a deduplication job is paused or interrupted, it will automatically resume the scanning process from where it left off.

As mentioned previously, deduplication is a long running process that involves multiple job phases that are run iteratively. SmartDedupe typically processes around 1TB of data per day, per node.

Deduplication can significantly increase the storage efficiency of data. However, the actual space savings will vary depending on the specific attributes of the data itself. As mentioned above, the deduplication assessment job can be run to help predict the likely space savings that deduplication would provide on a given data set.

For example, virtual machines files often contain duplicate data, much of which is rarely modified. Deduplicating similar OS type virtual machine images (VMware VMDK files, etc, that have been block-aligned) can significantly decrease the amount of storage space consumed. However, the potential for performance degradation as a result of block sharing and fragmentation should be carefully considered first.

OneFS SmartDedupe does not deduplicate across files that have different protection settings. For example, if two files share blocks, but file1 is parity protected at +2:1, and file2 has its protection set at +3, SmartDedupe will not attempt to deduplicate them. This ensures that all files and their constituent blocks are protected as configured.  Additionally, SmartDedupe won’t deduplicate files that are stored on different node pools. For example, if file1 and file2 are stored on tier 1 and tier 2 respectively, and tier1 and tier2 are both protected at +2:1, OneFS won’t deduplicate them. This helps guard against performance asymmetry, where some of a file’s blocks could live on a different tier, or class of storage, than others.

OneFS performance resource management provides statistics for the resources used by jobs – both cluster-wide and per-node. This information is provided via the ‘isi statistics workload’ CLI command. Available in a ‘top’ format, this command displays the top jobs and processes, and periodically updates the information.

For example, the following syntax shows, and indefinitely refreshes, the top five processes on a cluster:

# isi statistics workload --limit 5 --format=top

last update:  2020-09-23T16:45:25 (s)ort: default

CPU  Reads Writes    L2   L3   Node SystemName      JobType

1.4s 9.1k 0.0        3.5k 497.0 2    Job:  237       IntegrityScan[0]

1.2s 85.7 714.7      4.9k 0.0  1    Job:  238       Dedupe[0]

1.2s 9.5k 0.0        3.5k 48.5 1    Job:  237       IntegrityScan[0]

1.2s 7.4k 541.3      4.9k 0.0  3    Job: 238        Dedupe[0]

1.1s 7.9k 0.0        3.5k 41.6 2    Job:  237       IntegrityScan[0]

From the output, we can see that two Job Engine jobs are in progress: Dedupe (job ID 238), which runs at low impact and priority level 4, is contending with IntegrityScan (job ID 237), which runs by default at medium impact and priority level 1.

The resource statistics tracked per job, per job phase, and per node include CPU, reads, writes, and L2 & L3 cache hits. Unlike the output from the ‘top’ command, this makes it easier to diagnose individual job resource issues, etc.

Below are some examples of typical space reclamation levels that have been achieved by running SmartDedupe on various data types. Be aware though that these space savings values are provided solely as rough guidance. Since no two data sets are alike (unless they’re replicated), actual results can and will vary considerably from these examples.

Workflow / Data Type Typical Space Savings
Virtual Machine Data 35%
Home Directories / File Shares 25%
Email Archive 20%
Engineering Source Code 15%
Media Files 10%

SmartDedupe is included as a core component of OneFS but requires a valid product license key in order to activate. An unlicensed cluster will show a SmartDedupe warning until a valid product license has been applied to the cluster.
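The licensing state can be quickly confirmed from the CLI. For example, a sketch assuming the ‘isi license list’ syntax:

# isi license list | grep -i dedupe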

For optimal cluster performance, observing the following SmartDedupe best practices is recommended.

  • Deduplication is most effective when applied to data sets with a low rate of change – for example, archived data.
  • Enable SmartDedupe to run at subdirectory level(s) below /ifs.
  • Avoid adding more than ten subdirectory paths to the SmartDedupe configuration policy.
  • SmartDedupe is ideal for home directories, departmental file shares and warm and cold archive data sets.
  • Run SmartDedupe against a smaller sample data set first to evaluate performance impact versus space efficiency.
  • Schedule deduplication to run during the cluster’s low usage hours – i.e. overnight, weekends, etc.
  • After the initial dedupe job has completed, schedule incremental dedupe jobs to run every two weeks or so, depending on the size and rate of change of the dataset.
  • Always run SmartDedupe with the default ‘low’ impact Job Engine policy.
  • Run the dedupe assessment job on a single root directory at a time. If multiple directory paths are assessed in the same job, you will not be able to determine which directory should be deduplicated.
  • When replicating deduplicated data, to avoid running out of space on target, it is important to verify that the logical data size (i.e. the amount of storage space saved plus the actual storage space consumed) does not exceed the total available space on the target cluster.
  • Run a deduplication job on an appropriate data set prior to enabling a snapshots schedule.
  • Where possible, perform any snapshot restores (reverts) before running a deduplication job. And run a dedupe job directly after restoring a prior snapshot version.

With dedupe, there’s always a trade-off between cluster resource consumption (CPU, memory, disk), the potential for data fragmentation, and the benefit of increased space efficiency. Therefore, SmartDedupe is not ideally suited for high performance workloads.

  • Depending on an application’s I/O profile and the effect of deduplication on the data layout, read and write performance and overall space savings can vary considerably.
  • SmartDedupe will not permit block sharing across different hardware types or node pools to reduce the risk of performance asymmetry.
  • SmartDedupe will not share blocks across files with different protection policies applied.
  • OneFS metadata, including the deduplication index, is not deduplicated.
  • Deduplication is a long running process that involves multiple job phases that are run iteratively.
  • SmartDedupe will not attempt to deduplicate files smaller than 32KB in size.
  • Dedupe job performance will typically improve significantly on the second and subsequent job runs, once the initial index and the bulk of the shadow stores have already been created.
  • SmartDedupe will not deduplicate the data stored in a snapshot. However, snapshots can certainly be created of deduplicated data.
  • If deduplication is enabled on a cluster that already has a significant amount of data stored in snapshots, it will take time before the snapshot data is affected by deduplication. Newly created snapshots will contain deduplicated data, but older snapshots will not.
  • Any file on a cluster that is ‘un-deduped’ is automatically marked to ‘not re-dupe’. In order to reapply deduplication to an un-deduped file, the ‘do not dedupe’ flag on the file needs to be cleared. For example, to check the current setting:

    # isi get -D /ifs/data/test | grep -i dedupe

    *  Do not dedupe:      0

    Un-dedupe the file via isi_sstore:

    # isi_sstore undedupe /ifs/data/test

    Verify the setting:

    # isi get -D /ifs/data/test | grep -i dedupe

    *  Do not dedupe:      1

    To allow the file to participate in dedupe again, reset the ‘Do not dedupe’ flag via isi_sstore attr:

    # isi_sstore attr --no_dedupe=false <path>

SmartDedupe is one of several components of OneFS that enable OneFS to deliver a very high level of raw disk utilization. Another major storage efficiency attribute is the way that OneFS natively manages data protection in the file system. Unlike most file systems that rely on hardware RAID, OneFS protects data at the file level and, using software-based erasure coding, allows most customers to enjoy raw disk space utilization levels in the 80% range or higher. This is in contrast to the industry mean of around 50-60% raw disk capacity utilization. SmartDedupe serves to further extend this storage efficiency headroom, bringing an even more compelling and demonstrable TCO advantage to primary file based storage.

SmartDedupe post-process deduplication is compatible with OneFS in-line data reduction (which we’ll cover in another blog post series) and vice versa. In-line compression is able to compress OneFS shadow stores. However, for SmartDedupe to process compressed data, the SmartDedupe job will have to decompress it first in order to perform deduplication, which incurs additional resource overhead.

OneFS SmartDedupe – Monitoring & Management

As we saw in the previous article in this series, SmartDedupe operates at the directory level, targeting all files and directories underneath one or more root directories.

SmartDedupe not only deduplicates identical blocks in different files, it also matches and shares identical blocks within a single file. For two or more files to be deduplicated, the two following attributes must be the same:

  • Disk pool policy ID
  • Protection policy

If either of these attributes differs between two or more matching files, their common blocks will not be shared. SmartDedupe also does not deduplicate files that are 32 KB or smaller, because the resource consumption overhead outweighs the small storage efficiency benefit.

There are two principal elements to managing deduplication in OneFS. The first is the configuration of the SmartDedupe process itself. The second involves the scheduling and execution of the Dedupe job. These are both described below.

SmartDedupe works on data sets which are configured at the directory level, targeting all files and directories under each specified root directory. Multiple directory paths can be specified as part of the overall deduplication job configuration and scheduling.

Similarly, the dedupe directory paths can also be configured from the CLI via the isi dedupe settings modify command. For example, the following command targets /ifs/data and /ifs/home for deduplication:

# isi dedupe settings modify --paths /ifs/data,/ifs/home

Bear in mind that the permissions required to configure and modify deduplication settings are separate from those needed to run a deduplication job. For example, a user’s role must have job engine privileges to run a deduplication job. However, in order to configure and modify dedupe configuration settings, they must have the deduplication role privileges.

SmartDedupe can be run either on-demand (started manually) or via a predefined schedule. This is configured via the cluster management ‘Job Operations’ section of the WebUI.

The recommendation is to schedule and run deduplication during off-hours, when the rate of data change on the cluster is low. If clients are continually writing to files, the amount of space saved by deduplication will be minimal because the deduplicated blocks are constantly being removed from the shadow store.

To modify the parameters of the dedupe job itself, run the isi job types modify command. For example, the following command configures the deduplication job to be run every Saturday at 12:00 AM:

# isi job types modify Dedupe --schedule "Every Saturday at 12:00 AM"

For most clusters, after the initial deduplication job has completed, the recommendation is to run an incremental deduplication job once every two weeks.
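As with the assessment job, an incremental dedupe job can also be kicked off manually on demand. For example:

# isi job jobs start Dedupe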

The amount of disk space currently saved by SmartDedupe can be determined by viewing the cluster capacity usage chart and deduplication reports summary table in the WebUI. The cluster capacity chart and deduplication reports can be found by navigating to File System Management > Deduplication > Summary.

In addition to the bar chart and accompanying statistics (above), which graphically represent the data set and space efficiency in actual capacity terms, the dedupe job report overview field also displays the SmartDedupe savings as a percentage.

SmartDedupe space efficiency metrics are also provided via the ‘isi dedupe stats’ CLI command:

# isi dedupe stats

      Cluster Physical Size: 676.8841T

          Cluster Used Size: 236.3181T

  Logical Size Deduplicated: 29.2562T

             Logical Saving: 25.5125T

Estimated Size Deduplicated: 42.5774T

  Estimated Physical Saving: 37.1290T

In OneFS 8.2.1 and later, SmartQuotas has been enhanced to report the capacity saving from deduplication, and data reduction in general, as a storage efficiency ratio. SmartQuotas reports efficiency as a ratio across the desired data set as specified in the quota path field. The efficiency ratio is for the full quota directory and its contents, including any overhead, and reflects the net efficiency of compression and deduplication. On a cluster with licensed and configured SmartQuotas, this efficiency ratio can be easily viewed from the WebUI by navigating to ‘File System > SmartQuotas > Quotas and Usage’.

Similarly, the same data can be accessed from the OneFS command line via the ‘isi quota quotas list’ CLI command. For example:

# isi quota quotas list

Type      AppliesTo  Path           Snap  Hard  Soft  Adv  Used    Efficiency

-----------------------------------------------------------------------------

directory DEFAULT    /ifs           No    -     -     -    2.3247T 1.29 : 1

-----------------------------------------------------------------------------

Total: 1

More detail, including both the physical (raw) and logical (effective) data capacities, is also available via the ‘isi quota quotas view <path> <type>’ CLI command. For example:

# isi quota quotas view /ifs directory

                        Path: /ifs

                        Type: directory

                   Snapshots: No

 Thresholds Include Overhead: No

                       Usage

                           Files: 4245818

         Physical(With Overhead): 1.80T

           Logical(W/O Overhead): 2.33T

Efficiency(Logical/Physical): 1.29 : 1

…

To configure SmartQuotas for data efficiency reporting, create a directory quota at the top-level file system directory of interest, for example /ifs. Creating and configuring a directory quota is a simple procedure and can be performed from the WebUI, as follows:

Navigate to ‘File System > SmartQuotas > Quotas and Usage’ and select ‘Create a Quota’. In the create pane, set the Quota type to ‘Directory quota’, add the preferred top-level path to report on, select ‘File system logical size’ for Quota Accounting, and set the Quota Limits to ‘Track storage without specifying a storage limit’. Finally, select the ‘Create Quota’ button to confirm the configuration and activate the new directory quota.
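For reference, a roughly equivalent tracking-only directory quota can also be created from the CLI. A sketch only, assuming the ‘--thresholds-on’ accounting option available in recent OneFS releases:

# isi quota quotas create /ifs directory --thresholds-on=fslogicalsize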

The efficiency ratio is a single, point-in-time metric that is calculated per quota directory and includes the combined savings of SmartDedupe plus in-line data reduction. This is in contrast to a history of stats over time, as reported in the ‘isi statistics data-reduction’ CLI command output. As such, the efficiency ratio for the entire quota directory reflects what is actually there. This information is also available via the platform API as of OneFS 8.2.2.

The OneFS WebUI cluster dashboard also now displays a storage efficiency tile, which shows physical and logical space utilization histograms and reports the capacity saving from in-line data reduction as a storage efficiency ratio. This dashboard view is displayed by default when opening the OneFS WebUI in a browser and can be easily accessed by navigating to ‘File System > Dashboard > Cluster Overview’.

The Job Engine parallel execution framework provides comprehensive run time and completion reporting for the deduplication job.

Once the dedupe job has started working on a directory tree, the resulting space savings it achieves can be monitored in real time. While SmartDedupe is underway, job status is available at a glance via the progress column in the active jobs table. This information includes the number of files, directories and blocks that have been scanned, skipped and sampled, and any errors that may have been encountered.

Additional progress information is provided in an Active Job Details status update, which includes an estimated completion percentage based on the number of logical inodes (LINs) that have been counted and processed.

Once the SmartDedupe job has run to completion, or has been terminated, a full dedupe job report is available. This can be accessed from the WebUI by navigating to Cluster Management > Job Operations > Job Reports, and selecting the ‘View Details’ action button on the desired Dedupe job line item.

The job report contains the following relevant dedupe metrics.

Report Field Description of Metric
Start time When the dedupe job started.
End time When the dedupe job finished.
Scanned blocks Total number of blocks scanned under configured path(s).
Sampled blocks Number of blocks that OneFS created index entries for.
Created dedupe requests Total number of dedupe requests created. A dedupe request is created for each matching pair of data blocks. For example, if three data blocks all match, two requests are created: one to pair file1 and file2, and the other to pair file2 and file3.
Successful dedupe requests Number of dedupe requests that completed successfully.
Failed dedupe requests Number of dedupe requests that failed. If a dedupe request fails, it does not mean that the job also failed. A deduplication request can fail for any number of reasons. For example, the file might have been modified since it was sampled.
Skipped files Number of files that were not scanned by the deduplication job. The primary reason is that the file has already been scanned and hasn’t been modified since. Another reason for a file to be skipped is if it’s less than 32KB in size. Such files are considered too small and don’t provide enough space saving benefit to offset the fragmentation they will cause.
Index entries Number of entries that currently exist in the index.
Index lookup attempts Cumulative total number of lookups that have been done by prior and current deduplication jobs. A lookup is when the deduplication job attempts to match a block that has been indexed with a block that hasn’t been indexed.
Index lookup hits Total number of lookup hits that have been done by earlier deduplication jobs plus the number of lookup hits done by this deduplication job. A hit is a match of a sampled block with a block in index.

Dedupe job reports are also available from the CLI via the ‘isi job reports view <job_id>’ command.

From an execution and reporting stance, the Job Engine considers the ‘dedupe’ job to comprise a single process or phase. The Job Engine events list will report that Dedupe Phase1 has ended and succeeded. This indicates that an entire SmartDedupe job, including all four internal dedupe phases (sampling, duplicate detection, block sharing, & index update), has successfully completed. For example:

# isi job events list --job-type dedupe

Time                Message

------------------------------------------------------

2020-09-01T13:39:32 Dedupe[1955] Running

2020-09-01T13:39:32 Dedupe[1955] Phase 1: begin dedupe

2020-09-01T14:20:32 Dedupe[1955] Phase 1: end dedupe

2020-09-01T14:20:32 Dedupe[1955] Phase 1: end dedupe

2020-09-01T14:20:32 Dedupe[1955] Succeeded

For deduplication reporting across multiple OneFS clusters, SmartDedupe is also integrated with Isilon’s InsightIQ cluster reporting and analysis product. A report detailing the space savings delivered by deduplication is available via InsightIQ’s File Systems Analytics module.

Enable RFC2307 for OneFS and Active Directory

Windows Active Directory (AD) supports authenticating Unix/Linux clients using RFC 2307 attributes (e.g. UID/GID). OneFS is also RFC 2307 compatible, so it is recommended to use Active Directory as the OneFS authentication provider to enable centralized identity management and authentication. This post covers the configuration required to integrate AD and OneFS with RFC 2307 support. Windows 2012 R2 AD and OneFS 8.1.0 are used to illustrate the process.

Prepare Windows 2012R2 AD for Unix/Linux

Unlike Windows 2008, Windows 2012 comes equipped with the UNIX attributes already loaded within the schema. As of this release, the Identity Management for UNIX feature has been deprecated (although it remains available until Windows 2016), and the NIS and Password Synchronization services are not required.

The UI elements to configure RFC2307 attributes are not as convenient as they were in Windows 2008, since the IDMU MMC snap-in has also been deprecated. So we will install the IDMU component first to make it easier to configure the UID/GID attributes. The IDMU components can be installed in Windows 2012 R2 with the following commands.

  • To install the administration tools for Identity Management for UNIX:
dism.exe /online /enable-feature /featurename:adminui /all
  • To install Server for NIS:
dism.exe /online /enable-feature /featurename:nis /all
  • To install Password Synchronization:
dism.exe /online /enable-feature /featurename:psync /all

After restarting the AD server, the UNIX Attributes tab is available, just as in Windows 2008 R2, as shown below. Now you can configure your AD users/groups to be compatible with the Unix/Linux environment. It is recommended to configure UIDs/GIDs of 10000 and above, while avoiding any overlap with the OneFS default auto-assigned UID/GID range (1000000 – 2000000).

Configure the OneFS Active Directory authentication provider to enable RFC2307

For mixed-mode (Unix/Linux/Windows) authentication operations, several advanced options of the Active Directory authentication provider need to be enabled:

  • Services for UNIX: rfc2307 – This leverages the Identity Management for UNIX services in the Active Directory schema.
  • Auto-Assign UIDs: No – By default, OneFS will generate pseudo UIDs for users it cannot match to SIDs, which can cause potential user-mapping issues.
  • Auto-Assign GIDs: No – By default, OneFS will generate pseudo GIDs for groups it cannot match to SIDs; as with user mapping, a group-mapping mismatch could occur.

This can be configured using either the WebUI or the CLI, with the command isi auth ads modify EXAMPLE.LOCAL --sfu-support=rfc2307 --allocate-uids=false --allocate-gids=false. Alternatively, change the settings from the WebUI, shown below:

With the above configuration in place, OneFS can use Active Directory as the identity source for Unix/Linux clients. This also simplifies identity management, since a single central identity source (AD) serves both Unix/Linux and Windows clients.
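To verify that OneFS is resolving the RFC 2307 UID/GID attributes for an AD account, the user’s identity can be inspected from the CLI. For example (the domain and user names here are hypothetical):

# isi auth users view "EXAMPLE\aduser"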

Configure SSH Multi-Factor Authentication on OneFS 8.2 Using Duo

SSH Multi-Factor Authentication (MFA) with Duo is a new feature introduced in OneFS 8.2. Currently, OneFS supports SSH MFA with the Duo service through SMS (short message service), phone callback, and push notification via the Duo Mobile app. This blog covers the configuration required to integrate OneFS SSH MFA with the Duo service.

Duo provides its service to many kinds of applications, such as Microsoft Azure Active Directory, Cisco Webex, and Amazon Web Services. A OneFS cluster is represented as a “Unix Application” entry. To integrate OneFS with the Duo service, configuration is required on both the Duo service and the OneFS cluster. Before configuring OneFS with Duo, you need to have a Duo account. In this blog, a trial account is used for demonstration purposes.

Failback mode

By default, the SSH failback mode for Duo in OneFS is “safe”, which allows normal authentication to proceed if the Duo service is not available. The “secure” mode denies SSH access if the Duo service is not available, including for bypass users, because bypass users are defined and validated in the Duo service. To configure the failback mode in OneFS, specify the --failmode option of the ‘isi auth duo modify’ command.
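For example, a sketch of switching to the stricter behavior:

# isi auth duo modify --failmode=secure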

Exclusion group

By default, all groups are required to use Duo unless a Duo group is configured to bypass Duo authentication. The groups option allows you to exclude specific user groups from Duo authentication. This provides a way to configure users that can still SSH into the cluster even when the Duo service is not available and the failback mode is set to “secure”. Otherwise, all users could be locked out of the cluster in this situation.

To configure an exclusion group, prefix the group name with an exclamation mark “!”, preceded by an asterisk entry so that all other groups still use the Duo service. For example:

# isi auth duo modify --groups="*,!groupname"

Note: the zsh shell requires the “!” to be escaped. In this case, the example above becomes: isi auth duo modify --groups="*,\!groupname"

Prepare Duo service for OneFS

  1. Use your new Duo account to log into the Duo Admin Panel. Select the “Applications” item from the left menu, and then click “Protect an Application”, as shown in Figure 1.
Figure 1 Protect an Application
  2. Type “Unix Application” in the search bar. Click “Protect this Application” to create a new Unix Application entry. See Figure 2.
Figure 2 Search for Unix Application
  3. Scroll down the creation page and find the “Settings” section. Type a name for the new Unix Application. It is recommended to use a name that identifies your OneFS cluster, as shown in Figure 3. In this section you can also find Duo’s username normalization setting. By default, Duo username normalization is not AD aware; it will alter incoming usernames before trying to match them to a user account. For example, “DOMAIN\username”, “username@domain.com”, and “username” are treated as the same user. For other options, refer to the Duo documentation.
Figure 3 Unix Application Name
  4. Check the required information for OneFS under the “Details” section, including the API hostname, integration key, and secret key, as shown in Figure 4.
Figure 4 Required Information for OneFS
  5. Manually enroll a user. In this example, we will create a user named “admin”, which is the default OneFS administrator user. Switch to the “Users” menu item and click the “Add User” button, as shown in Figure 5. For details about user enrollment in the Duo service, refer to the Duo documentation on Enrolling Users.
Figure 5 User Enrollment
  6. Type the user name, as shown in Figure 6.
Figure 6 Manually User Enrollment
  7. Find the “Phones” settings on the user page and click the “Add Phone” button to add a device for the user, as shown in Figure 7.
Figure 7 Add Phone for User
  8. Type your phone number.
Figure 8 Add New Phone
  9. (Optional) If you want to use the Duo Push authentication method, you need to install the Duo Mobile app on the phone and activate it. As highlighted in Figure 9, click the link to activate Duo Mobile.
Figure 9 Activate Duo Mobile

OneFS Configuration and Verification

  1. By default, the authentication settings template is set to “any”. To use OneFS with the Duo service, the authentication settings template must not be set to “any” or “custom”; it should be set to “password”, “publickey”, or “both”. In this example, we configure the setting to “password”, which uses the user’s password plus Duo for SSH MFA:
# isi ssh modify --auth-settings-template=password
  2. Confirm the authentication method using the following command:
# isi ssh settings view | grep "Auth Settings Template"
      Auth Settings Template: password
  3. Configure the required Duo service information and enable it for SSH MFA, as shown below, using the details from the Unix Application we set up in Duo: the API hostname, integration key, and secret key.
# isi auth duo modify --enabled=true --failmode=safe --host=api-13b1ee8c.duosecurity.com --ikey=DIRHW4IRSC7Q4R1YQ3CQ --set-skey

Enter skey:

Confirm:
  4. Verify SSH MFA using the user “admin”. An SMS passcode and the user’s password are used for authentication in this example, as shown in Figure 10.
Figure 10 SSH MFA Verification