Isilon OneFS and Hadoop Known Issues

The following are known issues that exist with OneFS and Hadoop HDFS integrations:

Oozie sharedlib deployment fails with Isilon

The deployment of the oozie shared libraries fails on Ambari 2.7/HDP 3.x against Isilon.

Oozie makes an RPC call to check for erasure coding when deploying the shared libraries. OneFS does not support HDFS erasure coding because it natively uses its own erasure coding for data protection, so the call fails. The Oozie code handles this failure poorly, which causes the deployment of the shared lib to fail.

[root@centos-01 ~]# /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs hdfs://hdp-27.foo.com:8020 -locallib /usr/hdp/3.0.1.0-187/oozie/libserver

  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}

  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}

  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}

  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112

  setting JRE_HOME=${JAVA_HOME}

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"

  setting OOZIE_LOG=/var/log/oozie

  setting CATALINA_PID=/var/run/oozie/oozie.pid

  setting OOZIE_DATA=/hadoop/oozie/data

  setting OOZIE_HTTP_PORT=11000

  setting OOZIE_ADMIN_PORT=11001

  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64

  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "

  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}

  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}

  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}

  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112

  setting JRE_HOME=${JAVA_HOME}

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"

  setting OOZIE_LOG=/var/log/oozie

  setting CATALINA_PID=/var/run/oozie/oozie.pid

  setting OOZIE_DATA=/hadoop/oozie/data

  setting OOZIE_HTTP_PORT=11000

  setting OOZIE_ADMIN_PORT=11001

  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64

  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]

3138 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

4193 [main] INFO org.apache.hadoop.security.UserGroupInformation - Login successful for user oozie/centos-01.foo.com@FOO.COM using keytab file /etc/security/keytabs/oozie.service.keytab

4436 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir

4490 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating Configuration

log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Found Hadoop that supports Erasure Coding. Trying to disable Erasure Coding for path: /user/root/share/lib

Error invoking method with reflection





Error: java.lang.reflect.InvocationTargetException

Stack trace for the error was (for debug purposes):

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

        at org.apache.oozie.tools.ECPolicyDisabler.invokeMethod(ECPolicyDisabler.java:111)

        at org.apache.oozie.tools.ECPolicyDisabler.tryDisableECPolicyForPath(ECPolicyDisabler.java:47)

        at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:171)

        at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:67)

Caused by: java.lang.reflect.InvocationTargetException

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.oozie.tools.ECPolicyDisabler.invokeMethod(ECPolicyDisabler.java:108)

        ... 3 more

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): Unknown RPC: getErasureCodingPolicy

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)

        at org.apache.hadoop.ipc.Client.call(Client.java:1443)

        at org.apache.hadoop.ipc.Client.call(Client.java:1353)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)

        at com.sun.proxy.$Proxy9.getErasureCodingPolicy(Unknown Source)

        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1892)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)

        at com.sun.proxy.$Proxy10.getErasureCodingPolicy(Unknown Source)

        at org.apache.hadoop.hdfs.DFSClient.getErasureCodingPolicy(DFSClient.java:3082)

        at org.apache.hadoop.hdfs.DistributedFileSystem$66.doCall(DistributedFileSystem.java:2884)

        at org.apache.hadoop.hdfs.DistributedFileSystem$66.doCall(DistributedFileSystem.java:2881)

        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

        at org.apache.hadoop.hdfs.DistributedFileSystem.getErasureCodingPolicy(DistributedFileSystem.java:2898)

        ... 8 more
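From the stack trace, Oozie's ECPolicyDisabler calls getErasureCodingPolicy unconditionally and aborts when OneFS answers that the RPC is unknown. A more tolerant client would treat that response as "this filesystem has no HDFS erasure coding to disable." The sketch below illustrates the idea only; the class and function names are hypothetical, not Oozie's actual code:

```python
# Hypothetical sketch, not Oozie's actual code: degrade gracefully when the
# NameNode (here, OneFS) does not implement an RPC, instead of letting the
# RemoteException abort the whole sharelib deployment.

class RpcNoSuchMethodError(Exception):
    """Stands in for org.apache.hadoop.ipc.RpcNoSuchMethodException."""

def get_ec_policy_via_rpc(path):
    # OneFS answers "Unknown RPC: getErasureCodingPolicy" for this call.
    raise RpcNoSuchMethodError("Unknown RPC: getErasureCodingPolicy")

def erasure_coding_policy(path, rpc=get_ec_policy_via_rpc):
    """Return the EC policy for path, or None when the filesystem has no
    notion of HDFS erasure coding (as with OneFS)."""
    try:
        return rpc(path)
    except RpcNoSuchMethodError:
        return None  # "RPC not implemented" means there is no EC to disable

print(erasure_coding_policy("/user/root/share/lib"))  # prints: None
```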
A workaround is to manually copy and unpack oozie-sharelib.tar.gz into /user/oozie/share/lib (for example, extract the archive locally and copy the contents up with hdfs dfs -put as the oozie user).

Cloudera BDR integration with Cloudera Manager Based Isilon Integration

Cloudera CDH with BDR is no longer supported with Isilon; CDH fails to fully integrate BDR with a Cloudera Manager based Isilon cluster.

Upgrading Ambari 2.6.5 to 2.7 – setfacl issue with Hive

Per the following procedure: http://www.unstructureddatatips.com/upgrade-hortonworks-hdp2-6-5-to-hdp3-on-dellemc-isilon-onefs-8-1-2-and-later/

When upgrading from Ambari 2.6.5 to 2.7 with the Hive service installed, the following must be completed prior to the upgrade; otherwise the upgrade process will stall with an Unknown RPC issue as seen below.

 

The Isilon OneFS HDFS service does not support HDFS ACLs, and the resulting setfacl call causes the upgrade to stall.

Add the property dfs.namenode.acls.enabled=false to the custom hdfs-site prior to upgrading; this prevents the upgrade from attempting to use setfacl.
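In Ambari this is added as a Custom hdfs-site property; expressed as an hdfs-site.xml fragment, it is:

```xml
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>false</value>
</property>
```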

Restart any services that need restarting

Execute the upgrade per the procedure; the Hive setfacl issue will not be encountered.

Additional upgrade issues you may see:

– Error mapping uname 'yarn-ats' to uid (fix: create the yarn-ats user: isi auth users create yarn-ats --zone=<hdfs zone>)

– MySQL dependency error (fix: run ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar)

– Ambari Metrics restart issue. Reference: http://www.ryanchapin.com/fv-b-4-818/-SOLVED–Unable-to-Connect-to-ambari-metrics-collector-Issues.html

 

OneFS 8.2 Local Service Accounts need to be ENABLED

With the release of OneFS 8.2, a number of changes were made in the identity management stack. One change required on 8.2 is that local accounts must be in the enabled state to be used for identity; in prior versions, local account IDs could be used with the local account disabled.

In 8.2, all local accounts must be ENABLED to be used for ID management by OneFS:

  • In 8.1.2 and prior, local accounts were functional when disabled
  • On upgrade to 8.2, all accounts should be set to the 'enabled' state
  • Enable all accounts prior to upgrade

The latest version of the create_users script on the isilon_hadoop_tools GitHub will now create enabled users by default.

Enabling an account does not make it interactive-logon capable; these are still just IDs used by Isilon for HDFS ID management.

 

Support for HDP 3.1 with the Isilon Management Pack 1.0.1.0

With the release of the Isilon Management Pack 1.0.1.0, support for HDP 3.1 is included. The procedure to upgrade the mpack is listed here if mpack 1.0.0.1 was installed with HDP 3.0.1.

Before upgrading the mpack, consult the following KB to assess the status of the Kerberized Spark2 services and whether modifications were made to 3.0.1 installs in Ambari: Isilon: Spark2 fails to start after Kerberization with HDP 3 and OneFS due to missing configurations

Upgrade the Isilon Ambari Management Pack

  1. Download the Isilon Ambari Management Pack
  2. Install the management pack by running the following commands on the
    Ambari server:
    
ambari-server upgrade-mpack --mpack=<path-to-new-mpack.tar.gz> --verbose
    
    ambari-server restart

     

How to determine the Isilon Ambari Management Pack version

On the Ambari server host run the following command:

ls /var/lib/ambari-server/resources/mpacks | grep "onefs-ambari-mpack-"

The output will appear similar to this, where x.x.x.x indicates which version of the IAMP is currently installed:

onefs-ambari-mpack-x.x.x.x

How to find the README in Isilon Ambari Management Pack 1.0.1.0

Download the Isilon Ambari Management Pack

  1. Run the following command to extract the contents:
    • tar -zxvf isilon-onefs-mpack-1.0.1.0.tar.gz
  2. The README is located under isilon-onefs-mpack-1.0.1.0/addon-services/ONEFS/1.0.0/support/README
  3. Please review the README for release information.

 

The release of OneFS 8.2 brings changes to Hadoop Cluster Deployment and Setup

Prior to 8.2, the following two configurations were required to support a Hadoop cluster:

  1. Modification to the Access Control List Policy setting for OneFS is no longer needed

We used to run 'isi auth settings acls modify --group-owner-inheritance=parent' to make the OneFS file system act like an HDFS file system. This was a global setting that affected the whole cluster and other workflows. In 8.2 it is no longer needed: HDFS operations behave this way natively, so the setting is not required. Do not run this command when setting up HDFS on new 8.2 clusters. If it was previously set on 8.1.2 or earlier, it is suggested to leave the setting as is, because modifying it can affect other workflows.

  2. hdfs to root mappings are not needed – replaced by RBAC

Prior to 8.2, hdfs => root mappings were required to facilitate the behavior of the hdfs account. In 8.2 this root mapping has been replaced with an RBAC privilege: no root mapping is needed. Instead, create the following RBAC role with the specified privileges and add any account that needs this access.

isi auth roles create --name=hdfs_access --description="Bypass FS permissions" --zone=System
isi auth roles modify hdfs_access --add-priv=ISI_PRIV_IFS_RESTORE --zone=System
isi auth roles modify hdfs_access --add-priv=ISI_PRIV_IFS_BACKUP --zone=System
isi auth roles modify hdfs_access --add-user=hdfs --zone=System
isi auth roles view hdfs_access --zone=System
isi_for_array "isi auth mapping flush --all"
isi_for_array "isi auth cache flush --all"

 

The installation guides will reflect these changes shortly.

Summary:

8.1.2 and earlier:

  • hdfs => root mapping
  • ACL policy change needed

8.2 and later:

  • RBAC role for hdfs
  • No ACL policy change

 

When using Ambari 2.7 and the Isilon Management Pack, the following is seen in the Isilon hdfs.log:

isilon-3: 2019-05-14T14:34:06-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:34:06-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

isilon-3: 2019-05-14T14:35:12-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:35:12-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

isilon-3: 2019-05-14T14:36:17-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:36:17-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

When using Ambari with the Isilon Management Pack, Isilon should not be configured with an Ambari Server or ODP version; these settings are no longer needed once the Management Pack is in use.

If they have been added, remove them from the Isilon HDFS configuration for the zone in question (for example, isi hdfs settings modify --zone=<zone> --ambari-server= --odp-version=). This applies only to Ambari 2.7 with the Isilon Management Pack; Ambari 2.6 and earlier still require these settings.

isilon-1# isi hdfs settings view --zone=zone-hdp27
                 Service: Yes
      Default Block Size: 128M
   Default Checksum Type: none
     Authentication Mode: kerberos_only
          Root Directory: /ifs/zone/hdp27/hadoop-root
         WebHDFS Enabled: Yes
           Ambari Server: -
         Ambari Namenode: hdp-27.foo.com
             Odp Version: -
    Data Transfer Cipher: none
Ambari Metrics Collector: centos-01.foo.com

 

Ambari sees LDAPS issue connecting to AD during Kerberization

05 Apr 2018 20:05:14,081 ERROR [ambari-client-thread-38] KerberosHelperImpl:2379 - Cannot validate credentials: org.apache.ambari.server.serveraction.kerberos.KerberosInvalidConfigurationException: Failed to connect to KDC - Failed to communicate with the Active Directory at ldaps://rduvnode217745.west.isilon.com/DC=AMB3,DC=COM: simple bind failed: rduvnode217745.west.isilon.com:636

Make sure the server’s SSL certificate or CA certificates have been imported into Ambari’s truststore.

 

Review the following KB from Hortonworks on resolving this Ambari Issue:

https://community.hortonworks.com/content/supportkb/148572/failed-to-connect-to-kdc-make-sure-the-servers-ssl.html

 

HDFS rollup patch for 8.1.2 – Patch-240163:

Patch for OneFS 8.1.2.0. This patch addresses issues with the Hadoop Distributed File System (HDFS).

********************************************************************************

This patch can be installed on clusters running the following OneFS version:

8.1.2.0

This patch deprecates the following patch:

Patch-236288

 

This patch conflicts with the following patches:

Patch-237113

Patch-237483

 

If any conflicting or deprecated patches are installed on the cluster, you must

remove them before installing this patch.

********************************************************************************

RESOLVED ISSUES

 

* Bug ID 240177

The Hadoop Distributed File System (HDFS) rename command did not manage file

handles correctly and might have caused data unavailability with

STATUS_TOO_MANY_OPEN_FILES error.

 

* Bug ID 236286

If a OneFS cluster had the Hadoop Distributed File System (HDFS) configured for Kerberos authentication, WebHDFS requests over curl might have failed to follow a redirect request.

 

 

WebHDFS issue with Kerberized 8.1.2 – curl requests fail to follow redirects; Service Checks and Ambari Views will fail

 

Isilon HDFS error: STATUS_TOO_MANY_OPENED_FILES causes jobs to fail

 

OneFS 8.0.0.X and Cloudera Impala 5.12.X: Impala queries fail with `WARNINGS: TableLoadingException: Failed to load metadata for table: <tablename> , CAUSED BY: IllegalStateException: null`

 

Ambari agent fails to send heartbeats to Ambari server if agent is running on a NANON node

NameNode gives out any IP addresses in an access zone, even across pools and subnets; client connection may fail as a result

Other Known Issues

  1. Host registration fails on RHEL 7 hosts with openssl issues

Modify the ambari-agent.ini file:

/etc/ambari-agent/conf/ambari-agent.ini

[security]

force_https_protocol=PROTOCOL_TLSv1_2

 

Restart the ambari-server and all ambari-agents

https://community.hortonworks.com/questions/145/openssl-error-upon-host-registration.html

 

In OneFS 9.0.0, services are now disabled by default

Check the service status using isi services -a:

hop-ps-a-3# isi services -a
Available Services:
apache2              Apache2 Web Server                       Enabled
auth                 Authentication Service                   Enabled
celog                Cluster Event Log                        Enabled
connectemc           ConnectEMC Service                       Disabled
cron                 System cron Daemon                       Enabled
dell_dcism           Dell iDRAC Service Module                Enabled
dell_powertools      Dell PowerTools Agent Daemon             Enabled
dmilog               DMI log monitor                          Enabled
gmond                Ganglia node monitor                     Disabled
hdfs                 HDFS Server                              Disabled

Enable the HDFS service manually (for example, isi services hdfs enable) to allow Hadoop cluster access from HDFS clients.
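If you are scripting this check, the per-service state can be parsed out of the isi services -a listing. A small sketch, assuming the column layout shown above (name, description, then Enabled/Disabled):

```python
import re

# Trimmed sample of `isi services -a` output, as shown above.
SAMPLE = """\
apache2              Apache2 Web Server                       Enabled
gmond                Ganglia node monitor                     Disabled
hdfs                 HDFS Server                              Disabled
"""

def service_states(listing):
    """Map service name -> 'Enabled'/'Disabled' from `isi services -a` output."""
    states = {}
    for line in listing.splitlines():
        m = re.match(r"(\S+)\s{2,}.*\s(Enabled|Disabled)\s*$", line)
        if m:
            states[m.group(1)] = m.group(2)
    return states

print(service_states(SAMPLE)["hdfs"])  # prints: Disabled
```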

Upgrade Hortonworks HDP2.6.5 to HDP3.* on DellEMC Isilon OneFS 8.1.2 and later

Introduction

This blog post walks you through the process of upgrading Hortonworks Data Platform (HDP) 2.6.5 to HDP 3.0.1 or HDP 3.1.0 on DellEMC Isilon OneFS 8.1.2 or OneFS 8.2. It is intended for systems administrators, IT program managers, IT architects, and IT managers who are upgrading Hortonworks Data Platform installed on OneFS 8.1.2.0 or later versions.

There are two official ways to upgrade to HDP 3.* as follows:

    1. Deploy a fresh HDP 3.* cluster and migrate existing data using Data Lifecycle Manager or distributed copy (distcp).
    2. Perform an in-place upgrade of an existing HDP 2.6.x cluster.

This post will demonstrate in-place upgrades. Make sure your cluster is ready and meets all the success criteria as mentioned here and in the official Hortonworks Upgrade documentation.

The installation or upgrade process of the new HDP 3.0.1 and later versions on Isilon OneFS 8.1.2 and later versions has changed as follows:

OneFS is no longer presented as a host to the HDP cluster. Instead, OneFS is managed as a dedicated service in place of HDFS by installing a management pack, the Ambari Management Pack for Isilon OneFS. This is a software component installed on the Ambari Server that defines OneFS as a service in a Hadoop cluster. The management pack allows an Ambari administrator to start, stop, and configure OneFS as an HDFS storage service, providing native NameNode and DataNode capabilities similar to traditional HDFS.

This management pack is OneFS release-independent and can be updated in between releases if needed.

Prerequisites

    1. Hadoop cluster running HDP 2.6.5 and Ambari Server 2.6.2.2.
    2. DellEMC Isilon OneFS updated to 8.1.2 and patch 240163 installed.
    3. Ambari Management Pack for Isilon OneFS, downloaded from here.
    4. HDFS to OneFS Service converter script, downloaded from here.

We will perform the upgrade in two parts: first we will make the changes on the OneFS host, followed by updates on the HDP cluster.

OneFS Host Preparation

The step-by-step process to prepare the OneFS host for the HDP upgrade is as follows:

  1. First, make sure the Isilon OneFS cluster is running 8.1.2 with the latest available patch installed. Check DellEMC support or Current Isilon OneFS Patches.

  2. HDP 3.0.1 ships with Timeline Service 2.0, which relies on the yarn-ats user and a dedicated back-end HBase store for YARN application and job framework metrics. For this, we create two new users, yarn-ats and yarn-ats-hbase, on the OneFS host.

Login to the Isilon OneFS terminal node using root credentials, and run the following commands:

isi auth group create yarn-ats
isi auth users create yarn-ats --primary-group yarn-ats --home-directory=/ifs/home/yarn-ats
isi auth group create yarn-ats-hbase
isi auth users create yarn-ats-hbase --primary-group yarn-ats-hbase --home-directory=/ifs/home/yarn-ats-hbase
  3. Once the new users are created, you need to map yarn-ats-hbase to yarn-ats on the OneFS host. This step is required only if you are going to secure the HDP cluster with Kerberos.
isi zone modify --add-user-mapping-rules="yarn-ats-hbase=>yarn-ats" --zone=ZONE_NAME

This user mapping depends on the mode of the Timeline Service 2.0 installation. Read those instructions carefully and choose the deployment mode that avoids ats-hbase service failure.

You can skip the yarn-ats-hbase to yarn-ats user mapping in the following two cases:

    • Rename the yarn-ats-hbase principals to yarn-ats during Kerberization if Timeline Service 2.0 is deployed in Embedded or System Service mode.
    • There is no need to set the user mapping if Timeline Service 2.0 is configured on external HBase.
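Functionally, the yarn-ats-hbase=>yarn-ats rule just rewrites one incoming identity to another at lookup time. As an illustration only (OneFS applies mapping rules internally; this is not its implementation):

```python
# Illustration only: shows the effect of the "yarn-ats-hbase=>yarn-ats"
# user-mapping rule on incoming identities. OneFS evaluates these rules
# internally; this is not how it implements them.
MAPPING_RULES = {"yarn-ats-hbase": "yarn-ats"}

def map_user(name, rules=MAPPING_RULES):
    """Return the identity a request is treated as after mapping."""
    return rules.get(name, name)

print(map_user("yarn-ats-hbase"))  # prints: yarn-ats
print(map_user("hdfs"))            # prints: hdfs
```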

HDP Cluster preparation and upgrade

Follow the steps as documented. You must meet all of the prerequisites in the Hortonworks upgrade document.

  1. Before starting the process, make sure the HDP 2.6.5 cluster is healthy by running a service check, and address all of the alerts, if any display.

  2. Now stop the HDFS service and all other components running on the OneFS host.

  3. Delete the DataNode, NameNode, and Secondary NameNode (DN/NN/SNN) components using the following curl commands:

export AMBARI_SERVER=<Ambar server IP/FQDN>
export CLUSTER=<HDP2.6.5 cluster name>
export HOST=<OneFS host FQDN>
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/DATANODE
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/NAMENODE
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/SECONDARY_NAMENODE
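The same three deletions can be scripted in Python if curl is not convenient. This is a sketch of the identical REST calls using only the standard library; the admin:admin credentials and host names are placeholders, as above:

```python
import base64
import urllib.request

def component_url(ambari, cluster, host, component):
    """Build the Ambari REST endpoint for a host component."""
    return (f"http://{ambari}:8080/api/v1/clusters/{cluster}"
            f"/hosts/{host}/host_components/{component}")

def delete_component(ambari, cluster, host, component, user="admin", pwd="admin"):
    """Issue the same DELETE call as the curl commands above."""
    req = urllib.request.Request(
        component_url(ambari, cluster, host, component), method="DELETE")
    token = base64.b64encode(f"{user}:{pwd}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("X-Requested-By", "Ambari")  # required by Ambari's CSRF check
    return urllib.request.urlopen(req)

# Usage (against a real Ambari server; host names are placeholders):
#   for comp in ("DATANODE", "NAMENODE", "SECONDARY_NAMENODE"):
#       delete_component("ambari.example.com", "hdp265", "onefs.example.com", comp)
```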


  4. Manually delete the OneFS host from the Ambari Server UI.

Steps five through nine are critical and relate to the Hortonworks HDP upgrade process. Refer to the Hortonworks upgrade documentation or consult Hortonworks support if necessary.

Note: Steps five through nine below reflect the services running on our POC cluster. You will have to back up, migrate, and upgrade the necessary services as described in the Hortonworks documentation before going to step 10.

———-

  5. Upgrade the Ambari Server/agent to 2.7.1 by following the Hortonworks Ambari Server upgrade document.

  6. Register and install HDP 3.0.1 by following the steps in this Hortonworks HDP register and install target version guide.
  7. Upgrade Ambari Metrics by following the steps in this upgrade Ambari Metrics guide.
  8. Note: This next step is critical. Perform a service check on all the services and make sure to address any alerts.
  9. Click upgrade and complete the upgrade process. Address any issues encountered before proceeding, to avoid service failures after finalizing the upgrade.


———–

After the successful upgrade to HDP 3.0.1, continue by installing the Ambari Management Pack for Isilon OneFS on the upgraded Ambari Server.

  10. For the Ambari Server Management Pack installation, log in to the Ambari Server terminal, download the management pack, install it, and then restart the Ambari Server.

a. Download the Ambari Management Pack for Isilon OneFS from here

b. Install the management pack as shown below. Once it is installed, the following displays: Ambari Server ‘install-mpack’ completed successfully.

root@RDUVNODE334518:~ # ambari-server install-mpack --mpack=isilon-onefs-mpack-0.1.0.0.tar.gz --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT/
2018-11-07 06:36:39,137 - Execute[('tar', '-xf', '/var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz', '-C', '/var/lib/ambari-server/data/tmp/')] {'tries': 3, 'sudo': True, 'try_sleep': 1}
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack onefs-ambari-mpack-0.1 to staging location /var/lib/ambari-server/resources/mpacks/onefs-ambari-mpack-0.1
INFO: Processing artifact ONEFS-addon-services of type stack-addon-service-definitions in /var/lib/ambari-server/resources/mpacks/onefs-ambari-mpack-0.1/addon-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Adjusting file permissions and ownerships
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28352
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28353
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28354
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28355
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28356
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28357
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28358
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28359
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28360
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28361
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28362
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28363
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28364
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28365
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28366
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28367
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28368
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28369
INFO: Management pack onefs-ambari-mpack-0.1 successfully installed! Please restart ambari-server.
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Ambari Server 'install-mpack' completed successfully.

c. Restart the Ambari Server.

root@RDUVNODE334518:~ # ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start................
Server started listening on 8080

DB configs consistency check: no errors and warnings were found.

 

  11. Replace the HDFS service with the OneFS service; the management pack installed above contains the OneFS service definition and related settings.

For this step, delete the HDFS service, add the OneFS service installed from the Ambari Management Pack above, and copy the HDFS service configuration into the OneFS service.

a. To delete HDFS, add the OneFS service, and copy the configuration, use the automation tool hdfs_to_onefs_convertor.py.

Login to the Ambari Server terminal and download the script from here.

wget --no-check-certificate https://raw.githubusercontent.com/apache/ambari/trunk/contrib/management-packs/isilon-onefs-mpack/src/main/tools/hdfs_to_onefs_convert.py

b. Now run the script, passing the Ambari server and cluster name as parameters. Once it completes, you will see all the services up and running.

root@RDUVNODE334518:~ # python hdfs_to_onefs_convertor.py -o 'RDUVNODE334518.west.isilon.com' -c 'hdpupgd'
This script will replace the HDFS service to ONEFS
The following prerequisites are required:
* ONEFS management package must be installed
* Ambari must be upgraded to >=v2.7.0
* Stack must be upgraded to HDP-3.0
* Is highly recommended to backup ambari database before you proceed.
Checking Cluster: hdpupgd (http://RDUVNODE334518.west.isilon.com:8080/api/v1/clusters/hdpupgd)
Found stack HDP-3.0
Please, confirm you have made backup of the Ambari db [y/n] (n)? y
Collecting hosts with HDFS_CLIENT
Found hosts [u'rduvnode334518.west.isilon.com']
Stopping all services..
Downloading core-site..
Downloading hdfs-site..
Downloading hadoop-env..
Deleting HDFS..
Adding ONEFS..
Adding ONEFS config..
Adding core-site
Adding hdfs-site
Adding hadoop-env-site
Adding ONEFS_CLIENT to hosts: [u'rduvnode334518.west.isilon.com']
Starting all services..
root@RDUVNODE334518:~ #


  12. At this point, you have successfully upgraded to HDP 3.0.1 and replaced the HDFS service with the OneFS service. From now on, Isilon OneFS acts only as the HDFS storage layer, so you can remove the Ambari Server and ODP Version settings from Isilon's HDFS settings as follows:
kbhusan-y93o5ew-1# isi hdfs settings modify --zone=System --odp-version=
kbhusan-y93o5ew-1# isi hdfs settings modify --zone=System --ambari-server=
kbhusan-y93o5ew-1# isi hdfs settings view
Service: Yes
Default Block Size: 128M
Default Checksum Type: none
Authentication Mode: all
Root Directory: /ifs/hdfs-root
WebHDFS Enabled: Yes
Ambari Server: -
Ambari Namenode: kb-hdp-1.west.isilon.com
Odp Version: -
Data Transfer Cipher: none
Ambari Metrics Collector: kb-hdp-1.west.isilon.com
kbhusan-y93o5ew-1#


13. Log in to the Ambari Web UI, check the OneFS service and its configuration, and perform the service check.

A screen similar to the following displays:

Review the results:

Summary

In this blog, we demonstrated how to upgrade the Apache Ambari server/agents to 2.7.1 and Hortonworks HDP 2.6.5 to HDP 3.0.1 on DellEMC Isilon OneFS 8.1.2 with the latest available patch installed. The same steps apply to upgrades to later versions of HDP 3.0.1.

We installed the Ambari Management Pack for DellEMC Isilon OneFS, which replaced the HDFS service with the OneFS service. This enables the Ambari administrator to start, stop, and configure OneFS as an HDFS storage service, and it also provides DellEMC Isilon OneFS with native NameNode and DataNode capabilities like traditional HDFS.

 

 

Setting Up Share Host ACLs on Isilon OneFS

Setting Up Share Host ACLs

How do you allow or deny hosts for SMB shares?

In Isilon's OneFS, administrators can set Host ACLs on SMB shares. Setting up these ACLs adds an extra layer of security for the files in a specific share. For example, administrators can deny all traffic except traffic from certain servers.
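Conceptually, a Host ACL is an ordered list of allow/deny entries where the first entry that matches the connecting client wins. The toy Python function below illustrates only those first-match semantics; it is not OneFS code, and the `host_allowed` name and entry format are assumptions for illustration:

```python
def host_allowed(acl, host):
    """First-match evaluation of a host ACL.

    acl:  ordered list of entries like 'allow: 10.0.0.5' or 'deny: all'
    host: the connecting client's address
    """
    for entry in acl:
        action, _, target = entry.partition(":")
        action, target = action.strip().lower(), target.strip().lower()
        if target in ("all", host.lower()):
            # First matching entry decides the outcome.
            return action == "allow"
    # No entry matched: fall through to the share's normal permissions.
    return True

acl = ["allow: 192.170.170.1", "deny: all"]
print(host_allowed(acl, "192.170.170.1"))  # True
print(host_allowed(acl, "10.0.0.99"))      # False
```

This is why the demo below adds an allow entry for one host followed by "deny all": the specific host matches first and gets in, while everyone else falls through to the deny entry.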

OneFS Setting Up Share Host ACLs Commands

Below are the commands used in the Setting Up Share Host ACLs demo. nasa refers to the SMB share used to deny all traffic except traffic from a specific host or hosts.

List all the shares in a specific zone

isi smb shares list

View details of a particular share in an access zone

isi smb shares view nasa

Modify the Host ACL on a particular share in an access zone

isi smb shares modify nasa --add-host-acl

Clear or revert the Host ACL on a specific share

isi smb shares modify nasa --clear-host-acl
or
isi smb shares modify nasa --revert-host-acl
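Putting the commands above together, a full allow-then-revert session like the one in the demo might look like the following sketch. The allow/deny value strings are illustrative; the exact --add-host-acl syntax can vary by OneFS version, so check `isi smb shares modify --help` on your cluster:

```shell
# List the shares in the target access zone (add --zone=<zone> if needed)
isi smb shares list

# Inspect the share, including its current Host ACL
isi smb shares view nasa

# Allow one host and deny everything else (first matching entry wins)
isi smb shares modify nasa --add-host-acl="allow: 192.170.170.1" \
    --add-host-acl="deny: all"

# Confirm the change took effect
isi smb shares view nasa

# Roll back: clear the Host ACL entirely, or revert it to the default
isi smb shares modify nasa --clear-host-acl
isi smb shares modify nasa --revert-host-acl
```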

 

Video – Setting Up Host ACLs on Isilon File Share

Transcript

 

Hi, folks. Thomas Henson here with thomashenson.com. And today is another episode of Isilon Quick Tips. So, what we want to cover on today's episode is I want to go in through the CLI, and look at some of the commands that we can do on isi shares. And specifically, I want to look at some of the advanced features. So, something around the ACLs where we can deny certain hosts or allow certain hosts, too. So, follow along with me right after this. [Music]. So, in today's episode we want to look at SMB Shares, but specifically from the Command Line. What we're really going to focus on as I open this Share here is some of these advanced settings. So, you can see that we have some of these advanced settings, like the continuous availability timeout. And it looks like we can change some of these. But when we change them, we're just going to type in how we want to change those here. So, if you wanted to, for example in the Host ACL, be able to deny or allow certain hosts, this is where we can do that. But let's find out how we can do this from the Command Line. Because there are a couple of different options, and a couple of ways we can do it, and specifically we want to learn how to do it from the Command Line. So, here we are. I'm logged back in to my Command Line. So, you can see I'm on Isilon-2. So, the first command I want to do is I want to list out all those SMB Shares that we had. So, we had three of those. So, the command that we're going to use is isi smb shares. And I'm just going to type return, so we can see what those actions are. So, you can see that we can do a list, which is the first thing we want to do. But you can also create shares, you can delete shares, and we can view specific properties on each one of those shares. So, going back in. Let's run a list on our shares. And you can see… All right. So, we have all those shares that we were just looking at from our [INAUDIBLE 00:02:00]. 
One thing to note here is if you are using this shares list command and you don't see your zones, make sure that you type in the zone here. So, we will type in a specific zone. So, if you didn't see the shares, make sure that you're specifying exactly what zone it is. I only have one zone in my lab environment here on the system, so I can see that all my shares are there. So, now that I know my shares are there, let's go back. I want to look at the nasa share that we have. So, let's use the view command on nasa. And you can see here that it's going to give me my permissions, but then also those advanced features that we were talking about, we can see those here. So, for example we have the Access Based Enumeration. So, if you're looking to be able to hide files or folders from users that don't have those permissions, you can see if that's set here. Then also the File Mask. So, you can see that the default directory and File Mask is 700. So, [INAUDIBLE 00:02:54] the File Mask, if you're not familiar, that's the default permissions that are set whenever you have a file or directory that's created in this share. So, you can see that in mine, the default setting is 700. Then specifically, the one that I really want to go over is the Host ACL. So, you can see the Host ACL. I don't have anything set here. And this is the property we can change that will allow or deny certain hosts to the specific share. So, one of the reasons this came up is we were trying to secure an application from a share, and we wanted to be able to say, "Hey, it's only going to accept traffic from one or two specific servers, and then we're going to deny all the others." So, what we're going to do is I want to walk through how to do that. So specifically, we're still going to use our isi smb shares. But now we're going to use the modify. So, you see the isi smb shares modify command. You can see that when we do that… I'm just going to show you some of the commands that we have here. 
But you can see we have a lot of different options we can do. But the first thing is, remember, we’re going to type in that share.

So, here I want to pass in my nasa string. I don't have to pass in a zone, because I only have one zone. But if you have different zones, then you're going to want to pass that zone in. The option that we're specifically looking for is this host-acl. So, we have some options here with the host-acl. We can clear the host, we can add a host, and we can remove a host. So, what we want to do is we want to add a host ACL that's going to allow hosts coming from… We're just going to say 192.170.170.001. Then we're going to deny all other hosts. So, we're going to clear this out, so we can have that at the top of the screen. So, you can see we have it here. So, that's isi smb shares modify. Then you're going to put in here your share name. So, mine is nasa. And we're going to do --add-host-acl=, and the first thing that we're going to do is we're going to allow. So, we're going to allow traffic from 192.170.170.001. Then we're going to use a comma to separate that out, and then we're going to say that we're going to deny all. So, specifically we could do this differently, and say that we want to allow traffic from all and then deny from specific ones. But for this use case, and this is probably the most common one especially when you're trying to lock down a certain share, you're going to want to use this command. So, we type in the command and get the command prompt back again. And now let's do that view. So, let's view our nasa, and see if our changes are in there. So, you can see in our Host ACL, we have it. Then if we wanted to go back to our share from the [INAUDIBLE 00:05:43] and just see if those changes took. You can see in our advanced settings here, now it's showing us our allow and deny all. Now, [INAUDIBLE 00:05:52] to say that I want to keep this going on my [INAUDIBLE 00:05:55] or if I want to revert back. So, there are a couple of different options. If you remember, we had the clear-host-acl or the revert back. 
So, now I can just use this isi smb shares modify on my nasa directory. Once again, just as a reminder, use your own zone name if you have a specific zone. Then now I can revert my Host ACL. Now that we have that, I'm going to clear this out, and check. You can see our Host ACL is reverted back. We don't have one set there. So, now we're allowing traffic as long as you have the permissions to get to this file, and we don't have one set. Well, that's all for Isilon Quick Tips for today. Make sure to subscribe so that you never miss an episode of Isilon Quick Tips, or some of the other amazing content that I have on my YouTube Channel here. And I will see you next time. [Music]