Leveraging AWS Marketplace Storage Solutions for Microsoft SharePoint
AWS Whitepaper


Administrative Setup

After you provision your SoftNAS Cloud NAS instances, you access them using the Amazon EC2 console. Because each SoftNAS EC2 instance is deployed into a private subnet within the Amazon VPC, access is restricted to a bastion host or remote desktop gateway server that is permitted by the SoftNAS Cloud NAS security group. For more information, see Controlling Network Access to EC2 Instances Using a Bastion Server on the AWS Security Blog.

The default user name is softnas, and the default password is the instance ID, which you can find in the Amazon EC2 console. After you log in, a Getting Started Checklist appears that you can use to configure your SoftNAS storage. By following the checklist, you can set up and present your storage targets quickly.

The Amazon EBS storage volumes that you added during deployment are available to each SoftNAS Cloud NAS instance as devices that need partitions. Using the SoftNAS administration interface, partition each appropriate device.

Optionally, you can partition devices using the SoftNAS command line interface (CLI).

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd parted_command partition_all -t
{
    "result": {
        "msg": "All partitions have been created successfully.",
        "records": {
            "msg": "All partitions have been created successfully."
        },
        "success": true,
        "total": 1
    },
    "session_id": "8756",
    "success": true
}
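Every softnas-cmd call returns a JSON document with a top-level success flag, so scripted deployments can verify each step before moving on. The sketch below is an illustrative helper, not part of the SoftNAS tooling; it parses a captured response (the field names match the sample output above) with python3:

```shell
# Parse a captured softnas-cmd JSON response and fail loudly if the
# top-level "success" flag is not true. The response is inlined here
# for illustration; in practice you would capture it with, e.g.,
#   response=$(/usr/local/bin/softnas-cmd parted_command partition_all -t)
response='{"result": {"msg": "All partitions have been created successfully.", "success": true, "total": 1}, "session_id": "8756", "success": true}'

echo "$response" | python3 -c '
import json, sys
doc = json.load(sys.stdin)
if not doc.get("success"):
    sys.exit("softnas-cmd reported failure: %s" % doc)
print(doc["result"]["msg"])
'
```

The same pattern applies to the pool, volume, and replication commands shown later in this section, since they all share the same response envelope.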

After partitioning is complete, the devices are available and you can assign them to a storage pool.

Create storage pools that meet your storage capacity and performance requirements. For this solution, you create a separate storage pool for each Amazon EBS storage device. When you configure a storage pool, you can set up an additional layer of encryption that allows SoftNAS Cloud NAS to encrypt data. You can use an encryption password or AWS Key Management Service (AWS KMS) to implement encryption key management. For more information, see the AWS KMS website.

Optionally, you can create storage pools using the SoftNAS CLI.

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd createpool /dev/xvdb quorum 0 on LUKSpassword123 standard off on -t
{
    "result": {
        "msg": "Create pool 'quorum' was successful.",
        "records": {
            "Available": 7.0996566768736002,
            "Used": 0.00034332275390625,
            "compression": "on",
            "dedup": "off",
            "dedupfactor": "1.00x",
            "free_numeric": 7623198310,
            "free_space": "7.1G",
            "no_disks": 5,
            "optimizations": "Compress",
            "pct_used": "0%",
            "pool_name": "quorum",
            "pool_type": "Standard",
            "provisioning": "Thin",
            "request_arguments": {
                "cbPoolCaseinsensitive": "off",
                "cbPoolTrim": "on",
                "forcedCreation": "on",
                "opcode": "createpool",
                "pool_name": "quorum",
                "raid_abbr": "0",
                "selectedItems": [
                    {
                        "disk_name": "/dev/xvdb"
                    }
                ],
                "sync": "standard",
                "useLuksEncryption": "on"
            },
            "status": "ONLINE",
            "time_updated": "Oct 16, 2017 15:43:01",
            "total_numeric": 7623566950,
            "total_space": "7.1G",
            "used_numeric": 368640,
            "used_space": "360.0K"
        },
        "success": true,
        "total": 21
    },
    "session_id": "8756",
    "success": true
}

After you create the storage pools, you must allocate the capacity in each storage pool to SoftNAS volumes so that they can be presented for remote connectivity as iSCSI LUNs and CIFS shares.

Optionally, you can create volumes with the SoftNAS CLI.

iSCSI volume example:

ec2-user@ip-10-0-133-229:~$ /usr/local/bin/softnas-cmd createvolume vol_name=quorum pool=quorum vol_type=blockdevice provisioning=thin exportNFS=off shareCIFS=off ShareISCSI=on dedup=on enable_snapshot=off schedule_name=Default hourlysnaps=0 dailysnaps=0 weeklysnaps=0 sync=always --pretty_print
{
    "result": {
        "msg": "Volume 'LUN_quorum' created.",
        "records": {
            "Available": 7.0999999999999996,
            "Snapshots": 0,
            "Used": 5.340576171875e-05,
            "cbSnapshotEnabled": "1",
            "compression": "off",
            "compressratio": "1.00x",
            "dailysnaps": 0,
            "dedup": "on",
            "free_numeric": 7623566950.3999996,
            "free_space": "7.1G",
            "hourlysnaps": 0,
            "logicalused": "0.0G",
            "minimum_threshold": "0",
            "nfs_export": null,
            "optimizations": "Dedup",
            "pct_used": "0%",
            "pool": "quorum",
            "provisioning": "Thin",
            "replication": false,
            "request_arguments": {
                "cbSnapshotEnabled": "on",
                "dailysnaps": "0",
                "dedup": "on",
                "exportNFS": "off",
                "hourlysnaps": "0",
                "opcode": "createvolume",
                "pool": "quorum",
                "provisioning": "thin",
                "schedule_name": "Default",
                "shareCIFS": "off",
                "sync": "always",
                "vol_name": "quorum",
                "vol_type": "blockdevice",
                "weeklysnaps": "0"
            },
            "reserve_space": 7.1000534057616997,
            "reserve_units": "G",
            "schedule_name": "Default",
            "status": "ONLINE",
            "sync": "always",
            "tier": false,
            "tier_disabled": null,
            "tier_name": null,
            "tier_order": null,
            "tier_uuid": null,
            "time_updated": "Oct 16, 2017 15:52:59",
            "total_numeric": 7623624294.3999996,
            "total_space": "7.1G",
            "used_numeric": 5.340576171875e-05,
            "used_space": "0.0G",
            "usedbydataset": "56K",
            "usedbysnapshots": "0B",
            "vol_name": "LUN_quorum",
            "vol_path": "-",
            "vol_type": "blockdevice",
            "weeklysnaps": 0
        },
        "success": true,
        "total": 40
    },
    "session_id": "8756",
    "success": true
}

When you create the iSCSI LUNs, the associated iSCSI targets are also created. The initial iSCSI target is set up with open connectivity. However, you can update the configuration for each iSCSI target with the IQN for each iSCSI initiator as well as a user name and password that can be used for CHAP authentication between the iSCSI initiators and targets.
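When you restrict a target to specific initiators, each IQN must follow the iqn.yyyy-mm.reversed-domain[:identifier] naming format defined in RFC 3720. If you script the collection of initiator IQNs, a quick sanity check catches typos before they reach the target configuration. The helper below is illustrative only and is not part of the SoftNAS tooling:

```shell
# Sanity-check that a string looks like a well-formed iSCSI qualified
# name (iqn.yyyy-mm.reversed-domain[:identifier], per RFC 3720) before
# entering it into a target configuration. Illustrative helper only.
is_valid_iqn() {
    printf '%s' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[^ ]+)?$'
}

# The Windows iSCSI initiator defaults to the iqn.1991-05.com.microsoft
# prefix followed by the machine's fully qualified name.
is_valid_iqn "iqn.1991-05.com.microsoft:sp-wfe-01.example.com" \
    && echo "IQN looks valid"
```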

You can’t create the iSCSI targets or add IQN and CHAP details using the SoftNAS CLI.

Active Directory Membership

Before you can join the SoftNAS Cloud NAS instances to the Active Directory domain, you need to update the hostname of each instance (that is, the hostname used by the SoftNAS management interface, not the hostname of the EC2 instance). The default hostname is based on the IP address of the EC2 instance. Depending on the IP address, the hostname might contain too many characters to be a valid NETBIOS name, which is required for you to add it to Active Directory. Update the hostname as appropriate in the SoftNAS web management console to a NETBIOS-compliant name. For more information, see the Naming conventions in Active Directory for computers, domains, sites, and OUs article on the Microsoft website.
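NetBIOS computer names are limited to 15 characters from a restricted character set, which is why the default IP-derived hostname often fails. A minimal sketch for checking a proposed name before you apply it (an illustrative helper using a conservative subset of the characters Windows allows, not part of the SoftNAS tooling):

```shell
# Check that a proposed hostname is a usable NetBIOS computer name:
# at most 15 characters, containing only letters, digits, and hyphens
# (a conservative subset of the characters Windows permits).
# Illustrative helper only.
is_netbios_ok() {
    name="$1"
    [ "${#name}" -le 15 ] && printf '%s' "$name" | grep -Eq '^[A-Za-z0-9-]+$'
}

if is_netbios_ok "SOFTNAS-NODE-01"; then
    echo "hostname is NetBIOS compliant"
else
    echo "hostname must be shortened or cleaned up"
fi
```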

You attach the SoftNAS instance to Active Directory by navigating to the volume configuration page and selecting Active Directory from the top-level menu. You are then prompted for the Active Directory domain name and for a domain user name and password with sufficient permissions to join the instance to the domain. If the NETBIOS hostname is too long, a prompt appears and explains what actions you need to take to correct the error before proceeding.

Optionally, you can use the SoftNAS CLI to attach the SoftNAS Cloud NAS instance to Active Directory.

ec2-user@ip-10-0-133-229:~$ # kinit -p admin-user@EXAMPLE.COM
ec2-user@ip-10-0-133-229:~$ # cd /var/www/softnas/scripts
ec2-user@ip-10-0-133-229:~$ # ./ -c -e EXAMPLE -f Admin-user -g your-password

SoftNAS Snap Replication

At this point, you’ve finished configuring the primary SoftNAS Cloud NAS instance. Now, you need to configure the secondary failover instance so that you can configure SNAP Replicate and SNAP HA. For the first step, follow the instructions in the previous section to set up the secondary node, but stop before you create any volumes because these are created during the replication process.


Configure the secondary instance only through disk partitioning and storage pool creation. The replication setup process creates all appropriate volumes, CIFS shares, and iSCSI targets as a mirror of the source instance.

After you have configured both the primary and secondary SoftNAS Cloud NAS instances, connect to the SoftNAS administration console of the primary instance and navigate to the SnapReplicate / Snap HA menu. First, you set up replication between the primary and secondary SoftNAS Cloud NAS instances. You need to do this from the primary instance, using the IP address, administrative user name, and password of the secondary instance as input. After you complete the setup wizard, SnapReplicate begins replicating each iSCSI LUN from the primary instance to the secondary. After the replication process finishes, the SnapReplicate replication control panel indicates that the Current State for each LUN is SNAPREPLICATED-COMPLETE, and the secondary instance now shows the replicated LUNs in the Volume and LUNs dashboard.

Optionally, you can set up SoftNAS SnapReplicate using the SoftNAS CLI.

ec2-user@ip-10-0-133-229:~$ # softnas-cmd snaprepcommand initsnapreplicate remotenode="REMOTENODEIP" userid=softnas password="PASSWORD" type=target -t


After SnapReplicate replication has been established, you can set up Snap HA to enable high availability and failover capability for the SoftNAS Cloud NAS. In the SnapReplicate / Snap HA control panel choose Add Snap HA to begin the setup process.

During the setup process, select the Virtual-IP mode. You need to use a virtual IP address outside of the VPC CIDR block to set up Snap HA communication on the secondary network interface. When prompted, enter an IP address that is not addressable within your VPC CIDR range. For instance, if the VPC CIDR block starts with 10.195, any address that doesn’t start with 10.195 can work as the virtual IP address required to set up Snap HA. It’s important to ensure that the IP address you choose doesn’t belong to another VPC or CIDR range that’s routed to from this VPC.
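A quick way to confirm that a candidate virtual IP falls outside the VPC CIDR block is to test it programmatically. The sketch below is an illustrative helper, not part of the SoftNAS tooling, and 10.195.0.0/16 is an assumed example CIDR; substitute your VPC's actual CIDR block. It uses the python3 ipaddress module:

```shell
# Check whether a candidate virtual IP lies outside a given VPC CIDR
# block. The CIDR 10.195.0.0/16 is an assumed example value.
ip_outside_cidr() {
    python3 -c '
import ipaddress, sys
ip, cidr = sys.argv[1], sys.argv[2]
sys.exit(0 if ipaddress.ip_address(ip) not in ipaddress.ip_network(cidr) else 1)
' "$1" "$2"
}

if ip_outside_cidr "172.33.0.10" "10.195.0.0/16"; then
    echo "OK: candidate virtual IP is outside the VPC CIDR block"
else
    echo "Rejected: candidate virtual IP falls inside the VPC CIDR block"
fi
```

Remember that this only checks one CIDR block; per the guidance above, you should repeat the check against any other VPC or on-premises CIDR range that is routed to from this VPC.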

After you provide a virtual IP address, you need to enter an AWS access key ID and secret access key. These options are greyed out if the SoftNAS_HA_IAM IAM role was attached to each instance. Choose Next to confirm that the appropriate permissions are associated with the attached IAM role. If the permissions aren’t correct, an error appears and the setup process fails. If the permissions are correct, choose Start Install to begin the Snap HA installation and configuration.

After preparation and configuration are complete, choose Next. The Snap HA process completes the installation, and then places the SoftNAS Cloud NAS instances in high availability mode. After the Snap HA setup is complete, choose Finish.

Optionally, you can use the SoftNAS CLI to set up SoftNAS SnapHA.

ec2-user@ip-10-0-133-229:~$ # softnas-cmd hacommand add YOUR_AWS_ACCESS_KEY YOUR_AWS_SECRET_KEY VIP --pretty_print