SAP HANA on AWS
SAP HANA Guides

Operating System and Storage Configuration

Use the instructions for your operating system:

Note

For scale-out workloads, repeat these steps for every node in the cluster.

Configure Operating System – SLES for SAP 12.x

Important

In the following steps, you need to update several configuration files. We recommend taking a backup of the files before you modify them. This will help you to revert to the previous configuration if needed.

  1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the key pair that you used to launch the instance.

    Note

    Depending on your network and security settings, you might have to first connect by using SSH to a bastion host before accessing your SAP HANA instance, or you might have to add IP addresses or ports to the security group to allow SSH access.

  2. Switch to root user.

    Alternatively, you can use sudo to execute the following commands as ec2-user.

  3. Set a hostname and fully qualified domain name (FQDN) for your instance by executing the hostnamectl command and updating the /etc/hostname file.

    # hostnamectl set-hostname --static your_hostname
    # echo your_hostname.example.com > /etc/hostname

    Open a new session to verify the hostname change.

  4. Ensure that the DHCLIENT_SET_HOSTNAME parameter is set to no to prevent DHCP from changing the hostname during restart.

    # grep DHCLIENT_SET_HOSTNAME /etc/sysconfig/network/dhcp
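
    If the parameter is not already set to no, one way to update it is with sed, as sketched below. This is an example only; back up the file first and verify the result, since the key may appear with different quoting or may be commented out in your image.

    ```shell
    # Back up the SLES DHCP configuration so the change can be reverted.
    cp /etc/sysconfig/network/dhcp /etc/sysconfig/network/dhcp.backup

    # Force DHCLIENT_SET_HOSTNAME to "no" so DHCP does not overwrite the hostname.
    sed -i 's/^DHCLIENT_SET_HOSTNAME=.*/DHCLIENT_SET_HOSTNAME="no"/' /etc/sysconfig/network/dhcp

    # Confirm the new value.
    grep DHCLIENT_SET_HOSTNAME /etc/sysconfig/network/dhcp
    ```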
  5. Set the preserve_hostname parameter to true to ensure your hostname is preserved during restart.

    # sed -i '/preserve_hostname/ c\preserve_hostname: true' /etc/cloud/cloud.cfg
  6. Add an entry to the /etc/hosts file with the new hostname and IP address.

    ip_address hostname.example.com hostname
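
    For example, assuming a hypothetical private IP address of 10.0.1.25 and a hostname of hanahost (substitute your own values):

    ```shell
    # Append the new hostname entry to /etc/hosts.
    # The IP address and hostname below are examples only.
    echo "10.0.1.25 hanahost.example.com hanahost" >> /etc/hosts
    ```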
  7. If you are using a BYOS SLES for SAP image, register your instance with SUSE. Ensure that your subscription is for SLES for SAP.

    # SUSEConnect -r Your_Registration_Code
    # SUSEConnect -s
  8. Ensure that the following packages are installed:

    systemd, tuned, saptune, libgcc_s1, libstdc++6, cpupower, autofs, nvme-cli

    You can use the rpm command to check whether a package is installed.

    # rpm -qi package_name

    You can then use the zypper install command to install the missing packages.

    # zypper install package_name
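
    As a convenience, you can combine the two checks above in a shell loop that installs only what is missing. This is a sketch of the same rpm and zypper calls, not an official installation script:

    ```shell
    # Check each required package and install any that are missing.
    for pkg in systemd tuned saptune libgcc_s1 libstdc++6 cpupower autofs nvme-cli; do
      rpm -q "$pkg" >/dev/null 2>&1 || zypper --non-interactive install "$pkg"
    done
    ```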

    Note

    If you are importing your own SLES image, additional packages might be required to ensure that your instance is optimally set up. For the latest information, refer to the Package List section in the SLES for SAP Application Configuration Guide for SAP HANA, which is attached to SAP OSS Note 1944799.

  9. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note 2205917. If needed, update your system to meet the minimum kernel version. You can check the version of the kernel and other packages by using the following command:

    # rpm -qi kernel*
  10. Start the saptune daemon and use the following command to set it to start automatically when the system reboots.

    # saptune daemon start
  11. Check whether the force_latency parameter is set in the saptune configuration file.

    # grep force_latency /usr/lib/tuned/saptune/tuned.conf

    If the parameter is set, skip the next step and proceed with activating the HANA profile with saptune.

  12. Update the saptune HANA profile according to SAP OSS Note 2205917, and then run the following commands to create a custom profile for SAP HANA. This step is not required if the force_latency parameter is already set.

    # mkdir /etc/tuned/saptune
    # cp /usr/lib/tuned/saptune/tuned.conf /etc/tuned/saptune/tuned.conf
    # sed -i "/\[cpu\]/ a force_latency=70" /etc/tuned/saptune/tuned.conf
    # sed -i "s/script.sh/\/usr\/lib\/tuned\/saptune\/script.sh/" /etc/tuned/saptune/tuned.conf
  13. Switch the tuned profile to HANA and verify that all settings are configured appropriately.

    # saptune solution apply HANA
    # saptune solution verify HANA
  14. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool based on your requirements; for example:

    Note

    Remove any existing invalid NTP server pools from /etc/ntp.conf before adding the following.

    # echo "server 0.pool.ntp.org" >> /etc/ntp.conf # echo "server 1.pool.ntp.org" >> /etc/ntp.conf # echo "server 2.pool.ntp.org" >> /etc/ntp.conf # echo "server 3.pool.ntp.org" >> /etc/ntp.conf # systemctl enable ntpd.service # systemctl start ntpd.service

    Tip

    Instead of connecting to the global NTP server pool, you can connect to your internal NTP server if needed. Or you can use Amazon Time Sync Service to keep your system time in sync.
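
    If you choose the Amazon Time Sync Service, it is reachable from any EC2 instance at the link-local address 169.254.169.123. A sketch using the ntpd setup from the previous step:

    ```shell
    # Point ntpd at the Amazon Time Sync Service link-local endpoint.
    echo "server 169.254.169.123 prefer iburst" >> /etc/ntp.conf

    # Restart ntpd so the new server takes effect.
    systemctl restart ntpd.service
    ```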

  15. Set the clocksource to tsc by updating the current_clocksource file and the GRUB2 boot loader.

    # echo "tsc" > /sys/devices/system/clocksource/*/current_clocksource # cp /etc/default/grub /etc/default/grub.backup # sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub # grub2-mkconfig -o /boot/grub2/grub.cfg
  16. Reboot your system for the changes to take effect.

  17. Continue with storage configuration for SAP HANA.

Configure Operating System – RHEL for SAP 7.x

Important

In the following steps, you need to update several configuration files. We recommend taking a backup of the files before you modify them. This will help you to revert to the previous configuration if needed.

  1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the key pair that you used to launch the instance.

    Note

    Depending on your network and security settings, you might have to first connect by using SSH to a bastion host before accessing your SAP HANA instance, or you might have to add IP addresses or ports to the security group to allow SSH access.

  2. Switch to root user.

    Alternatively, you can use sudo to execute the following commands as ec2-user.

  3. Set a hostname for your instance by executing the hostnamectl command and update the /etc/cloud/cloud.cfg file to ensure that your hostname is preserved during system reboots.

    # hostnamectl set-hostname --static your_hostname
    # echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg

    Open a new session to verify the hostname change.

  4. Add an entry to the /etc/hosts file with the new hostname and IP address.

    ip_address hostname.example.com hostname

    Ensure that the following packages are installed:

    xfsprogs, gcc, compat-sap-c++-5, compat-sap-c++-6, tuned-profiles-sap-hana, glibc.x86_64, autofs, and nvme-cli

    Note that your instance should have access to the SAP HANA channel to install the libraries required for SAP HANA installations.

    You can use the rpm command to check whether a package is installed:

    # rpm -qi package_name

    You can then install any missing packages by using the yum -y install command.

    # yum -y install package_name

    Note

    Depending on your base RHEL image, additional packages might be required to ensure that your instance is optimally set up. (You can skip this step if you are using the RHEL for SAP with HA & US image.) For the latest information, refer to the RHEL configuration guide that is attached to SAP OSS Note 2009879. Review the packages in the Install Additional Required Packages section and the Appendix–Required Packages for SAP HANA on RHEL 7 section.

  5. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note 2292690. If needed, update your system to meet the minimum kernel version. You can check the version of the kernel and other packages using the following command.

    # rpm -qi kernel*
  6. Start the tuned daemon and use the following commands to set it to start automatically when the system reboots.

    # systemctl start tuned
    # systemctl enable tuned
  7. Configure the tuned HANA profile to optimize your instance for SAP HANA workloads.

    Check whether the force_latency parameter is already set in the /usr/lib/tuned/sap-hana/tuned.conf file. If the parameter is set, execute the following commands to apply and activate the sap-hana profile.

    # tuned-adm profile sap-hana
    # tuned-adm active

    If the force_latency parameter is not set, execute the following steps to modify and activate the sap-hana profile.

    # mkdir /etc/tuned/sap-hana
    # cp /usr/lib/tuned/sap-hana/tuned.conf /etc/tuned/sap-hana/tuned.conf
    # sed -i '/force_latency/ c\force_latency=70' /etc/tuned/sap-hana/tuned.conf
    # tuned-adm profile sap-hana
    # tuned-adm active
  8. Disable Security-Enhanced Linux (SELinux) by running the following command. (Skip this step if you are using the RHEL for SAP with HA & US image.)

    # sed -i 's/\(SELINUX=enforcing\|SELINUX=permissive\)/SELINUX=disabled/g' /etc/selinux/config
  9. Disable Transparent Huge Pages (THP) at boot time by appending transparent_hugepage=never to the line that starts with GRUB_CMDLINE_LINUX in the /etc/default/grub file, and then re-configure GRUB by executing the following commands. (Skip this step if you are using the RHEL for SAP with HA & US image.)

    # sed -i '/GRUB_CMDLINE_LINUX/ s|"| transparent_hugepage=never"|2' /etc/default/grub
    # cat /etc/default/grub
    # grub2-mkconfig -o /boot/grub2/grub.cfg
  10. Add symbolic links by executing the following commands. (Skip this step if you are using the RHEL for SAP with HA & US image.)

    # ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.1.0.1
    # ln -s /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.1.0.1
  11. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool based on your requirements. The following is just an example.

    Note

    Remove any existing invalid NTP server pools from /etc/ntp.conf before adding the following.

    # echo "server 0.pool.ntp.org" >> /etc/ntp.conf # echo "server 1.pool.ntp.org" >> /etc/ntp.conf # echo "server 2.pool.ntp.org" >> /etc/ntp.conf # echo "server 3.pool.ntp.org" >> /etc/ntp.conf # systemctl enable ntpd.service # systemctl start ntpd.service # systemctl restart systemd-timedated.service

    Tip

    Instead of connecting to the global NTP server pool, you can connect to your internal NTP server if needed. Alternatively, you can also use Amazon Time Sync Service to keep your system time in sync.

  12. Set the clocksource to tsc by updating the current_clocksource file and the GRUB2 boot loader.

    # echo "tsc" > /sys/devices/system/clocksource/*/current_clocksource # cp /etc/default/grub /etc/default/grub.backup # sed -i '/GRUB_CMDLINE_LINUX/ s|"| clocksource=tsc"|2' /etc/default/grub # grub2-mkconfig -o /boot/grub2/grub.cfg
  13. Reboot your system for the changes to take effect.

  14. After the reboot, log in as root and execute the tuned-adm command to verify that all SAP recommended settings are in place.

    # tuned-adm verify

    The tuned-adm verify command creates a log file under /var/log/tuned/tuned.log. Review this log file and ensure that all checks have passed.
  15. Continue with storage configuration.

Configure Storage for SAP HANA

  1. Amazon EBS volumes should have been created and attached when you launched the Amazon EC2 instance. Confirm that all the required volumes are attached to the instance by running the lsblk command, which returns a list of the storage devices that are attached to the instance.

    Note

    On Nitro-based instances, Amazon EBS volumes are presented as NVMe block devices. You need to perform additional mapping when configuring these volumes.

    Depending on the instance and storage volume types, your block device mapping will look similar to the following examples.

    Example from a non-Nitro instance

    # lsblk
    NAME    MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
    xvda    202:0     0   50G  0 disk
    ├─xvda1 202:1     0    1M  0 part
    └─xvda2 202:2     0   50G  0 part /
    xvdb    202:16    0  800G  0 disk
    xvdc    202:32    0  800G  0 disk
    xvdd    202:48    0  800G  0 disk
    xvde    202:64    0    1T  0 disk
    xvdf    202:80    0    4T  0 disk
    xvdh    202:112   0  525G  0 disk
    xvdr    202:4352  0   50G  0 disk
    #

    Example from a Nitro instance

    # lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    nvme0n1     259:0    0   50G  0 disk
    └─nvme0n1p1 259:1    0   50G  0 part /
    nvme1n1     259:2    0    4T  0 disk
    nvme2n1     259:3    0  800G  0 disk
    nvme3n1     259:4    0  800G  0 disk
    nvme4n1     259:5    0  800G  0 disk
    nvme5n1     259:6    0  525G  0 disk
    nvme6n1     259:7    0    1T  0 disk
    nvme7n1     259:8    0   50G  0 disk
    #
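
    To map an NVMe device name back to the EBS volume behind it, you can query the device with nvme-cli (one of the packages installed earlier). The exact layout of the output varies, so treat this as a sketch rather than a guaranteed format:

    ```shell
    # Inspect the NVMe controller data for a device. On EBS-backed NVMe
    # devices, the serial number (sn) field carries the EBS volume ID, and
    # the vendor-specific section shows the device name used at attach time
    # (for example, /dev/sdf). /dev/nvme1n1 below is an example device.
    nvme id-ctrl -v /dev/nvme1n1
    ```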
  2. Initialize the SAP HANA data, log, and backup volumes for use with Linux Logical Volume Manager (LVM).

    Note

    Ensure you are choosing the devices that are associated with the SAP HANA data, log, and backup volumes. The device names might be different in your environment.

    Example from a non-Nitro instance

    # pvcreate /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvdf /dev/xvdh
      Physical volume "/dev/xvdb" successfully created.
      Physical volume "/dev/xvdc" successfully created.
      Physical volume "/dev/xvdd" successfully created.
      Physical volume "/dev/xvdf" successfully created.
      Physical volume "/dev/xvdh" successfully created.
    #

    Example from a Nitro instance

    # pvcreate /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme1n1
      Physical volume "/dev/nvme2n1" successfully created.
      Physical volume "/dev/nvme3n1" successfully created.
      Physical volume "/dev/nvme4n1" successfully created.
      Physical volume "/dev/nvme5n1" successfully created.
      Physical volume "/dev/nvme1n1" successfully created.
    #
  3. Create volume groups for SAP HANA data, log, and backup. Ensure that device IDs are associated correctly with the appropriate volume group.

    Example from a non-Nitro instance

    # vgcreate vghanadata /dev/xvdb /dev/xvdc /dev/xvdd
      Volume group "vghanadata" successfully created
    # vgcreate vghanalog /dev/xvdh
      Volume group "vghanalog" successfully created
    # vgcreate vghanaback /dev/xvdf
      Volume group "vghanaback" successfully created
    #

    Example from a Nitro instance

    # vgcreate vghanadata /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
      Volume group "vghanadata" successfully created
    # vgcreate vghanalog /dev/nvme5n1
      Volume group "vghanalog" successfully created
    # vgcreate vghanaback /dev/nvme1n1
      Volume group "vghanaback" successfully created
    #
  4. Create a logical volume for SAP HANA data.

    In the following command, -i 3 represents stripes based on the number of volumes that are used for a HANA data volume group. Adjust the number depending on the number of volumes that are allocated to the HANA data volume group, based on instance and storage type.

    # lvcreate -n lvhanadata -i 3 -I 256 -L 2350G vghanadata
      Rounding size 2.29 TiB (601600 extents) up to stripe boundary size 2.29 TiB (601602 extents).
      Logical volume "lvhanadata" created.
    #
  5. Create a logical volume for SAP HANA log.

    In the following command, -i 1 represents stripes based on the number of volumes that are used for a HANA log volume group. Adjust the number depending on the number of volumes that are allocated to the HANA log volume group, based on instance and storage type.

    # lvcreate -n lvhanalog -i 1 -I 256 -L 512G vghanalog
      Ignoring stripesize argument with single stripe.
      Logical volume "lvhanalog" created.
    #
  6. Create a logical volume for SAP HANA backup.

    # lvcreate -n lvhanaback -i 1 -I 256 -L 4095G vghanaback
      Ignoring stripesize argument with single stripe.
      Logical volume "lvhanaback" created.
    #
  7. Construct XFS file systems with the newly created logical volumes for HANA data, log, and backup by using the following commands:

    # mkfs.xfs -f /dev/mapper/vghanadata-lvhanadata
    # mkfs.xfs -f /dev/mapper/vghanalog-lvhanalog
    # mkfs.xfs -f /dev/mapper/vghanaback-lvhanaback
  8. Construct XFS file systems for HANA shared and HANA binaries.

    # mkfs.xfs -f /dev/xvde -L HANA_SHARE
    # mkfs.xfs -f /dev/xvdr -L USR_SAP

    Note

    On Nitro-based instance types, device names can change during instance restarts. To prevent file system mount issues, it is important to create labels for devices that aren’t part of logical volumes so that the devices can be mounted by using labels instead of the actual device names.

  9. Create directories for HANA data, log, backup, shared, and binaries.

    # mkdir /hana /hana/data /hana/log /hana/shared /backup /usr/sap
  10. Use the echo command to add entries to the /etc/fstab file with the following mount options to automatically mount these file systems during restart.

    # echo "/dev/mapper/vghanadata-lvhanadata /hana/data xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab # echo "/dev/mapper/vghanalog-lvhanalog /hana/log xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab # echo "/dev/mapper/vghanaback-lvhanaback /backup xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab # echo "/dev/disk/by-label/HANA_SHARE /hana/shared xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab # echo "/dev/disk/by-label/USR_SAP /usr/sap xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
  11. Mount the file systems.

    # mount -a
  12. Check to make sure that all file systems are mounted appropriately; for example, here is the output from an x1.32xlarge system:

    # df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/xvda2                          50G  1.8G   49G   4% /
    devtmpfs                           961G     0  961G   0% /dev
    tmpfs                              960G     0  960G   0% /dev/shm
    tmpfs                              960G   17M  960G   1% /run
    tmpfs                              960G     0  960G   0% /sys/fs/cgroup
    tmpfs                              192G     0  192G   0% /run/user/1000
    /dev/mapper/vghanadata-lvhanadata  2.3T   34M  2.3T   1% /hana/data
    /dev/mapper/vghanalog-lvhanalog    512G   33M  512G   1% /hana/log
    /dev/mapper/vghanaback-lvhanaback  4.0T   33M  4.0T   1% /backup
    /dev/xvde                          1.0T   33M  1.0T   1% /hana/shared
    /dev/xvdr                           50G   33M   50G   1% /usr/sap
    #
  13. At this time, we recommend rebooting the system and confirming that all the file systems mount automatically after the restart.

  14. If you are deploying a scale-out workload, follow the steps specified in Configure NFS for scale-out workloads to set up SAP HANA shared and backup NFS file systems with Amazon EFS.

    If you are not deploying a scale-out workload, you can now proceed with your SAP HANA software installation.

Configure NFS for scale-out workloads

Amazon EFS provides easy-to-set-up, scalable, and highly available shared file systems that can be mounted with the NFSv4 client. For scale-out workloads, we recommend using Amazon EFS for SAP HANA shared and backup volumes. You can choose between different performance options for your file systems depending on your requirements. We recommend starting with the General Purpose and Provisioned Throughput options, with approximately 100 MiB/s to 200 MiB/s throughput. To set up your file systems, do the following:

  1. Install the nfs-utils package in all the nodes in your scale-out cluster.

    • For RHEL, use yum install nfs-utils.

    • For SLES, use zypper install nfs-utils.

  2. Create two Amazon EFS file systems and mount targets for SAP HANA shared and backup in your target VPC and subnet. For detailed steps, follow the instructions specified in the AWS documentation.

  3. After the file systems are created, mount the newly created file systems in all the nodes by using the following commands:

    # mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS DNS Name:/ /hana/shared
    # mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS DNS Name:/ /backup

    Note

    If you have trouble mounting the NFS file systems, you might need to adjust your security groups to allow access to port 2049. For details, see Security Groups for Amazon EC2 Instances and Mount Targets in the AWS documentation.
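
    For example, with the AWS CLI you could allow inbound NFS traffic on the mount target's security group from the security group used by your SAP HANA nodes. The security group IDs below are placeholders; substitute your own:

    ```shell
    # Allow inbound NFS (TCP 2049) on the EFS mount target's security group
    # from the SAP HANA nodes' security group. Both IDs are examples only.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 2049 \
        --source-group sg-0fedcba9876543210
    ```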

  4. Add NFS mount entries to the /etc/fstab file in all the nodes to automatically mount these file systems during system restart; for example:

    # echo "EFS DNS Name:/ /hana/shared nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
    # echo "EFS DNS Name:/ /backup nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
  5. Set appropriate permissions and ownership for your target mount points.
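
    As a sketch of what this might look like: permissions are often opened up for the installation and then tightened to the SAP HANA administration user afterwards. The hdbadm user below is a hypothetical stand-in for your real <sid>adm user, which exists only after the SAP HANA installation creates it; adjust to your environment.

    ```shell
    # Make the shared and backup mount points writable for the installation.
    chmod 775 /hana/shared /backup

    # After installation, assign ownership to the SAP HANA admin user.
    # "hdbadm" is an example; use your actual <sid>adm user and sapsys group.
    chown hdbadm:sapsys /hana/shared /backup
    ```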