
Crash the primary SAP HANA database on node 1

Description: Simulate a complete breakdown of the primary database system.

Run node: Primary SAP HANA database node

Run steps:

  • Stop the primary database system using the following command as <sid>adm.

    prihana:~ # sudo su - hdbadm
    hdbadm@prihana:/usr/sap/HDB/HDB00> HDB kill -9
    hdbenv.sh: Hostname prihana defined in $SAP_RETRIEVAL_PATH=/usr/sap/HDB/HDB00/prihana differs from host name defined on command line.
    hdbenv.sh: Error: Instance not found for host -9
    killing HDB processes:
    kill -9 6011 /usr/sap/HDB/HDB00/prihana/trace/hdb.sapHDB_HDB00 -d -nw -f /usr/sap/HDB/HDB00/prihana/daemon.ini pf=/usr/sap/HDB/SYS/profile/HDB_HDB00_prihana
    kill -9 6027 hdbnameserver
    kill -9 6137 hdbcompileserver
    kill -9 6139 hdbpreprocessor
    kill -9 6484 hdbindexserver -port 30003
    kill -9 6494 hdbxsengine -port 30007
    kill -9 7068 hdbwebdispatcher
    kill orphan HDB processes:
    kill -9 6027 [hdbnameserver] <defunct>
    kill -9 6484 [hdbindexserver] <defunct>
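
Before checking the cluster reaction, you can confirm that the instance processes are really gone. The following is a minimal check as <sid>adm, assuming the SID HDB and instance number 00 shown above (sapstartsrv normally keeps running after HDB kill, so sapcontrol still responds):

    # List the HDB processes of this instance; the killed services should no longer appear as running
    hdbadm@prihana:/usr/sap/HDB/HDB00> HDB info

    # Query the process list through sapstartsrv; the killed services should no longer be reported as GREEN/Running
    hdbadm@prihana:/usr/sap/HDB/HDB00> sapcontrol -nr 00 -function GetProcessList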

Expected result:

  • The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary SAP HANA database (on node 2) to take over as primary.

    prihana:~ # crm status
    Stack: corosync
    Current DC: prihana (version 1.1.18+20180430.b12c320f5-3.24.1-b12c320f5) - partition with quorum
    Last updated: Thu Nov 12 11:53:21 2020
    Last change: Thu Nov 12 11:53:19 2020 by root via crm_attribute on sechana

    2 nodes configured
    6 resources configured

    Online: [ prihana sechana ]

    Full list of resources:

     res_AWS_STONITH        (stonith:external/ec2):        Started prihana
     res_AWS_IP             (ocf::suse:aws-vpc-move-ip):   Started sechana
     Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
         Started: [ prihana sechana ]
     Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
         Masters: [ sechana ]
         Slaves: [ prihana ]

    Failed Actions:
    * rsc_SAPHana_HDB_HDB00_monitor_60000 on prihana 'master (failed)' (9): call=50, status=complete, exitreason='',
        last-rc-change='Thu Nov 12 11:51:45 2020', queued=0ms, exec=0ms

  • The overlay IP address is migrated to the new primary (on node 2), which you can confirm with the checks sketched after this list.

  • With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA database and automatically registers it against the new primary.
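
To double-check these results outside of crm status, you can inspect the replication state, the overlay IP, and the cluster attributes directly. The following is a minimal sketch, assuming the SID HDB, instance number 00, and the SAPHanaSR package used by this cluster setup (the overlay IP address itself is specific to your environment):

    # On node 2, as <sid>adm: the node should now report itself as the system replication primary
    hdbadm@sechana:/usr/sap/HDB/HDB00> hdbnsutil -sr_state

    # On node 2, as root: the overlay IP should be attached to a local interface
    sechana:~ # ip addr show

    # On either node, as root: review the replication attributes maintained by the SAPHana resource agents
    prihana:~ # SAPHanaSR-showAttr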

Recovery procedure:

  • Clean up the cluster “failed actions” on node 1 as root.

    prihana:~ # crm resource cleanup rsc_SAPHana_HDB_HDB00 prihana
    Cleaned up rsc_SAPHana_HDB_HDB00:0 on prihana
    Cleaned up rsc_SAPHana_HDB_HDB00:1 on prihana
    Waiting for 1 replies from the CRMd.
    OK

  • After the resource cleanup, the cluster “failed actions” entries are cleared and no longer appear in the cluster status.
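
To confirm that the cleanup took effect, you can re-run the status command as root; the Failed Actions section should no longer be shown:

    # Both nodes should be online, with no failed actions remaining
    prihana:~ # crm status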