Automate application-consistent snapshots with pre and post scripts - Amazon EBS


Automate application-consistent snapshots with pre and post scripts

You can automate application-consistent snapshots with Amazon Data Lifecycle Manager by enabling pre and post scripts in snapshot lifecycle policies that target instances.

Amazon Data Lifecycle Manager integrates with AWS Systems Manager (Systems Manager) to support application-consistent snapshots. Amazon Data Lifecycle Manager uses Systems Manager (SSM) command documents that include pre and post scripts to automate the actions needed to complete application-consistent snapshots. Before initiating snapshot creation, Amazon Data Lifecycle Manager runs the commands in the pre script to freeze and flush I/O. After initiating snapshot creation, Amazon Data Lifecycle Manager runs the commands in the post script to thaw I/O.
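
On a Linux instance, this freeze, flush, and thaw sequence ultimately comes down to a pair of filesystem commands wrapped around snapshot initiation, which is what the sample SSM documents later in this section implement. The sketch below is a conceptual illustration only, for a single hypothetical data volume mounted at /data; the full sample documents additionally handle multiple mount points, error codes, and an automatic thaw safety net.

  #!/bin/bash
  # Conceptual pre/post script sketch for one data volume mounted at /data (hypothetical path).

  pre_script() {
      sync                      # flush dirty pages to disk
      sudo fsfreeze -f /data    # freeze I/O so the snapshot captures a consistent state
  }

  post_script() {
      sudo fsfreeze -u /data    # thaw I/O once snapshot creation has been initiated
  }

  case "$1" in
      pre-script)  pre_script ;;
      post-script) post_script ;;
  esac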

Using Amazon Data Lifecycle Manager, you can automate application-consistent snapshots for the following:

  • Windows applications using Volume Shadow Copy Service (VSS)

  • SAP HANA using an AWS managed SSM document. For more information, see Amazon EBS snapshots for SAP HANA.

  • Self-managed databases, such as MySQL, PostgreSQL, or InterSystems IRIS, using SSM document templates

Getting started with application-consistent snapshots

This section explains the steps you need to follow to automate application-consistent snapshots using Amazon Data Lifecycle Manager.

You need to prepare the targeted instances for application-consistent snapshots using Amazon Data Lifecycle Manager. Do one of the following, depending on your use case.

Prepare for VSS Backups
To prepare your target instances for VSS backups
  1. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

    For more information, see Manually installing SSM Agent on Amazon EC2 instances for Windows.

  2. Ensure that SSM Agent is running. For more information, see Checking SSM Agent status and starting the agent.

  3. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.

  4. Ensure that the system requirements for VSS backups are met.

  5. Attach a VSS-enabled instance profile to the target instances.

  6. Install the VSS components.
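
    If you prefer to script this final step, the VSS components can typically be installed through Systems Manager Run Command using the AWS-ConfigureAWSPackage document with the AwsVssComponents package. The AWS CLI sketch below illustrates one way to do this; the instance ID is a placeholder.

      $ aws ssm send-command \
          --document-name "AWS-ConfigureAWSPackage" \
          --instance-ids "i-1234567890abcdef0" \
          --parameters '{"action":["Install"],"name":["AwsVssComponents"]}' \
          --comment "Install VSS components for application-consistent snapshots"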

Prepare for SAP HANA backups
To prepare your target instances for SAP HANA backups
  1. Set up the SAP HANA environment on your target instances.

    1. Set up your instance with SAP HANA. If you don't already have an existing SAP HANA environment, you can refer to the SAP HANA Environment Setup on AWS.

    2. Log in to the SystemDB as a suitable administrator user.

    3. Create a database backup user to use with Amazon Data Lifecycle Manager.

      CREATE USER username PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;

      For example, the following command creates a user named dlm_user with the password password.

      CREATE USER dlm_user PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;
    4. Assign the BACKUP OPERATOR role to the database backup user that you created in the previous step.

      GRANT BACKUP OPERATOR TO username

      For example, the following command assigns the role to the user named dlm_user.

      GRANT BACKUP OPERATOR TO dlm_user
    5. Log in to the operating system as the administrator, for example sidadm.

    6. Create an hdbuserstore entry to store connection information so that the SAP HANA SSM document can connect to SAP HANA without users having to enter the information.

      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:3hana_instance_number13 username password

      For example:

      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:30013 dlm_user password
    7. Test the connection.

      hdbsql -U DLM_HANADB_SNAPSHOT_USER "select * from dummy"
  2. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

    For more information, see Manually installing SSM Agent on Amazon EC2 instances for Linux.

  3. Ensure that SSM Agent is running. For more information, see Checking SSM Agent status and starting the agent. For a quick way to check the agent status on Linux, see the example after this list.

  4. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.
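
On most systemd-based Linux distributions, for example, you can confirm the agent status with the commands below before continuing. The service name amazon-ssm-agent is the common default; on some distributions the agent runs under a different unit name (for example, snap-based installations).

  $ sudo systemctl status amazon-ssm-agent
  $ systemctl is-active amazon-ssm-agent && echo "SSM Agent is running"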

Prepare for custom SSM documents
To prepare your target instances for custom SSM documents
  1. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

  2. Ensure that SSM Agent is running. For more information, see Checking SSM Agent status and starting the agent.

  3. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.

Note

This step is only required for custom SSM documents. It is not required for VSS Backup or SAP HANA. For VSS Backups and SAP HANA, Amazon Data Lifecycle Manager uses AWS managed SSM documents.

If you are automating application-consistent snapshots for a self-managed database, such as MySQL, PostgreSQL, or InterSystems IRIS, you must create an SSM command document that includes a pre script to freeze and flush I/O before snapshot creation is initiated, and a post script to thaw I/O after snapshot creation is initiated.

If your MySQL, PostgreSQL, or InterSystems IRIS database uses standard configurations, you can create an SSM command document using the sample SSM document content below. If your MySQL, PostgreSQL, or InterSystems IRIS database uses a non-standard configuration, you can use the sample content below as a starting point for your SSM command document, and then customize it to meet your requirements. Or, if you want to create a new SSM document from scratch, you can use the empty document template below and add your pre and post commands in the appropriate document sections.

Note the following:
  • You are responsible for ensuring that the SSM document performs the correct and required actions for your database configuration.

  • Snapshots are guaranteed to be application-consistent only if the pre and post scripts in your SSM document successfully freeze, flush, and thaw I/O.

  • The SSM document must include the required fields for allowedValues, including pre-script, post-script, and dry-run. Amazon Data Lifecycle Manager will run commands on your instance based on the contents of those sections. If your SSM document does not have those sections, Amazon Data Lifecycle Manager will treat it as a failed execution. A minimal sketch of the required parameter is shown after this list.
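
For reference, the fragment below shows only the command parameter that Amazon Data Lifecycle Manager expects, taken from the structure of the sample documents that follow; the rest of the document (schemaVersion, description, executionId, and mainSteps) is omitted here for brevity.

  parameters:
    command:
      # Data Lifecycle Manager passes one of these values when it invokes the document.
      # 'dry-run' lets you validate the document without running any freeze/thaw commands.
      type: String
      default: 'dry-run'
      description: (Required) Specifies whether pre-script and/or post-script should be executed.
      allowedValues:
        - pre-script
        - post-script
        - dry-run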

MySQL sample document content
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: Amazon Data Lifecycle Manager Pre/Post script for MySQL databases parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run MySQL Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###=================================================================### ### Global variables ###=================================================================### START=$(date +%s) # For testing this script locally, replace the below with OPERATION=$1. OPERATION={{ command }} FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy' FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument' FS_BUSY_ERROR='mount point is busy' # Auto thaw is a fail safe mechanism to automatically unfreeze the application after the # duration specified in the global variable below. 
Choose the duration based on your # database application's tolerance to freeze. export AUTO_THAW_DURATION_SECS="60" # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # Check if filesystem is already frozen. No error code indicates that filesystem # is not currently frozen and that the pre-script can proceed with freezing the filesystem. check_fs_freeze # Execute the DB commands to flush the DB in preparation for snapshot snap_db # Freeze the filesystem. No error code indicates that filesystem was succefully frozen freeze_fs echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds." $(nohup bash -c execute_schedule_auto_thaw >/dev/null 2>&1 &) } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen. unfreeze_fs thaw_db } # Execute Auto Thaw to automatically unfreeze the application after the duration configured # in the AUTO_THAW_DURATION_SECS global variable. execute_schedule_auto_thaw() { sleep ${AUTO_THAW_DURATION_SECS} execute_post_script } # Disable Auto Thaw if it is still enabled execute_disable_auto_thaw() { echo "INFO: Attempting to disable auto thaw if enabled" auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid) if [ -n "${auto_thaw_pgid}" ]; then echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}" sudo pkill -g ${auto_thaw_pgid} rc=$? if [ ${rc} != 0 ]; then echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}" else echo "INFO: Auto Thaw has been disabled" fi fi } # Iterate over all the mountpoints and check if filesystem is already in freeze state. # Return error code 204 if any of the mount points are already frozen. check_fs_freeze() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi error_message=$(sudo mount -o remount,noatime $target 2>&1) # Remount will be a no-op without a error message if the filesystem is unfrozen. # However, if filesystem is already frozen, remount will fail with busy error message. if [ $? -ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage" exit 201 fi done } # Iterate over all the mountpoints and freeze the filesystem. freeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze # operations for root and boot mountpoints. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Freezing $target" error_message=$(sudo fsfreeze -f $target 2>&1) if [ $? 
-ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" sudo mysql -e 'UNLOCK TABLES;' exit 204 fi # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage" thaw_db exit 201 fi echo "INFO: Freezing complete on $target" done } # Iterate over all the mountpoints and unfreeze the filesystem. unfreeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, will skip the root and boot mountpoints during unfreeze as well. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Thawing $target" error_message=$(sudo fsfreeze -u $target 2>&1) # Check if filesystem is already unfrozen (thawed). Return error code 204 if filesystem is already unfrozen. if [ $? -ne 0 ]; then if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205" exit 205 fi # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202 echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage" exit 202 fi echo "INFO: Thaw complete on $target" done } snap_db() { # Run the flush command only when MySQL DB service is up and running sudo systemctl is-active --quiet mysqld.service if [ $? -eq 0 ]; then echo "INFO: Execute MySQL Flush and Lock command." sudo mysql -e 'FLUSH TABLES WITH READ LOCK;' # If the MySQL Flush and Lock command did not succeed, return error code 201 to indicate pre-script failure if [ $? -ne 0 ]; then echo "ERROR: MySQL FLUSH TABLES WITH READ LOCK command failed." exit 201 fi sync else echo "INFO: MySQL service is inactive. Skipping execution of MySQL Flush and Lock command." fi } thaw_db() { # Run the unlock command only when MySQL DB service is up and running sudo systemctl is-active --quiet mysqld.service if [ $? -eq 0 ]; then echo "INFO: Execute MySQL Unlock" sudo mysql -e 'UNLOCK TABLES;' else echo "INFO: MySQL service is inactive. Skipping execution of MySQL Unlock command." fi } export -f execute_schedule_auto_thaw export -f execute_post_script export -f unfreeze_fs export -f thaw_db # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script execute_disable_auto_thaw ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
PostgreSQL sample document content
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: Amazon Data Lifecycle Manager Pre/Post script for PostgreSQL databases parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run PostgreSQL Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###===============================================================================### ### Global variables ###===============================================================================### START=$(date +%s) OPERATION={{ command }} FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy' FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument' FS_BUSY_ERROR='mount point is busy' # Auto thaw is a fail safe mechanism to automatically unfreeze the application after the # duration specified in the global variable below. 
Choose the duration based on your # database application's tolerance to freeze. export AUTO_THAW_DURATION_SECS="60" # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # Check if filesystem is already frozen. No error code indicates that filesystem # is not currently frozen and that the pre-script can proceed with freezing the filesystem. check_fs_freeze # Execute the DB commands to flush the DB in preparation for snapshot snap_db # Freeze the filesystem. No error code indicates that filesystem was succefully frozen freeze_fs echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds." $(nohup bash -c execute_schedule_auto_thaw >/dev/null 2>&1 &) } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen unfreeze_fs } # Execute Auto Thaw to automatically unfreeze the application after the duration configured # in the AUTO_THAW_DURATION_SECS global variable. execute_schedule_auto_thaw() { sleep ${AUTO_THAW_DURATION_SECS} execute_post_script } # Disable Auto Thaw if it is still enabled execute_disable_auto_thaw() { echo "INFO: Attempting to disable auto thaw if enabled" auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid) if [ -n "${auto_thaw_pgid}" ]; then echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}" sudo pkill -g ${auto_thaw_pgid} rc=$? if [ ${rc} != 0 ]; then echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}" else echo "INFO: Auto Thaw has been disabled" fi fi } # Iterate over all the mountpoints and check if filesystem is already in freeze state. # Return error code 204 if any of the mount points are already frozen. check_fs_freeze() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi error_message=$(sudo mount -o remount,noatime $target 2>&1) # Remount will be a no-op without a error message if the filesystem is unfrozen. # However, if filesystem is already frozen, remount will fail with busy error message. if [ $? -ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage" exit 201 fi done } # Iterate over all the mountpoints and freeze the filesystem. freeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze # operations for root and boot mountpoints. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Freezing $target" error_message=$(sudo fsfreeze -f $target 2>&1) if [ $? 
-ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage" exit 201 fi echo "INFO: Freezing complete on $target" done } # Iterate over all the mountpoints and unfreeze the filesystem. unfreeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, will skip the root and boot mountpoints during unfreeze as well. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Thawing $target" error_message=$(sudo fsfreeze -u $target 2>&1) # Check if filesystem is already unfrozen (thawed). Return error code 204 if filesystem is already unfrozen. if [ $? -ne 0 ]; then if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205" exit 205 fi # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202 echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage" exit 202 fi echo "INFO: Thaw complete on $target" done } snap_db() { # Run the flush command only when PostgreSQL DB service is up and running sudo systemctl is-active --quiet postgresql if [ $? -eq 0 ]; then echo "INFO: Execute Postgres CHECKPOINT" # PostgreSQL command to flush the transactions in memory to disk sudo -u postgres psql -c 'CHECKPOINT;' # If the PostgreSQL Command did not succeed, return error code 201 to indicate pre-script failure if [ $? -ne 0 ]; then echo "ERROR: Postgres CHECKPOINT command failed." exit 201 fi sync else echo "INFO: PostgreSQL service is inactive. Skipping execution of CHECKPOINT command." fi } export -f execute_schedule_auto_thaw export -f execute_post_script export -f unfreeze_fs # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script execute_disable_auto_thaw ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
InterSystems IRIS sample document content
###===============================================================================### # MIT License # # Copyright (c) 2024 InterSystems # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature for InterSystems IRIS. parameters: executionId: type: String default: None description: Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: type: String # Data Lifecycle Manager will trigger the pre-script and post-script actions. You can also use this SSM document with 'dry-run' for manual testing purposes. default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. #The following allowedValues will allow Data Lifecycle Manager to successfully trigger pre and post script actions. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run InterSystems IRIS Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Global variables ###===============================================================================### DOCKER_NAME=iris LOGDIR=./ EXIT_CODE=0 OPERATION={{ command }} START=$(date +%s) # Check if Docker is installed # By default if Docker is present, script assumes that InterSystems IRIS is running in Docker # Leave only the else block DOCKER_EXEC line, if you run InterSystems IRIS non-containerised (and Docker is present). # Script assumes irissys user has OS auth enabled, change the OS user or supply login/password depending on your configuration. 
if command -v docker &> /dev/null then DOCKER_EXEC="docker exec $DOCKER_NAME" else DOCKER_EXEC="sudo -i -u irissys" fi # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # find all iris running instances iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5- | awk '{print $1}') echo "`date`: Running iris instances $iris_instances" # Only for running instances for INST in $iris_instances; do echo "`date`: Attempting to freeze $INST" # Detailed instances specific log LOGFILE=$LOGDIR/$INST-pre_post.log #check Freeze status before starting $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()" freeze_status=$? if [ $freeze_status -eq 5 ]; then echo "`date`: ERROR: $INST IS already FROZEN" EXIT_CODE=204 else echo "`date`: $INST is not frozen" # Freeze # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).ExternalFreeze(\"$LOGFILE\",,,,,,600,,,300)" status=$? case $status in 5) echo "`date`: $INST IS FROZEN" ;; 3) echo "`date`: $INST FREEZE FAILED" EXIT_CODE=201 ;; *) echo "`date`: ERROR: Unknown status code: $status" EXIT_CODE=201 ;; esac echo "`date`: Completed freeze of $INST" fi done echo "`date`: Pre freeze script finished" } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # find all iris running instances iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5- | awk '{print $1}') echo "`date`: Running iris instances $iris_instances" # Only for running instances for INST in $iris_instances; do echo "`date`: Attempting to thaw $INST" # Detailed instances specific log LOGFILE=$LOGDIR/$INST-pre_post.log #check Freeze status befor starting $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()" freeze_status=$? if [ $freeze_status -eq 5 ]; then echo "`date`: $INST is in frozen state" # Thaw # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalThaw(\"$LOGFILE\")" status=$? case $status in 5) echo "`date`: $INST IS THAWED" $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalSetHistory(\"$LOGFILE\")" ;; 3) echo "`date`: $INST THAW FAILED" EXIT_CODE=202 ;; *) echo "`date`: ERROR: Unknown status code: $status" EXIT_CODE=202 ;; esac echo "`date`: Completed thaw of $INST" else echo "`date`: ERROR: $INST IS already THAWED" EXIT_CODE=205 fi done echo "`date`: Post thaw script finished" } # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." # return failure EXIT_CODE=1 ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds." 
exit $EXIT_CODE

For more information, see the GitHub repository.

Empty document template
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###===============================================================================### ### Global variables ###===============================================================================### START=$(date +%s) # For testing this script locally, replace the below with OPERATION=$1. 
OPERATION={{ command }} # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" } # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."

After you have your SSM document content, use one of the following procedures to create the custom SSM document.

Console
To create an SSM command document
  1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.

  2. In the navigation pane, choose Documents, and then choose Create document, Command or Session.

  3. For Name, enter a descriptive name for the document.

  4. For Target type, select /AWS::EC2::Instance.

  5. For Document type, select Command.

  6. In the Content field, select YAML, and then paste the document content.

  7. In the Document tags section, add a tag with a tag key of DLMScriptsAccess, and a tag value of true.

    Important

    The DLMScriptsAccess:true tag is required by the AWSDataLifecycleManagerSSMFullAccess AWS managed policy used in Step 3: Prepare the Amazon Data Lifecycle Manager IAM role. The policy uses the aws:ResourceTag condition key to restrict access to SSM documents that have this tag.

  8. Choose Create document.

AWS CLI
To create an SSM command document

Use the create-document command. For --name, specify a descriptive name for the document. For --document-type, specify Command. For --content, specify the path to the .yaml file that contains the SSM document content. For --tags, specify "Key=DLMScriptsAccess,Value=true".

$ aws ssm create-document \
    --content file://path/to/file/documentContent.yaml \
    --name "document_name" \
    --document-type "Command" \
    --document-format YAML \
    --tags "Key=DLMScriptsAccess,Value=true"
Note

This step is only required if:

  • You create or update a pre/post script-enabled snapshot policy that uses a custom IAM role.

  • You use the command line to create or update a pre/post script-enabled snapshot policy that uses the default IAM role.

If you use the console to create or update a pre/post script-enabled snapshot policy that uses the default role for managing snapshots (AWSDataLifecycleManagerDefaultRole), skip this step. In this case, we automatically attach the AWSDataLifecycleManagerSSMFullAccess policy to that role.

You must ensure that the IAM role that you use for the policy grants Amazon Data Lifecycle Manager permission to perform the SSM actions required to run pre and post scripts on the instances targeted by the policy.

Amazon Data Lifecycle Manager provides a managed policy (AWSDataLifecycleManagerSSMFullAccess) that includes the required permissions. You can attach this policy to your IAM role for managing snapshots to ensure that the role includes the permissions.

Important

The AWSDataLifecycleManagerSSMFullAccess managed policy uses the aws:ResourceTag condition key to restrict access to specific SSM documents when using pre and post scripts. To allow Amazon Data Lifecycle Manager to access your SSM documents, you must ensure that your SSM documents are tagged with DLMScriptsAccess:true.

Alternatively, you can manually create a custom policy or assign the required permissions directly to the IAM role that you use. You can use the same permissions that are defined in the AWSDataLifecycleManagerSSMFullAccess managed policy; however, the aws:ResourceTag condition key is optional. If you decide not to include that condition key, you do not need to tag your SSM documents with DLMScriptsAccess:true.
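
As an illustration only, a custom policy that follows this approach might look roughly like the sketch below. This is not the AWSDataLifecycleManagerSSMFullAccess managed policy itself; the statement layout and action list are assumptions based on the SSM operations described in this section (sending the pre/post script command to the targeted instances and reading the command result), so review them against the managed policy before use.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "SendCommandWithTaggedSsmDocuments",
              "Effect": "Allow",
              "Action": "ssm:SendCommand",
              "Resource": "arn:aws:ssm:*:*:document/*",
              "Condition": {
                  "StringEquals": { "aws:ResourceTag/DLMScriptsAccess": "true" }
              }
          },
          {
              "Sid": "SendCommandToTargetInstances",
              "Effect": "Allow",
              "Action": "ssm:SendCommand",
              "Resource": "arn:aws:ec2:*:*:instance/*"
          },
          {
              "Sid": "ReadCommandStatus",
              "Effect": "Allow",
              "Action": [
                  "ssm:GetCommandInvocation",
                  "ssm:ListCommands",
                  "ssm:DescribeInstanceInformation"
              ],
              "Resource": "*"
          }
      ]
  }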

Use one of the following methods to add the AWSDataLifecycleManagerSSMFullAccess policy to your IAM role.

Console
To attach the managed policy to your custom role
  1. Open the IAM console at https://console.aws.amazon.com/iam/.

  2. In the navigation pane, choose Roles.

  3. Search for and select your custom role for managing snapshots.

  4. On the Permissions tab, choose Add permissions, Attach policies.

  5. Search for and select the AWSDataLifecycleManagerSSMFullAccess managed policy, and then choose Add permissions.

AWS CLI
To attach the managed policy to your custom role

Use the attach-role-policy command. For --role-name, specify the name of your custom role. For --policy-arn, specify arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess.

$ aws iam attach-role-policy \
    --policy-arn arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess \
    --role-name your_role_name

To automate application-consistent snapshots, you must create a snapshot lifecycle policy that targets instances, and configure pre and post scripts for that policy.

Console
To create a snapshot lifecycle policy
  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  2. In the navigation pane, choose Elastic Block Store, Lifecycle Manager, and then choose Create lifecycle policy.

  3. On the Select policy type screen, choose EBS snapshot policy, and then choose Next.

  4. In the Target resources section, do the following:

    1. For Target resource types, choose Instance.

    2. For Target resource tags, specify the resource tags that identify the instances to back up. Only resources that have the specified tags will be backed up.

  5. For IAM role, choose AWSDataLifecycleManagerDefaultRole (the default role for managing snapshots), or choose a custom role that you created and prepared for pre and post scripts.

  6. Configure the schedules and additional options as needed. We recommend that you schedule snapshot creation times for time periods that match your workload, such as during maintenance windows.

    For SAP HANA, we recommend that you enable fast snapshot restore.

    Note

    If you enable a schedule for VSS Backups, you can't enable Exclude specific data volumes or Copy tags from source.

  7. In the Pre and post scripts section, select Enable pre and post scripts, and then do the following, depending on your workload:

    • To create application-consistent snapshots of your Windows applications, select VSS Backup.

    • To create application-consistent snapshots of your SAP HANA workloads, select SAP HANA.

    • To create application-consistent snapshots of all other databases and workloads, including your self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, using a custom SSM document, select Custom SSM document.

      1. For Automate option, choose Pre and post scripts.

      2. For SSM document, select the SSM document that you prepared.

  8. Depending on the options you selected, configure the following additional options:

    • Script timeout - (Custom SSM document only) The timeout period after which Amazon Data Lifecycle Manager fails the script run attempt if it has not completed. If a script does not complete within its timeout period, Amazon Data Lifecycle Manager fails the attempt. The timeout period applies to the pre and post scripts individually. The minimum and default timeout period is 10 seconds, and the maximum timeout period is 120 seconds.

    • Retry failed scripts - Select this option to retry scripts that do not complete within their timeout period. If the pre script fails, Amazon Data Lifecycle Manager retries the entire snapshot creation process, including running the pre and post scripts. If the post script fails, Amazon Data Lifecycle Manager retries the post script only; in this case, the pre script will have completed and the snapshot might have been created.

    • Default to crash-consistent snapshots - Select this option to default to crash-consistent snapshots if the pre script fails to run. This is the default snapshot creation behavior for Amazon Data Lifecycle Manager if pre and post scripts are not enabled. If you enabled retries, Amazon Data Lifecycle Manager defaults to crash-consistent snapshots only after all retry attempts have been exhausted. If the pre script fails and you do not default to crash-consistent snapshots, Amazon Data Lifecycle Manager will not create snapshots for the instance during that schedule run.

      Note

      If you are creating snapshots for SAP HANA, you might want to disable this option. Crash-consistent snapshots of SAP HANA workloads can't be restored in the same manner.

  9. Choose Create default policy.

    Note

    If you get the Role with name AWSDataLifecycleManagerDefaultRole already exists error, see Troubleshooting for more information.

AWS CLI
To create a snapshot lifecycle policy

Use the create-lifecycle-policy command, and include the Scripts parameter in CreateRule. For more information about the parameters, see the Amazon Data Lifecycle Manager API Reference.

$ aws dlm create-lifecycle-policy \
    --description "policy_description" \
    --state ENABLED \
    --execution-role-arn iam_role_arn \
    --policy-details file://policyDetails.json

Where policyDetails.json includes one of the following, depending on your use case:

  • VSS Backup

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "ExecutionHandler":"AWS_VSS_BACKUP", "ExecuteOperationOnScriptFailure":true|false, "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }
  • SAP HANA backups

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "Stages": ["PRE","POST"], "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER", "ExecutionHandler":"AWSSystemsManagerSAP-CreateDLMSnapshotForSAPHANA", "ExecuteOperationOnScriptFailure":true|false, "ExecutionTimeout":timeout_in_seconds (10-120), "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }
  • Custom SSM document

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "Stages": ["PRE","POST"], "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER", "ExecutionHandler":"ssm_document_name|arn", "ExecuteOperationOnScriptFailure":true|false, "ExecutionTimeout":timeout_in_seconds (10-120), "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }

Considerations for VSS Backups with Amazon Data Lifecycle Manager

With Amazon Data Lifecycle Manager, you can back up and restore VSS (Volume Shadow Copy Service)-enabled Windows applications running on Amazon EC2 instances. If the application has a VSS writer registered with Windows VSS, Amazon Data Lifecycle Manager creates a snapshot that will be application-consistent for that application.

Note

Amazon Data Lifecycle Manager currently supports application-consistent snapshots of resources running on Amazon EC2 only, specifically for backup scenarios where application data can be restored by replacing an existing instance with a new instance created from the backup. Not all instance types or applications are supported for VSS backups. For more information, see What is AWS VSS? in the Amazon EC2 User Guide.

Unsupported instance types

The following Amazon EC2 instance types are not supported for VSS backups. If your policy targets one of these instance types, Amazon Data Lifecycle Manager might still create VSS backups, but the snapshots might not be tagged with the required system tags. Without these tags, the snapshots will not be managed by Amazon Data Lifecycle Manager after creation. You might need to delete those snapshots manually.

  • T3: t3.nano | t3.micro

  • T3a: t3a.nano | t3a.micro

  • T2: t2.nano | t2.micro

Shared responsibility for application-consistent snapshots

You must ensure that:
  • SSM Agent is installed, up to date, and running on your target instances

  • Systems Manager has permissions to perform the required actions on the target instances

  • Amazon Data Lifecycle Manager has permissions to perform the Systems Manager actions required to run pre and post scripts on the target instances.

  • For custom workloads, such as self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, the SSM document that you use includes the correct and required actions for freezing, flushing, and thawing I/O for your database configuration.

  • The snapshot creation times align with your workload schedule. For example, try to schedule snapshot creation during scheduled maintenance windows.

Amazon Data Lifecycle Manager ensures that:
  • Snapshot creation is initiated within 60 minutes of the scheduled snapshot creation time.

  • Pre scripts run before snapshot creation is initiated.

  • Post scripts run after the pre script succeeds and snapshot creation has been initiated. Amazon Data Lifecycle Manager runs the post script only if the pre script succeeds. If the pre script fails, Amazon Data Lifecycle Manager will not run the post script.

  • Snapshots are tagged with the appropriate tags on creation.

  • CloudWatch metrics and events are emitted when scripts are initiated, and when they fail or succeed.