Troubleshooting Amazon Managed Workflows for Apache Airflow
This topic describes common issues and errors you might encounter when using Apache Airflow on Amazon Managed Workflows for Apache Airflow (MWAA), and recommended steps to resolve them.
Contents
- Troubleshooting: DAGs, Operators, Connections, and other issues in Apache Airflow v2
- Troubleshooting: DAGs, Operators, Connections, and other issues in Apache Airflow v1
- Troubleshooting: Creating and updating an Amazon MWAA environment
    - Updating requirements.txt
    - Plugins
    - Create bucket
    - Create environment
        - I tried to create an environment and it's stuck in the "Creating" state
        - I tried to create an environment but it shows the status as "Create failed"
        - I tried to select a VPC and received a "Network Failure" error
        - I tried to create an environment and received a service, partition, or resource "must be passed" error
        - I tried to create an environment and it shows the status as "Available", but when I try to access the Airflow UI an "Empty Reply from Server" or "502 Bad Gateway" error is shown
        - I tried to create an environment and my user name is a bunch of random character names
    - Update environment
    - Access environment
- Troubleshooting: CloudWatch Logs and CloudTrail errors
    - Logs
        - I can't see my task logs, or I received a 'Reading remote log from CloudWatch log_group' error
        - Tasks are failing without any logs
        - I see a 'ResourceAlreadyExistsException' error in CloudTrail
        - I see an 'Invalid request' error in CloudTrail
        - I see a 'Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory"' error in Apache Airflow logs
        - I see psycopg2 'server closed the connection unexpectedly' in my Scheduler logs
        - I see 'Executor reports task instance %s finished (%s) although the task says its %s' in my DAG processing logs
        - I see 'Could not read remote logs from log_group: airflow-{environmentName}-Task log_stream: {DAG_ID}/{TASK_ID}/{time}/{n}.log' in my task logs