AWS WAF Security Automations

Protection Capabilities

Web applications are vulnerable to a variety of attacks, ranging from specially crafted requests designed to exploit a vulnerability or take control of a server, to volumetric attacks designed to take down a website, to bad bots and scrapers programmed to scrape and steal web content.

This solution leverages AWS CloudFormation to quickly and easily configure AWS WAF rules that help block the following common attacks:

  • SQL injection: Attackers insert malicious SQL code into web requests to extract data from your database. This solution is designed to block web requests that contain potentially malicious SQL code (see the rule sketch after this list, which also covers XSS and HTTP floods).

  • Cross-site scripting: Also known as XSS, attackers use vulnerabilities in a benign website as a vehicle to inject malicious client-side scripts into a legitimate user’s web browser. This solution is designed to inspect commonly exploited elements of incoming requests to identify and block XSS attacks.

  • HTTP floods: Web servers and other backend resources are at risk of Distributed Denial of Service (DDoS) attacks, such as HTTP floods. This solution uses a rate-based rule that automatically blocks requests from a client IP address once the number of requests from that address exceeds a configurable threshold.

  • Scanners and probes: Malicious sources scan and probe Internet-facing web applications for vulnerabilities. They send a series of requests that generate HTTP 4xx error codes, and you can use this history to help identify and block malicious source IP addresses. This solution creates an AWS Lambda function that automatically parses Amazon CloudFront or Application Load Balancer access logs, counts the number of bad requests from unique source IP addresses, and updates AWS WAF to block further scans from those addresses (see the log-parser sketch after this list).

  • Known attacker origins (IP reputation lists): A number of organizations maintain reputation lists of IP addresses operated by known attackers, such as spammers, malware distributors, and botnets. This solution leverages the information in these reputation lists to help you block requests from malicious IP addresses (see the list-sync sketch after this list).

  • Bots and scrapers: Operators of publicly accessible web applications have to trust that the clients accessing their content identify themselves accurately, and that they will use services as intended. However, some automated clients, such as content scrapers or bad bots, misrepresent themselves to bypass restrictions. This solution helps you identify and block bad bots and scrapers (see the honeypot sketch after this list).
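
The solution provisions these protections through its CloudFormation template. Purely as an illustration, the following boto3 sketch shows the kinds of AWS WAF (WAFv2) rule statements involved: a SqliMatchStatement for SQL injection, an XssMatchStatement for cross-site scripting, and a RateBasedStatement for HTTP floods. The names, priorities, inspected fields, and the 2,000-request threshold are assumptions for this sketch, not the solution's configured values.

```python
"""Minimal sketch (not the solution's actual template): a web ACL with
SQL injection, XSS, and rate-based rule statements, created via boto3.
All names, priorities, and thresholds here are illustrative assumptions."""
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope requires us-east-1

def visibility(name):
    # Every rule and the web ACL itself need a visibility config for metrics.
    return {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": name,
    }

rules = [
    {   # Block requests whose query string looks like SQL injection.
        # (The real solution inspects several request components.)
        "Name": "SqlInjectionRule",
        "Priority": 10,
        "Action": {"Block": {}},
        "VisibilityConfig": visibility("SqlInjectionRule"),
        "Statement": {
            "SqliMatchStatement": {
                "FieldToMatch": {"QueryString": {}},
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
    },
    {   # Block requests whose query string contains likely XSS payloads.
        "Name": "XssRule",
        "Priority": 20,
        "Action": {"Block": {}},
        "VisibilityConfig": visibility("XssRule"),
        "Statement": {
            "XssMatchStatement": {
                "FieldToMatch": {"QueryString": {}},
                "TextTransformations": [{"Priority": 0, "Type": "HTML_ENTITY_DECODE"}],
            }
        },
    },
    {   # Rate-based rule: block an IP once it exceeds 2,000 requests in the
        # evaluation window (the threshold is configurable in the solution).
        "Name": "HttpFloodRule",
        "Priority": 30,
        "Action": {"Block": {}},
        "VisibilityConfig": visibility("HttpFloodRule"),
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
    },
]

wafv2.create_web_acl(
    Name="SecurityAutomationsSketch",   # hypothetical name
    Scope="CLOUDFRONT",                 # use "REGIONAL" for an Application Load Balancer
    DefaultAction={"Allow": {}},
    VisibilityConfig=visibility("SecurityAutomationsSketch"),
    Rules=rules,
)
```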
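For the scanners-and-probes protection, the core idea is a Lambda function that parses access logs and blocks sources that generate too many errors. Below is a minimal sketch of that idea, assuming CloudFront standard (tab-separated) logs delivered to S3 and a pre-existing WAF IP set; the S3 event wiring, the "ScannersProbes" IP set identifiers, and the 50-error threshold are illustrative, not the solution's actual values.

```python
"""Sketch of the scanners-and-probes idea (not the solution's actual Lambda):
count HTTP 4xx responses per source IP in a CloudFront standard access log
and push offenders into a WAF IP set."""
import gzip
from collections import Counter

import boto3

ERROR_THRESHOLD = 50  # assumed: max 4xx responses before an IP is blocked

def offending_ips(log_lines):
    """Return IPs whose 4xx count exceeds ERROR_THRESHOLD.

    CloudFront standard logs are tab-separated and begin with a '#Fields:'
    header naming each column, so we locate 'c-ip' and 'sc-status' from
    that header instead of hard-coding positions.
    """
    counts = Counter()
    ip_idx = status_idx = None
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # drop the '#Fields:' token
            ip_idx = fields.index("c-ip")
            status_idx = fields.index("sc-status")
            continue
        if line.startswith("#") or ip_idx is None:
            continue
        cols = line.split("\t")
        if cols[status_idx].startswith("4"):   # 400-499 responses
            counts[cols[ip_idx]] += 1
    return [ip for ip, n in counts.items() if n > ERROR_THRESHOLD]

def handler(event, context):
    """Lambda-style entry point: read one gzipped log object from S3,
    then add offenders to a (hypothetical) 'ScannersProbes' IP set."""
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]         # S3 put-notification shape
    body = s3.get_object(Bucket=record["bucket"]["name"],
                         Key=record["object"]["key"])["Body"].read()
    lines = gzip.decompress(body).decode("utf-8").splitlines()

    bad_ips = offending_ips(lines)
    if not bad_ips:
        return

    wafv2 = boto3.client("wafv2", region_name="us-east-1")
    ip_set = wafv2.get_ip_set(Name="ScannersProbes", Scope="CLOUDFRONT",
                              Id="00000000-0000-0000-0000-000000000000")
    addresses = set(ip_set["IPSet"]["Addresses"])
    addresses.update(f"{ip}/32" for ip in bad_ips)   # WAF expects CIDR notation
    wafv2.update_ip_set(Name="ScannersProbes", Scope="CLOUDFRONT",
                        Id="00000000-0000-0000-0000-000000000000",
                        Addresses=sorted(addresses),
                        LockToken=ip_set["LockToken"])
```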
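For the IP reputation lists, the essential operation is downloading a published list and replacing the contents of a WAF IP set with it. A minimal sketch follows, assuming a plain-text feed of CIDR entries (the Spamhaus DROP list is shown as one example); which lists the solution actually consumes, and the IP set identifiers, are configuration details not shown here.

```python
"""Sketch of the IP-reputation-list idea (not the solution's actual code):
download a published reputation list and replace a WAF IP set's contents
with it. The list URL, its line format, and the IP set identifiers are
assumptions for illustration."""
import urllib.request

import boto3

# Example reputation feed; the solution supports several configurable lists.
LIST_URL = "https://www.spamhaus.org/drop/drop.txt"

def fetch_cidrs(url):
    """Parse lines like '192.0.2.0/24 ; SBL123' into plain CIDR strings."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    cidrs = []
    for line in text.splitlines():
        entry = line.split(";")[0].strip()     # drop trailing comments
        if entry and not entry.startswith("#"):
            cidrs.append(entry)
    return cidrs

def sync_reputation_list():
    wafv2 = boto3.client("wafv2", region_name="us-east-1")
    # Hypothetical IP set identifiers.
    name, scope, ip_set_id = "IPReputation", "CLOUDFRONT", "00000000-0000-0000-0000-000000000000"
    current = wafv2.get_ip_set(Name=name, Scope=scope, Id=ip_set_id)
    # update_ip_set replaces the full address list, so this is a one-call sync.
    wafv2.update_ip_set(Name=name, Scope=scope, Id=ip_set_id,
                        Addresses=fetch_cidrs(LIST_URL),
                        LockToken=current["LockToken"])
```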
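For bad bots and scrapers, one common trap that automation of this kind can implement is a honeypot: an endpoint that is disallowed in robots.txt and never linked for human visitors, so any client that requests it is presumed to be a misbehaving bot and its address is added to a block list. The sketch below assumes an API Gateway REST (proxy) integration in front of a Lambda handler; the "BadBots" IP set identifiers are hypothetical.

```python
"""Sketch of a honeypot-style bad-bot trap (a common pattern, not necessarily
the solution's exact implementation): any client that calls this endpoint is
added to a blocked WAF IP set."""
import boto3

IP_SET = {"Name": "BadBots", "Scope": "CLOUDFRONT",
          "Id": "00000000-0000-0000-0000-000000000000"}  # hypothetical identifiers

def handler(event, context):
    # Source IP as exposed by an API Gateway REST (proxy) integration event.
    source_ip = event["requestContext"]["identity"]["sourceIp"]

    wafv2 = boto3.client("wafv2", region_name="us-east-1")
    current = wafv2.get_ip_set(**IP_SET)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.add(f"{source_ip}/32")                     # WAF expects CIDR notation
    wafv2.update_ip_set(**IP_SET, Addresses=sorted(addresses),
                        LockToken=current["LockToken"])

    return {"statusCode": 200, "body": "OK"}
```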