Data protection - Applying Security Practices to a Network Workload on AWS for Communications Service Providers

Data protection

Applying security measures to protect data is an important consideration when running network workloads on AWS. The measures discussed in this section supplement the security features and mechanisms defined for LTE and 5G systems in 3GPP TS 33.401 and TS 33.501, respectively. When deploying telco network workloads, country regulations and compliance frameworks mandate that data is protected at rest, in transit and, in some cases, in processing. In addition, control frameworks must be built around data handling to verify that mechanisms, tooling, and processes are in place to prevent data exposure.

  • Data identification — The fundamental piece supporting data protection is knowing what data you need to protect. Data identification focuses on identifying and documenting the different points where data enters, is processed, and is stored throughout network workloads. It also includes the type of data, such as traffic information, subscriber information, or personally identifiable information (PII). As an example, specific network workload logs may contain identifiable subscriber information, subscriber activity and location, or network function descriptors that may be considered to be intellectual property from a given Independent Software Vendor (ISV).

  • Data classification — The CSP should then build their own data classification framework which creates and defines classification labels (for example, sensitive, non-sensitive, and so on), with the support of examples and a data classification matrix. The data classification matrix is a guide on how to build the network workload architecture and which AWS services can be used.

  • Data tagging — Aligned with the previous data classification, diverse policies for data classification and protection with encryption can be defined using resource tags. As an example, resources can be tagged for the associated network workload, the hosting environment, the existence of subscriber information, or other security considerations. This also enables advanced permissions management through the use of attribute-based access control (for users, services, and systems).
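Tag-based access control can be expressed directly in IAM policy conditions. The following Python sketch builds a hypothetical policy document that allows S3 reads only when the object's classification tag matches the same tag on the calling principal; the tag key `DataClassification` is an illustrative example, not a prescribed AWS schema.

```python
# Illustrative attribute-based access control (ABAC) policy builder.
# The tag key is a hypothetical example; real deployments define their own
# tagging schema aligned with the data classification matrix.

def abac_policy(tag_key: str = "DataClassification") -> dict:
    """Return an IAM policy dict allowing S3 object reads only when the
    object's resource tag matches the same tag on the calling principal."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # IAM resolves the policy variable at request time
                    f"s3:ExistingObjectTag/{tag_key}":
                        f"${{aws:PrincipalTag/{tag_key}}}",
                }
            },
        }],
    }
```

A policy like this can then be attached to a role with standard IAM tooling; the `${aws:PrincipalTag/...}` policy variable and the `s3:ExistingObjectTag/...` condition key are evaluated by IAM on each request.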

  • Data lifecycle management — Based on the previous identification and sensitivity level, define data lifecycle policies, including the data retention duration, data destruction processes, data access management, data transformation, and data sharing. For example, it may be required that you hold sensitive data for a limited amount of time, after which it is automatically destroyed, or anonymized and moved to an archive. In addition, customers can use AWS features that help protect against unintended or accidental data deletion throughout the data lifecycle. For example, Amazon Simple Storage Service (Amazon S3) Object Lock helps prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
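As an illustration of such a policy, the following Python sketch builds an S3 lifecycle rule that archives objects under a given prefix to Amazon S3 Glacier and later expires them. The prefix and durations are placeholder values; the resulting dict follows the rule shape accepted by the S3 `put_bucket_lifecycle_configuration` API.

```python
# Sketch of an S3 lifecycle rule: archive after N days, expire after M days.
# Prefix and durations are placeholders to adapt to the retention policy.

def lifecycle_rule(prefix: str, archive_after_days: int,
                   expire_after_days: int) -> dict:
    """Return one S3 lifecycle rule dict for objects under `prefix`."""
    return {
        "ID": f"retention-{prefix}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        # Move to archival storage once the active retention window passes
        "Transitions": [{"Days": archive_after_days,
                         "StorageClass": "GLACIER"}],
        # Destroy the data at the end of the mandated retention period
        "Expiration": {"Days": expire_after_days},
    }
```

The rule would be applied (for example, with the boto3 `put_bucket_lifecycle_configuration` call) inside a `{"Rules": [...]}` wrapper.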

  • Identification and classification automation — Automation supports the implementation of correct controls in a repeatable manner. For example, Amazon Macie can discover stored sensitive data such as PII, including names and phone numbers, and, through custom data identifiers, telecom-specific identifiers such as Mobile Subscriber ISDN Numbers (MSISDN), International Mobile Subscriber Identity (IMSI), or International Mobile Equipment Identity (IMEI) numbers. Amazon Macie provides dashboards and alerts that give visibility into how this data is being accessed or moved. Macie also enables resource tags to be added to objects based on their sensitivity.
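To illustrate what a custom data identifier might match, the following Python sketch uses deliberately simplified regular expressions for IMSI (15 decimal digits) and E.164-style MSISDN values. Production identifiers would add country-specific validation and contextual keywords, as Macie custom data identifiers support; these patterns are illustrative only.

```python
import re

# Simplified, illustrative detection patterns. Real identifiers need
# stricter validation (valid MCC/MNC prefixes, proximity keywords, etc.).
PATTERNS = {
    # IMSI: 15 decimal digits (MCC + MNC + MSIN)
    "IMSI": re.compile(r"\b\d{15}\b"),
    # MSISDN in E.164-like form: "+" followed by 8 to 15 digits
    "MSISDN": re.compile(r"\+\d{8,15}\b"),
}

def find_identifiers(text: str) -> dict:
    """Return all pattern matches found in `text`, keyed by identifier name."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}
```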

Protect data at rest

Telecom network functions involve the processing of sensitive data such as subscriber data, access keys, location data, and PII that is protected by regulations such as the GDPR. For example, Call Detail Records (CDRs), formatted collections of chargeable events used in billing and accounting, contain sensitive information that may identify specific callers and calls. It is critical to identify, classify, and protect this data when it is saved to persistent storage, both to help secure it and to comply with applicable regulations.

The following are recommendations about how data at rest can be protected on AWS:

  • Encrypt all data at rest. Consider using a key management system such as AWS Key Management Service (AWS KMS) to generate keys for encryption and perform key lifecycle management. Optionally, customers can bring their own keys (BYOK) or connect AWS KMS to their own on-premises hardware security module (HSM) using AWS KMS External Key Store (XKS) for full control of the key material used for encryption.
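The envelope encryption pattern behind AWS KMS can be sketched as follows. This runnable Python illustration substitutes a local XOR stand-in for both the AES cipher and the KMS `GenerateDataKey`/`Decrypt` calls, so only the flow should be taken literally: encrypt the data with a data key, encrypt the data key under a master key that never leaves the key store, and persist both together.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stand-in cipher for illustration only; never use in production."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Stands in for the KMS key; in practice this material never leaves AWS KMS.
MASTER_KEY = os.urandom(32)

def encrypt_envelope(plaintext: bytes) -> dict:
    data_key = os.urandom(32)  # GenerateDataKey: plaintext data key
    return {
        # Data encrypted with the data key
        "ciphertext": xor_bytes(plaintext, data_key),
        # Data key encrypted under the master key (the "CiphertextBlob")
        "encrypted_data_key": xor_bytes(data_key, MASTER_KEY),
    }

def decrypt_envelope(envelope: dict) -> bytes:
    # KMS Decrypt: recover the data key, then decrypt the payload
    data_key = xor_bytes(envelope["encrypted_data_key"], MASTER_KEY)
    return xor_bytes(envelope["ciphertext"], data_key)
```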

  • Use automation to validate and enforce data-at-rest controls nearly continuously. AWS Config rules can automatically detect when non-compliant settings have been applied. For example, an AWS Config rule can check that Amazon Elastic Block Store (Amazon EBS) encryption is enabled by default; the rule marks the resource as non-compliant if encryption is not enabled.
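The evaluation logic of such a rule is simple. The sketch below mirrors what a custom AWS Config rule (backed by Lambda) might compute for an EBS volume configuration item; the field name follows the EC2 `DescribeVolumes` response shape, simplified.

```python
# Compliance evaluation sketch for a custom AWS Config rule.
# The input dict is a simplified EC2 volume description.

def evaluate_ebs_encryption(volume: dict) -> str:
    """Return an AWS Config compliance verdict for a single EBS volume."""
    return "COMPLIANT" if volume.get("Encrypted") else "NON_COMPLIANT"
```

In practice, the AWS Config managed rule `encrypted-volumes` covers this particular check without custom code.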

    • AWS KMS — A fully managed key management service used to store and manage keys used to encrypt and decrypt data. Requests to use keys in AWS KMS are logged in AWS CloudTrail, so customers can understand who used which key, in what context, and when. Event data logged to AWS CloudTrail cannot be altered. Also, AWS KMS is designed so that neither AWS (including AWS employees) nor third-party providers to AWS have the ability to retrieve, view, or disclose customers' keys in plaintext.

    • AWS KMS External Key Store (XKS) — Customers who have a regulatory need to store and use their encryption keys on-premises or outside of the AWS Cloud can do so using this feature. This capability allows customers to store AWS KMS customer managed keys on an HSM that they operate on-premises or at a location of their choice.

The HSMs that XKS communicates with are on-premises.

  • AWS Config — A service that provides a detailed view of the configuration of AWS resources on an AWS account. It also provides information on how resources are related to one another, and how they were configured in the past.

  • For workloads deployed using instance store volumes, data on Non-Volatile Memory Express (NVMe) instance store volumes is encrypted using an XTS-AES-256 cipher implemented on a hardware module on the instance itself. The keys are generated by, and reside only within, the hardware module, which is inaccessible to AWS personnel. For more information, refer to Data Protection in Amazon EC2.

Protect data in transit

3GPP TS 33.210 defines the need to implement security precautions to protect network domains using IPsec and Transport Layer Security (TLS) encryption. These security precautions are supported on AWS, depending on the nature of the traffic, by different AWS services.

  • Enforce encryption in transit. Data in transit is data that is sent from one network function to another. This includes both traffic entering an AWS environment from the on-premises network and traffic within a VPC or subnet in an AWS account.

  • For Transmission Control Protocol (TCP) traffic, use TLS encryption. AWS provides two services for issuing and deploying X.509 certificates, whether they are public or private certificates, customized certificates, certificates you want to deploy into other AWS services, or certificates with automated management and renewal.

    • AWS Private CA — Enterprise customers use this service for building a public key infrastructure (PKI) intended for private use within the organization inside the AWS Cloud. With AWS Private CA, you can create your own certificate authority (CA) hierarchy and issue certificates with it for authenticating internal users, computers, applications, services, servers, and other devices, and for signing computer code. Certificates issued by a private CA are trusted only within your organization, not on the internet.

    • AWS Certificate Manager (ACM) — A managed service that handles the complexity of creating, storing, and renewing the public and private SSL/TLS X.509 certificates and keys that customers need for a publicly trusted secure web presence using TLS.
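On the client side, enforcing TLS for TCP traffic comes down to requiring a modern protocol version and certificate validation. A minimal Python sketch, assuming the trust chain (for example, one rooted in AWS Private CA) is available as a CA bundle file:

```python
import ssl

def make_tls_context(ca_bundle=None):
    """Return an SSL context that requires TLS 1.2+ and validates the peer.
    `ca_bundle` is an optional path to a private CA trust bundle; with None,
    the system's default trust store is used."""
    ctx = ssl.create_default_context(cafile=ca_bundle)  # verifies hostname + chain
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2        # refuse legacy protocols
    return ctx
```

The returned context can be passed to standard library clients (for example, `http.client.HTTPSConnection(host, context=ctx)`).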

  • Block unsecured ports such as HTTP using Amazon Virtual Private Cloud (Amazon VPC) security groups or network access control list (network ACL) rules. Use AWS Config to monitor for unsecured security group rules. AWS Network Firewall and Amazon Route 53 Resolver DNS Firewall can also be used to block insecure methods of communication to and from the AWS environment.
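The monitoring described above can also be approximated with a small audit script. The following Python sketch flags security group rules that expose an insecure port (HTTP on port 80, as an example) to 0.0.0.0/0; the input dict shape mirrors, in simplified form, the `IpPermissions` entries returned by the EC2 `DescribeSecurityGroups` API.

```python
# Audit sketch: flag security group rules exposing insecure ports publicly.
INSECURE_PORTS = {80}  # extend with e.g. 21 (FTP) or 23 (Telnet) as needed

def insecure_rules(ip_permissions: list) -> list:
    """Return (from_port, to_port, cidr) for rules opening an insecure
    port to the whole internet."""
    findings = []
    for perm in ip_permissions:
        from_port = perm.get("FromPort")
        to_port = perm.get("ToPort", from_port)
        for rng in perm.get("IpRanges", []):
            if rng.get("CidrIp") == "0.0.0.0/0" and any(
                from_port is not None and from_port <= p <= to_port
                for p in INSECURE_PORTS
            ):
                findings.append((from_port, to_port, rng["CidrIp"]))
    return findings
```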

  • For non-TCP traffic such as UDP and SCTP, CSPs can apply transport encryption such as IPsec to help protect data in transit between network functions. For example, for in-transit encryption between on-premises and AWS, CSPs can use AWS Site-to-Site VPN with a virtual private gateway or a transit gateway. Alternatively, CSPs can use a VPN virtual appliance on AWS to establish an IPsec connection with the on-premises network. Using an IPsec virtual appliance can provide higher single-tunnel bandwidth than an AWS Site-to-Site VPN connection.

  • Use a dedicated private connection with AWS Direct Connect between your on-premises networks and AWS. Direct Connect links support Layer 2 encryption with MACsec. CSPs should evaluate whether MACsec encryption is sufficient for encryption in transit, because this would remove the need for IPsec tunnels.

  • If encryption in transit is required between EC2 instances running network functions, and an IPsec implementation is not possible for traffic that does not support TLS, consider using the in-transit traffic encryption provided by the AWS Nitro System. Specific AWS instance types use the offload capabilities of the underlying Nitro hardware to automatically encrypt in-transit traffic between supported instance types, using Authenticated Encryption with Associated Data (AEAD) algorithms with 256-bit encryption. Data Protection in Amazon EC2 describes this feature and its known considerations.

Protect data in process

Confidential computing is a term used to describe the ability to protect sensitive data in use by encrypting it while it is being processed. Confidential computing provides an additional layer of security for sensitive data, as it helps prevent unauthorized access to the data even if a bad actor is able to compromise the system where the data is being processed.

  • Confidential computing is defined as the use of specialized hardware and associated firmware to protect customer data during processing from outside access. The AWS Nitro System is such specialized hardware, and it is the underlying foundation for modern Amazon EC2 instances. The AWS Nitro System was designed to have no AWS operator access; there is no mechanism for a system or person to log in to EC2 servers (the underlying host infrastructure), read the memory of EC2 instances, or access data stored on instance storage and encrypted Amazon Elastic Block Store (Amazon EBS) volumes.

  • The AWS Nitro System is a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to the instances, for better overall performance and security. The AWS Nitro System enforces separation of duties and allows only the principals who have been specifically granted access to the data to access it. For more information, refer to the Confidential computing: an AWS perspective blog post and The Security Design of the AWS Nitro System whitepaper.

  • In addition to the AWS Nitro System, use instance types that have built-in memory encryption when required as an additional measure for protecting data in processing. AWS offers:

    • 3rd generation Intel Xeon scalable processors (Ice Lake) instances, with always-on memory encryption using Total Memory Encryption (TME): for example, M6i, C6i, R6i

    • 3rd generation AMD EPYC processors (Milan) instances, which includes support for AMD Transparent Single Key Memory Encryption (TSME), for example, M6a, C6a, R6a

    • Graviton2 and Graviton3 with always-on memory encryption, dedicated caches for every vCPU, and support for pointer authentication: for example, C6g, M6g, R6g

Data access control

When running network workloads on AWS, the compromise of subscriber data, such as that found on the Home Subscriber Server (HSS) or Unified Data Management (UDM) functions, or stored data such as CDRs, can be prevented by adopting data access control measures such as the following:

  • Enforce controls for data access using the principle of least privilege and specific IAM principals that include prescriptive data access. 

  • Use resource policies, such as S3 bucket policies, to enforce data isolation boundaries between accounts or within the same account; for example, allowing only certain IAM principals access to specific S3 buckets which contain data of a certain type or classification.
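As a sketch of such an isolation boundary, the following Python function builds a deny-by-default S3 bucket policy that blocks every principal except one allowed role. The bucket name and role ARN are placeholders, and a real deployment would typically also exempt break-glass administrative roles.

```python
# Sketch of an S3 bucket policy enforcing a single-principal data boundary.
# Bucket name and role ARN are placeholder values.

def restrict_bucket_policy(bucket: str, allowed_role_arn: str) -> dict:
    """Return a bucket policy dict denying all S3 actions on `bucket`
    unless the caller is `allowed_role_arn`."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllExceptAllowedRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Explicit deny for every principal other than the allowed role
            "Condition": {
                "StringNotEquals": {"aws:PrincipalArn": allowed_role_arn}
            },
        }],
    }
```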

  • Consider enforcing specific rules such as preventing files or objects from being deleted or overwritten for a fixed amount of time, or indefinitely using Amazon S3 Object Lock.
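The default retention rule for Object Lock can be expressed as a small configuration document. The following Python sketch returns the dict shape accepted by the S3 `put_object_lock_configuration` API; the retention period and mode are placeholder choices.

```python
# Sketch of an S3 Object Lock default-retention configuration.
# Days and mode are placeholders driven by the retention policy.

def object_lock_config(days: int, mode: str = "COMPLIANCE") -> dict:
    """Return an Object Lock configuration dict. COMPLIANCE mode cannot be
    shortened or removed by any user; GOVERNANCE can be overridden with
    special permissions."""
    if mode not in ("GOVERNANCE", "COMPLIANCE"):
        raise ValueError("mode must be GOVERNANCE or COMPLIANCE")
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": mode, "Days": days}},
    }
```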

  • Consider automating the detection of unintended data access. With tools such as Amazon GuardDuty, CSPs can automatically detect suspicious activity or attempts to move data outside of defined boundaries.

    • Amazon GuardDuty — a nearly continuous security monitoring service that analyzes and processes data sources such as VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail S3 data event logs, EKS audit logs, and DNS logs. It uses threat intelligence feeds such as lists of malicious IP addresses and domains, and machine learning (ML) to identify unexpected and potentially unauthorized and malicious activity within your AWS environment.

  • Consider establishing an organization-wide data perimeter. Use permission guardrails that restrict access outside of an AWS organization boundary. Use resource-based policies on AWS resources, service control policies (SCPs) to define permissions for accessing AWS resources, and VPC endpoint policies to control access through VPC endpoints. More on data perimeters on AWS can be found in the AWS documentation.
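One building block of such a perimeter is a service control policy that denies access to resources outside the organization. The Python sketch below builds a hypothetical SCP using the `aws:ResourceOrgID` condition key; the organization ID is a placeholder, and a production perimeter would cover more services and include exception paths.

```python
# Sketch of an SCP fragment for an organization-wide data perimeter.
# The org ID is a placeholder like "o-example123".

def org_perimeter_scp(org_id: str) -> dict:
    """Return an SCP dict denying S3 access to resources that do not
    belong to the given AWS organization."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyS3OutsideOrg",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            # aws:ResourceOrgID is the organization owning the target resource
            "Condition": {
                "StringNotEquals": {"aws:ResourceOrgID": org_id}
            },
        }],
    }
```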