Getting started with ROSA classic using AWS PrivateLink

ROSA classic clusters can be deployed in a few different ways: public, private, or private with AWS PrivateLink. For more information about ROSA classic, see Deployment options. In both the public and private cluster configurations, the OpenShift cluster has access to the internet, and privacy for application workloads is enforced at the application layer.

If you require both the cluster and the application workloads to be private, you can configure AWS PrivateLink with ROSA classic. AWS PrivateLink is a highly available, scalable technology that ROSA uses to create a private connection between the ROSA service and cluster resources in the AWS customer account. With AWS PrivateLink, the Red Hat site reliability engineering (SRE) team can access the cluster for support and remediation purposes by using a private subnet connected to the cluster’s AWS PrivateLink endpoint.

For more information about AWS PrivateLink, see What is AWS PrivateLink?

Before getting started, make sure you completed these actions:

  • Install and configure the latest AWS CLI. For more information, see Installing or updating the latest version of the AWS CLI.

  • Install and configure the latest ROSA CLI and OpenShift Container Platform CLI. For more information, see Getting started with the ROSA CLI.

  • Your AWS account must have the required service quotas set for Amazon EC2, Amazon VPC, Amazon EBS, and Elastic Load Balancing to create and run a ROSA cluster. AWS or Red Hat may request service quota increases on your behalf as required for issue resolution. To view the required quotas, see Red Hat OpenShift Service on AWS endpoints and quotas in the AWS General Reference.

  • To receive AWS support for ROSA, you must have an AWS Business, Enterprise On-Ramp, or Enterprise support plan. Red Hat may request AWS support on your behalf as required for issue resolution. For more information, see Support for ROSA. To enable AWS Support, see the AWS Support page.

  • If you’re using AWS Organizations to manage the AWS accounts that host the ROSA service, the organization’s service control policy (SCP) must be configured to allow Red Hat to perform the policy actions that are listed in the SCP without restriction. For more information, see the ROSA SCP troubleshooting documentation. For more information about SCPs, see Service control policies (SCPs).

  • If you’re deploying a ROSA cluster with AWS STS into an AWS Region that’s disabled by default but that you have enabled, you must update the security token to version 2 for all the Regions in the AWS account with the following command.

    aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token

    For more information about enabling Regions, see Managing AWS Regions in the AWS General Reference.
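
    To confirm the change, you can check the IAM account summary, which should include a GlobalEndpointTokenVersion field.

    aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion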

To create a ROSA cluster, you must first enable the ROSA service in the AWS ROSA console. The AWS ROSA console verifies if your AWS account has the necessary AWS Marketplace permissions, service quotas, and the Elastic Load Balancing (ELB) service-linked role named AWSServiceRoleForElasticLoadBalancing. If any of these prerequisites are missing, the console provides guidance on how to configure your account to meet the prerequisites.

  1. Navigate to the ROSA console.

  2. Choose Get started.

  3. On the Verify ROSA prerequisites page, select I agree to share my contact information with Red Hat.

  4. Choose Enable ROSA.

  5. After the page verifies that your service quotas meet the ROSA prerequisites and that the ELB service-linked role is created, open a new terminal session to create your first ROSA cluster using the ROSA CLI.
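
If you prefer to verify the ELB service-linked role from the command line, you can check whether it exists and create it if it's missing. A minimal sketch using the AWS CLI:

    aws iam get-role --role-name AWSServiceRoleForElasticLoadBalancing
    aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"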

To create a ROSA cluster that uses AWS PrivateLink, you must first configure your own Amazon VPC architecture to deploy your solution into. ROSA requires that customers configure at least one public and one private subnet per Availability Zone used to create the cluster. For Single-AZ clusters, only one Availability Zone is used. For Multi-AZ clusters, three Availability Zones are needed.

Important

If Amazon VPC requirements are not met, cluster creation fails.

The following procedure uses the AWS CLI to create both a public and a private subnet in a single Availability Zone for a Single-AZ cluster. All cluster resources are in the private subnet. Outbound traffic from the private subnet is routed to the internet through a NAT gateway in the public subnet.

This example uses the CIDR block 10.0.0.0/16 for the Amazon VPC. However, you can choose a different CIDR block. For more information, see VPC sizing.

  1. Set an environment variable for the cluster name by running the following command.

    ROSA_CLUSTER_NAME=rosa-privatelink
  2. Create a VPC with a 10.0.0.0/16 CIDR block.

    aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text

    The preceding command returns the ID of the new VPC. The following is an example output.

    vpc-0410832ee325aafea
  3. Using the VPC ID from the previous step, tag the VPC using the ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <VPC_ID_VALUE> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  4. Enable DNS hostname support on the VPC.

    aws ec2 modify-vpc-attribute --vpc-id <VPC_ID_VALUE> --enable-dns-hostnames
  5. Create a public subnet in the VPC with a 10.0.1.0/24 CIDR block, specifying the Availability Zone where the resource should be created.

    Important

    When creating subnets, make sure that the subnets are created in an Availability Zone that has ROSA instance types available. If you don’t choose a specific Availability Zone, the subnet is created in any one of the Availability Zones in the AWS Region that you specify.

    To specify a specific Availability Zone, use the --availability-zone argument in the create-subnet command. You can use the rosa list instance-types command to list all available ROSA instance types. To check whether an instance type is available for a given Availability Zone, use the following command.

    aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=location,Values=<availability_zone> --region <region> --output text | egrep "<instance_type>"
    Important

    ROSA requires that customers configure at least one public and private subnet per Availability Zone used to create clusters. For single-AZ clusters, only one Availability Zone is needed. For multi-AZ clusters, three Availability Zones are needed. If these requirements are not met, cluster creation fails.

    aws ec2 create-subnet --vpc-id <VPC_ID_VALUE> --cidr-block 10.0.1.0/24 --availability-zone <AZ_NAME> --query Subnet.SubnetId --output text

    The preceding command returns the ID of the new subnet. The following is an example output.

    subnet-0b6a7e8cbc8b75920
  6. Using the subnet ID from the previous step, tag the subnet using the ROSA_CLUSTER_NAME-public variable.

    aws ec2 create-tags --resources <PUBLIC_SUBNET_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-public
  7. Create a private subnet in the VPC with a 10.0.0.0/24 CIDR block, specifying the same Availability Zone that the public subnet was deployed into.

    aws ec2 create-subnet --vpc-id <VPC_ID_VALUE> --cidr-block 10.0.0.0/24 --availability-zone <AZ_NAME> --query Subnet.SubnetId --output text

    The preceding command returns the ID of the new subnet. The following is an example output.

    subnet-0b6a7e8cbc8b75920
  8. Using the subnet ID from the previous step, tag the subnet using the ROSA_CLUSTER_NAME-private variable.

    aws ec2 create-tags --resources <PRIVATE_SUBNET_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-private
  9. Create an internet gateway for outbound traffic and attach it to the VPC.

    aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text
    aws ec2 attach-internet-gateway --vpc-id <VPC_ID_VALUE> --internet-gateway-id <IG_ID_VALUE>
  10. Tag the internet gateway with the ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <IG_ID_VALUE> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  11. Create a route table for outbound traffic, associate it to the public subnet, and configure traffic to route to the internet gateway.

    aws ec2 create-route-table --vpc-id <VPC_ID_VALUE> --query RouteTable.RouteTableId --output text
    aws ec2 associate-route-table --subnet-id <PUBLIC_SUBNET_ID> --route-table-id <PUBLIC_RT_ID>
    aws ec2 create-route --route-table-id <PUBLIC_RT_ID> --destination-cidr-block 0.0.0.0/0 --gateway-id <IG_ID_VALUE>
  12. Tag the public route table with the ROSA_CLUSTER_NAME variable and verify that the route table was properly configured.

    aws ec2 create-tags --resources <PUBLIC_RT_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
    aws ec2 describe-route-tables --route-table-ids <PUBLIC_RT_ID>
  13. Create a NAT gateway in the public subnet with an elastic IP address to enable traffic to the private subnet.

    aws ec2 allocate-address --domain vpc --query AllocationId --output text
    aws ec2 create-nat-gateway --subnet-id <PUBLIC_SUBNET_ID> --allocation-id <EIP_ADDRESS> --query NatGateway.NatGatewayId --output text
  14. Tag the NAT gateway and elastic IP address with the $ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <EIP_ADDRESS> <NAT_GATEWAY_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  15. Create a route table for private subnet traffic, associate it to the private subnet, and configure traffic to route to the NAT gateway.

    aws ec2 create-route-table --vpc-id <VPC_ID_VALUE> --query RouteTable.RouteTableId --output text
    aws ec2 associate-route-table --subnet-id <PRIVATE_SUBNET_ID> --route-table-id <PRIVATE_RT_ID>
    aws ec2 create-route --route-table-id <PRIVATE_RT_ID> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <NAT_GATEWAY_ID>
  16. Tag the private route table and elastic IP address with the $ROSA_CLUSTER_NAME-private variable.

    aws ec2 create-tags --resources <PRIVATE_RT_ID> <EIP_ADDRESS> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-private
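
Before creating the cluster, you can optionally confirm that the subnets, route tables, and NAT gateway are associated with the VPC as expected. One way to review the configuration with the AWS CLI:

    aws ec2 describe-subnets --filters Name=vpc-id,Values=<VPC_ID_VALUE>
    aws ec2 describe-route-tables --filters Name=vpc-id,Values=<VPC_ID_VALUE>
    aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=<VPC_ID_VALUE>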

You can use AWS PrivateLink and the ROSA CLI to create a cluster with a single Availability Zone (Single-AZ) or multiple Availability Zones (Multi-AZ). In either case, the machine CIDR value for your cluster must match your VPC’s CIDR value.

The following procedure uses the rosa create cluster command to create a Single-AZ ROSA cluster. To create a Multi-AZ cluster, specify the --multi-az flag in the command along with the ID of each private subnet that you want to deploy to, as shown in the example after step 1.

Note

If you use a firewall, you must configure it so that ROSA can access the sites that it requires to function.

For more information, see AWS firewall prerequisites in the Red Hat OpenShift documentation.

  1. Create a Single-AZ cluster by running the following command.

    rosa create cluster --private-link --cluster-name=<CLUSTER_NAME> --machine-cidr=10.0.0.0/16 --subnet-ids=<PRIVATE_SUBNET_ID>
    Note

    To create a cluster that uses AWS PrivateLink with AWS Security Token Service (AWS STS) short-lived credentials, append --sts --mode auto or --sts --mode manual to the end of the rosa create cluster command.
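
    For reference, a Multi-AZ variant of this command might look like the following sketch. The subnet IDs are placeholders; pass the ID of one private subnet per Availability Zone.

    rosa create cluster --private-link --multi-az --cluster-name=<CLUSTER_NAME> --machine-cidr=10.0.0.0/16 --subnet-ids=<PRIVATE_SUBNET_ID_1>,<PRIVATE_SUBNET_ID_2>,<PRIVATE_SUBNET_ID_3>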

  2. Create the cluster operator IAM roles by following the interactive prompts.

    rosa create operator-roles --interactive -c <CLUSTER_NAME>
  3. Create the OpenID Connect (OIDC) provider the cluster operators use to authenticate.

    rosa create oidc-provider --interactive -c <CLUSTER_NAME>
  4. Check the status of your cluster.

    rosa describe cluster -c <CLUSTER_NAME>
    Note

    It may take up to 40 minutes for the cluster State field to show the ready status. If provisioning fails or doesn’t show as ready after 40 minutes, see Troubleshoot ROSA cluster provisioning issues.

    To contact AWS Support or Red Hat support for assistance, see Support for ROSA.

  5. Track the progress of the cluster creation by watching the OpenShift installer logs.

    rosa logs install -c <CLUSTER_NAME> --watch

Clusters that use AWS PrivateLink create a public hosted zone and a private hosted zone in Route 53. Records within the Route 53 private hosted zone are resolvable only from within the VPC that it’s assigned to.

The Let’s Encrypt DNS-01 validation requires a public zone so that valid, publicly trusted certificates can be issued for the domain. The validation records are deleted after Let’s Encrypt validation is complete. The zone is still required for issuing and renewing these certificates, which typically occurs every 60 days. Although these zones usually appear empty, a public zone serves a critical role in the validation process.

For more information about AWS private hosted zones, see Working with private zones. For more information about public hosted zones, see Working with public hosted zones.
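
You can confirm that both hosted zones exist for the cluster domain by listing them with the AWS CLI. Replace <cluster_domain> with your cluster’s base domain.

    aws route53 list-hosted-zones-by-name --dns-name <cluster_domain>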

To allow for records such as api.<cluster_domain> and *.apps.<cluster_domain> to resolve outside of the VPC, configure a Route 53 Resolver inbound endpoint.

  1. Open the Route 53 console.

  2. In the navigation pane under Resolver, choose Inbound endpoints.

  3. Choose Configure endpoints.

  4. In the upper right, use the AWS Region selector to choose the AWS Region that contains the VPC used for the cluster.

  5. Under Basic configuration, choose Inbound only and then choose Next.

  6. On the Configure inbound endpoint page, complete the General settings for inbound endpoint section. Under Security group for this endpoint, choose a security group that allows inbound UDP and TCP traffic from the remote network on destination port 53.

  7. In the IP address section, choose the Availability Zones and private subnets that were used when creating the cluster and choose Next.

  8. (Optional) Complete the Tags section.

  9. Choose Submit.
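
If you’d rather script this step, the Route 53 Resolver API offers an equivalent to the console procedure. A minimal sketch, assuming a security group that allows TCP and UDP traffic on port 53 from your remote network; Resolver endpoints need at least two IP addresses, which can share the private subnet in a Single-AZ VPC:

    aws route53resolver create-resolver-endpoint --name <ENDPOINT_NAME> --creator-request-id <UNIQUE_STRING> --direction INBOUND --security-group-ids <SECURITY_GROUP_ID> --ip-addresses SubnetId=<PRIVATE_SUBNET_ID> SubnetId=<PRIVATE_SUBNET_ID>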

After the Route 53 Resolver inbound endpoint is associated and operational, configure DNS forwarding so that DNS queries can be handled by the designated servers on your network.

  1. Configure your corporate network to forward DNS queries for the cluster’s top-level domain, such as drow-pl-01.htno.p1.openshiftapps.com, to the IP addresses of the inbound endpoint. You can list these IP addresses with the command that follows this procedure.

  2. If you’re forwarding DNS queries from one VPC to another VPC, follow the instructions in Managing forwarding rules.

  3. If you’re configuring your remote network DNS server, see your specific DNS server documentation to configure selective DNS forwarding for the installed cluster domain.
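
To find the IP addresses that your network must forward queries to, you can list the addresses assigned to the inbound endpoint.

    aws route53resolver list-resolver-endpoint-ip-addresses --resolver-endpoint-id <RESOLVER_ENDPOINT_ID>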

ROSA includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add users to your configured identity provider to grant them access to your cluster. You can grant these users cluster-admin or dedicated-admin permissions as required.

You can configure different identity provider types for your cluster. The supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect, and HTPasswd identity providers.

Important

The HTPasswd identity provider is included only to enable a single, static administrator user to be created. HTPasswd isn’t supported as a general-use identity provider for ROSA.

The following procedure configures a GitHub identity provider as an example. For instructions on how to configure each of the supported identity provider types, see Configuring identity providers for AWS STS.

  1. Navigate to github.com and log in to your GitHub account.

  2. If you don’t have a GitHub organization to use for identity provisioning for your ROSA cluster, create one. For more information, see the steps in the GitHub documentation.

  3. Using the ROSA CLI’s interactive mode, configure an identity provider for your cluster by running the following command.

    rosa create idp --cluster=<CLUSTER_NAME> --interactive
  4. Follow the configuration prompts in the output to restrict cluster access to members of your GitHub organization.

    I: Interactive mode enabled.
    Any optional fields can be left empty and a default will be selected.
    ? Type of identity provider: github
    ? Identity provider name: github-1
    ? Restrict to members of: organizations
    ? GitHub organizations: <GITHUB_ORG_NAME>
    ? To use GitHub as an identity provider, you must first register the application:
      - Open the following URL:
        https://github.com/organizations/<GITHUB_ORG_NAME>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<CLUSTER_NAME>/<RANDOM_STRING>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<CLUSTER_NAME>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<CLUSTER_NAME>/<RANDOM_STRING>.p1.openshiftapps.com
      - Click on 'Register application'
    ...
  5. Open the URL in the output, replacing <GITHUB_ORG_NAME> with the name of your GitHub organization.

  6. On the GitHub web page, choose Register application to register a new OAuth application in your GitHub organization.

  7. Use the information from the GitHub OAuth page to populate the remaining rosa create idp interactive prompts, replacing <GITHUB_CLIENT_ID> and <GITHUB_CLIENT_SECRET> with the credentials from your GitHub OAuth application.

    ...
    ? Client ID: <GITHUB_CLIENT_ID>
    ? Client Secret: [? for help] <GITHUB_CLIENT_SECRET>
    ? GitHub Enterprise Hostname (optional):
    ? Mapping method: claim
    I: Configuring IDP for cluster '<CLUSTER_NAME>'
    I: Identity Provider 'github-1' has been created.
       It will take up to 1 minute for this configuration to be enabled.
       To add cluster administrators, see 'rosa grant user --help'.
       To login into the console, open https://console-openshift-console.apps.<CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com and click on github-1.
    Note

    It might take around two minutes for the identity provider configuration to become active. If you configured a cluster-admin user, you can run the oc get pods -n openshift-authentication --watch command to watch the OAuth pods redeploy with the updated configuration.

  8. Verify the identity provider has been configured correctly.

    rosa list idps --cluster=<CLUSTER_NAME>

You can grant a user access to your cluster by adding them to the configured identity provider.

The following procedure adds a user to a GitHub organization that’s configured for identity provisioning to the cluster.

  1. Navigate to github.com and log in to your GitHub account.

  2. Invite users that require cluster access to your GitHub organization. For more information, see Inviting users to join your organization in the GitHub documentation.

After you add a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin permissions for your cluster.

Configure cluster-admin permissions

  1. Grant the cluster-admin permissions using the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa grant user cluster-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify the user is listed as a member of the cluster-admins group.

    rosa list users --cluster=<CLUSTER_NAME>

Configure dedicated-admin permissions

  1. Grant the dedicated-admin permissions with the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa grant user dedicated-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify the user is listed as a member of the dedicated-admins group.

    rosa list users --cluster=<CLUSTER_NAME>

After you create a cluster administrator user or add a user to your configured identity provider, you can log in to your cluster through the Red Hat Hybrid Cloud Console.

  1. Obtain the console URL for your cluster using the following command. Replace <CLUSTER_NAME> with the name of your cluster.

    rosa describe cluster -c <CLUSTER_NAME> | grep Console
  2. Navigate to the console URL in the output and log in.

    • If you created a cluster-admin user, log in using the provided credentials.

    • If you configured an identity provider for your cluster, choose the identity provider name in the Log in with… dialog and complete any authorization requests presented by your provider.

From the Red Hat Hybrid Cloud Console, you can deploy a Developer Catalog test application and expose it with a route.

  1. Navigate to Red Hat Hybrid Cloud Console and choose the cluster that you want to deploy the app into.

  2. On the cluster’s page, choose Open console.

  3. In the Administrator perspective, choose Home > Projects > Create Project.

  4. Enter a name for your project and optionally add a Display Name and Description.

  5. Choose Create to create the project.

  6. Switch to the Developer perspective and choose +Add. Make sure that the selected project is the one that was just created.

  7. In the Developer Catalog dialog, choose All services.

  8. In the Developer Catalog page, choose Languages > JavaScript from the menu.

  9. Choose Node.js, and then choose Create Application to open the Create Source-to-Image Application page.

    Note

    You might need to choose Clear All Filters to display the Node.js option.

  10. In the Git section, choose Try Sample.

  11. In the Name field, add a unique name.

  12. Choose Create.

    Note

    The new application takes several minutes to deploy.

  13. When the deployment is complete, choose the route URL for the application.

    A new tab in the browser opens with a message that’s similar to the following.

    Welcome to your Node.js application on OpenShift
  14. (Optional) Delete the application and clean up resources.

    1. In the Administrator perspective, choose Home > Projects.

    2. Open the action menu for your project and choose Delete Project.
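
If you prefer the command line, the oc CLI can perform a rough equivalent of this procedure. A minimal sketch; the sample repository URL is an assumption based on the standard OpenShift Node.js sample, so substitute your own source repository as needed. The last command cleans up the project when you’re done.

    oc new-project my-sample-project
    oc new-app nodejs~https://github.com/sclorg/nodejs-ex
    oc expose service/nodejs-ex
    oc get route nodejs-ex
    oc delete project my-sample-project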

You can revoke cluster-admin or dedicated-admin permissions from a user by using the ROSA CLI.

To revoke cluster access from a user entirely, you must remove the user from your configured identity provider, as described later in this section.

Revoke cluster-admin permissions

  1. Revoke the cluster-admin permissions using the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa revoke user cluster-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify that the user isn’t listed as a member of the cluster-admins group.

    rosa list users --cluster=<CLUSTER_NAME>
Revoke dedicated-admin permissions

  1. Revoke the dedicated-admin permissions using the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa revoke user dedicated-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify that the user isn’t listed as a member of the dedicated-admins group.

    rosa list users --cluster=<CLUSTER_NAME>

You can revoke cluster access for an identity provider user by removing them from the configured identity provider.

You can configure different types of identity providers for your cluster. The following procedure revokes cluster access for a member of a GitHub organization.

  1. Navigate to github.com and log in to your GitHub account.

  2. Remove the user from your GitHub organization. For more information, see Removing a member from your organization in the GitHub documentation.

You can use the ROSA CLI to delete a cluster that uses AWS Security Token Service (AWS STS). You can also use the ROSA CLI to delete the IAM roles and OIDC provider created by ROSA. To delete the IAM policies created by ROSA, you can use the IAM console.

Important

IAM roles and policies created by ROSA might be used by other ROSA clusters in the same account.

  1. Delete the cluster and watch the logs. Replace <CLUSTER_NAME> with the name or ID of your cluster.

    rosa delete cluster --cluster=<CLUSTER_NAME> --watch
    Important

    You must wait for the cluster to delete completely before you remove the IAM roles, policies, and OIDC provider. The account IAM roles are required to delete the resources created by the installer. The operator IAM roles are required to clean up the resources created by the OpenShift operators. The operators use the OIDC provider to authenticate.
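
    If your terminal session is interrupted, you can re-attach to the deletion progress by watching the uninstall logs.

    rosa logs uninstall -c <CLUSTER_NAME> --watch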

  2. Delete the OIDC provider that the cluster operators use to authenticate by running the following command.

    rosa delete oidc-provider -c <CLUSTER_ID> --mode auto
  3. Delete the cluster-specific operator IAM roles.

    rosa delete operator-roles -c <CLUSTER_ID> --mode auto
  4. Delete the account IAM roles using the following command. Replace <PREFIX> with the prefix of the account IAM roles to delete. If you didn’t specify a custom prefix when creating the account IAM roles, specify the default ManagedOpenShift prefix.

    rosa delete account-roles --prefix <PREFIX> --mode auto
  5. Delete the IAM policies created by ROSA.

    1. Log in to the IAM console.

    2. On the left menu under Access management, choose Policies.

    3. Select the policy that you want to delete and choose Actions > Delete.

    4. Enter the policy name and choose Delete.

    5. Repeat this step to delete each of the IAM policies for the cluster.
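
If you’d rather script this cleanup, the IAM API offers equivalents for these console steps. A minimal sketch; <PREFIX> and <POLICY_ARN> are placeholders, and delete-policy fails unless the policy has first been detached from all roles. Confirm that a policy isn’t used by another cluster before deleting it.

    aws iam list-policies --scope Local --query "Policies[?starts_with(PolicyName, '<PREFIX>')].Arn" --output text
    aws iam delete-policy --policy-arn <POLICY_ARN>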