Getting started with ROSA with HCP using the ROSA CLI in auto mode - Red Hat OpenShift Service on AWS

Getting started with ROSA with HCP using the ROSA CLI in auto mode

The following sections describe how to get started with ROSA with hosted control planes (ROSA with HCP) using AWS STS and the ROSA CLI. For more information about ROSA with HCP, see Deployment options.

The ROSA CLI uses auto mode or manual mode to create the IAM resources and OpenID Connect (OIDC) configuration required to create a ROSA cluster. auto mode automatically creates the required IAM roles and policies and OIDC provider. manual mode outputs the AWS CLI commands that are needed to create the IAM resources manually. By using manual mode, you can review the generated AWS CLI commands before running them manually. With manual mode, you can also pass the commands to another administrator or group in your organization so they can create the resources.

The procedures in this document use the auto mode of the ROSA CLI to create the required IAM resources and OIDC configuration for ROSA with HCP. For more options to get started, see Getting started with ROSA.

Prerequisites

Before getting started, make sure that you have completed the following actions:

  • Install and configure the latest AWS CLI. For more information, see Installing or updating the latest version of the AWS CLI.

  • Install and configure the latest ROSA CLI and OpenShift Container Platform CLI. For more information, see Getting started with the ROSA CLI.

  • Your AWS account must have the required service quotas set for Amazon EC2, Amazon VPC, Amazon EBS, and Elastic Load Balancing to create and run a ROSA cluster. AWS or Red Hat may request service quota increases on your behalf as required for issue resolution. To view the required quotas, see Red Hat OpenShift Service on AWS endpoints and quotas in the AWS General Reference.

  • To receive AWS support for ROSA, you must enable AWS Business, Enterprise On-Ramp, or Enterprise support plans. Red Hat may request AWS support on your behalf as required for issue resolution. For more information, see Support for ROSA. To enable AWS Support, see the AWS Support page.

  • If you’re using AWS Organizations to manage the AWS accounts that host the ROSA service, the organization’s service control policy (SCP) must be configured to allow Red Hat to perform the policy actions that are listed in the SCP without restriction. For more information, see AWS Organizations service control policy denies required AWS Marketplace permissions. For more information about SCPs, see Service control policies (SCPs).

  • If you’re deploying a ROSA cluster with AWS STS into an opt-in AWS Region (a Region that’s disabled by default), you must update the security token to version 2 for all Regions in the AWS account with the following command.

    aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token

    For more information about enabling Regions, see Managing AWS Regions in the AWS General Reference.

Step 1: Enable ROSA and configure prerequisites

To create a ROSA cluster, you must first enable the ROSA service in the AWS ROSA console. The AWS ROSA console verifies if your AWS account has the necessary AWS Marketplace permissions, service quotas, and the Elastic Load Balancing (ELB) service-linked role named AWSServiceRoleForElasticLoadBalancing. If any of these prerequisites are missing, the console provides guidance on how to configure your account to meet the prerequisites.

  1. Navigate to the ROSA console.

  2. Choose Get started.

  3. On the Verify ROSA prerequisites page, select I agree to share my contact information with Red Hat.

  4. Choose Enable ROSA.

  5. Once the page has verified that your service quotas meet the ROSA prerequisites and the ELB service-linked role is created, open a new terminal session to create your first ROSA cluster using the ROSA CLI.

Step 2: Create Amazon VPC architecture for ROSA with HCP clusters

To create a ROSA with HCP cluster, you must first configure your own Amazon VPC architecture to deploy your solution into. ROSA with HCP requires that customers configure at least one public and one private subnet per Availability Zone used to create clusters. For Single-AZ clusters, only one Availability Zone is used. For Multi-AZ clusters, three Availability Zones are needed.

Important

If Amazon VPC requirements are not met, cluster creation fails.

The following procedure uses the AWS CLI to create both a public and a private subnet in a single Availability Zone for a Single-AZ cluster. All cluster resources are in the private subnet. The public subnet routes outbound traffic to the internet by using a NAT gateway.

This example uses the CIDR block 10.0.0.0/16 for the Amazon VPC. However, you can choose a different CIDR block. For more information, see VPC sizing.
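If you substitute your own CIDR blocks, each subnet CIDR must fall inside the VPC CIDR, or subnet creation fails. The following sketch checks that locally in plain bash before any AWS resources are created; the helper names are illustrative and are not AWS CLI commands.

```shell
#!/usr/bin/env bash
# Sketch: check that a subnet CIDR falls inside a VPC CIDR before creating it.
# Pure bash; the helper names are illustrative, not AWS CLI commands.

ip_to_int() {                 # dotted-quad IPv4 address -> 32-bit integer
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# cidr_contains <vpc_cidr> <subnet_cidr>: succeeds if the subnet is inside the VPC block
cidr_contains() {
  local vpc_ip=${1%/*} vpc_bits=${1#*/} sub_ip=${2%/*} sub_bits=${2#*/}
  (( sub_bits >= vpc_bits )) || return 1   # subnet must be at least as specific
  local mask=$(( (0xFFFFFFFF << (32 - vpc_bits)) & 0xFFFFFFFF ))
  (( ( $(ip_to_int "$vpc_ip") & mask ) == ( $(ip_to_int "$sub_ip") & mask ) ))
}

cidr_contains 10.0.0.0/16 10.0.1.0/24 && echo "10.0.1.0/24 fits in 10.0.0.0/16"
cidr_contains 10.0.0.0/16 10.1.0.0/24 || echo "10.1.0.0/24 does not fit"
```

The same check works for the 10.0.1.0/24 public and 10.0.0.0/24 private subnets used in the steps below.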

  1. Set an environment variable for the cluster name by running the following command.

    ROSA_CLUSTER_NAME=rosa-hcp
  2. Create a VPC with a 10.0.0.0/16 CIDR block.

    aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text

    The preceding command returns the ID of the new VPC. The following is an example output.

    vpc-0410832ee325aafea
  3. Using the VPC ID from the previous step, tag the VPC using the ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <VPC_ID_VALUE> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  4. Enable DNS hostname support on the VPC.

    aws ec2 modify-vpc-attribute --vpc-id <VPC_ID_VALUE> --enable-dns-hostnames
  5. Create a public subnet in the VPC with a 10.0.1.0/24 CIDR block, specifying the Availability Zone where the resource should be created.

    Important

    When creating subnets, make sure that subnets are created in an Availability Zone that has ROSA instance types available. If you don’t choose a specific Availability Zone, the subnet is created in any one of the Availability Zones in the AWS Region that you specify.

    To specify a specific Availability Zone, use the --availability-zone argument in the create-subnet command. You can use the rosa list instance-types command to list all available ROSA instance types. To check whether an instance type is available for a given Availability Zone, use the following command.

    aws ec2 describe-instance-type-offerings --location-type availability-zone --filters Name=location,Values=<availability_zone> --region <region> --output text | egrep "<instance_type>"
    Important

    ROSA with HCP requires that customers configure at least one public and private subnet per Availability Zone used to create clusters. For single-AZ clusters, only one Availability Zone is needed. For multi-AZ clusters, three Availability Zones are needed. If these requirements are not met, cluster creation fails.

    aws ec2 create-subnet --vpc-id <VPC_ID_VALUE> --cidr-block 10.0.1.0/24 --availability-zone <AZ_NAME> --query Subnet.SubnetId --output text

    The preceding command returns the ID of the new subnet. The following is an example output.

    subnet-0b6a7e8cbc8b75920
  6. Using the subnet ID from the previous step, tag the subnet using the ROSA_CLUSTER_NAME-public variable.

    aws ec2 create-tags --resources <PUBLIC_SUBNET_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-public
  7. Create a private subnet in the VPC with a 10.0.0.0/24 CIDR block, specifying the same Availability Zone that the public subnet was deployed into.

    aws ec2 create-subnet --vpc-id <VPC_ID_VALUE> --cidr-block 10.0.0.0/24 --availability-zone <AZ_NAME> --query Subnet.SubnetId --output text

    The preceding command returns the ID of the new subnet. The following is an example output.

    subnet-0b6a7e8cbc8b75920
  8. Using the subnet ID from the previous step, tag the subnet using the ROSA_CLUSTER_NAME-private variable.

    aws ec2 create-tags --resources <PRIVATE_SUBNET_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-private
  9. Create an internet gateway for outbound traffic and attach it to the VPC.

    aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text
    aws ec2 attach-internet-gateway --vpc-id <VPC_ID_VALUE> --internet-gateway-id <IG_ID_VALUE>
  10. Tag the internet gateway with the ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <IG_ID_VALUE> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  11. Create a route table for outbound traffic, associate it to the public subnet, and configure traffic to route to the internet gateway.

    aws ec2 create-route-table --vpc-id <VPC_ID_VALUE> --query RouteTable.RouteTableId --output text
    aws ec2 associate-route-table --subnet-id <PUBLIC_SUBNET_ID> --route-table-id <PUBLIC_RT_ID>
    aws ec2 create-route --route-table-id <PUBLIC_RT_ID> --destination-cidr-block 0.0.0.0/0 --gateway-id <IG_ID_VALUE>
  12. Tag the public route table with the ROSA_CLUSTER_NAME variable and verify that the route table was properly configured.

    aws ec2 create-tags --resources <PUBLIC_RT_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
    aws ec2 describe-route-tables --route-table-id <PUBLIC_RT_ID>
  13. Create a NAT gateway in the public subnet with an elastic IP address to enable traffic to the private subnet.

    aws ec2 allocate-address --domain vpc --query AllocationId --output text
    aws ec2 create-nat-gateway --subnet-id <PUBLIC_SUBNET_ID> --allocation-id <EIP_ADDRESS> --query NatGateway.NatGatewayId --output text
  14. Tag the NAT gateway and elastic IP address with the $ROSA_CLUSTER_NAME variable.

    aws ec2 create-tags --resources <EIP_ADDRESS> --resources <NAT_GATEWAY_ID> --tags Key=Name,Value=$ROSA_CLUSTER_NAME
  15. Create a route table for private subnet traffic, associate it to the private subnet, and configure traffic to route to the NAT gateway.

    aws ec2 create-route-table --vpc-id <VPC_ID_VALUE> --query RouteTable.RouteTableId --output text
    aws ec2 associate-route-table --subnet-id <PRIVATE_SUBNET_ID> --route-table-id <PRIVATE_RT_ID>
    aws ec2 create-route --route-table-id <PRIVATE_RT_ID> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <NAT_GATEWAY_ID>
  16. Tag the private route table and elastic IP address with the $ROSA_CLUSTER_NAME-private variable.

    aws ec2 create-tags --resources <PRIVATE_RT_ID> <EIP_ADDRESS> --tags Key=Name,Value=$ROSA_CLUSTER_NAME-private
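The 16 steps above can be chained into one script that captures each resource ID in a shell variable instead of copying it by hand. This is a sketch, not an official procedure: it assumes a configured AWS CLI and an Availability Zone with ROSA instance types, and it defaults to a dry-run mode (DRY_RUN=1) that prints the commands without executing them. Set DRY_RUN=0 to run them for real.

```shell
#!/usr/bin/env bash
# Sketch: the VPC steps above as one script, capturing each resource ID in a
# variable instead of copying it by hand. With DRY_RUN=1 (the default here)
# the commands are only printed; set DRY_RUN=0 to execute them.
set -euo pipefail

ROSA_CLUSTER_NAME=${ROSA_CLUSTER_NAME:-rosa-hcp}
AZ=${AZ:-us-east-1a}          # replace with your Availability Zone
DRY_RUN=${DRY_RUN:-1}

aws_() {  # trace the command to stderr; return a placeholder ID in dry-run mode
  if [ "$DRY_RUN" = 1 ]; then echo "+ aws $*" >&2; echo "dry-run-id"; else aws "$@"; fi
}

tag() { aws_ ec2 create-tags --resources "$1" --tags "Key=Name,Value=$2" > /dev/null; }

VPC_ID=$(aws_ ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text)
tag "$VPC_ID" "$ROSA_CLUSTER_NAME"
aws_ ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames > /dev/null

PUBLIC_SUBNET_ID=$(aws_ ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --availability-zone "$AZ" --query Subnet.SubnetId --output text)
tag "$PUBLIC_SUBNET_ID" "$ROSA_CLUSTER_NAME-public"
PRIVATE_SUBNET_ID=$(aws_ ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/24 \
  --availability-zone "$AZ" --query Subnet.SubnetId --output text)
tag "$PRIVATE_SUBNET_ID" "$ROSA_CLUSTER_NAME-private"

IGW_ID=$(aws_ ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
aws_ ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID" > /dev/null
tag "$IGW_ID" "$ROSA_CLUSTER_NAME"

PUBLIC_RT_ID=$(aws_ ec2 create-route-table --vpc-id "$VPC_ID" --query RouteTable.RouteTableId --output text)
aws_ ec2 associate-route-table --subnet-id "$PUBLIC_SUBNET_ID" --route-table-id "$PUBLIC_RT_ID" > /dev/null
aws_ ec2 create-route --route-table-id "$PUBLIC_RT_ID" --destination-cidr-block 0.0.0.0/0 \
  --gateway-id "$IGW_ID" > /dev/null
tag "$PUBLIC_RT_ID" "$ROSA_CLUSTER_NAME"

EIP_ID=$(aws_ ec2 allocate-address --domain vpc --query AllocationId --output text)
NAT_GW_ID=$(aws_ ec2 create-nat-gateway --subnet-id "$PUBLIC_SUBNET_ID" --allocation-id "$EIP_ID" \
  --query NatGateway.NatGatewayId --output text)
tag "$NAT_GW_ID" "$ROSA_CLUSTER_NAME"

PRIVATE_RT_ID=$(aws_ ec2 create-route-table --vpc-id "$VPC_ID" --query RouteTable.RouteTableId --output text)
aws_ ec2 associate-route-table --subnet-id "$PRIVATE_SUBNET_ID" --route-table-id "$PRIVATE_RT_ID" > /dev/null
aws_ ec2 create-route --route-table-id "$PRIVATE_RT_ID" --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_GW_ID" > /dev/null
tag "$PRIVATE_RT_ID" "$ROSA_CLUSTER_NAME-private"

echo "public subnet: $PUBLIC_SUBNET_ID  private subnet: $PRIVATE_SUBNET_ID"
```

The two subnet IDs printed at the end are the values to pass later as --subnet-ids when creating the cluster.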

Step 3: Create the required IAM roles and OpenID Connect configuration

Before creating a ROSA with HCP cluster, you must create the necessary IAM roles and policies and the OpenID Connect (OIDC) configuration. For more information about IAM roles and policies for ROSA with HCP, see AWS managed IAM policies for ROSA.

This procedure uses the auto mode of the ROSA CLI to automatically create the OIDC configuration necessary to create a ROSA with HCP cluster.

  1. Create the required IAM account roles and policies.

    rosa create account-roles --force-policy-creation

    The --force-policy-creation parameter updates any existing roles and policies that are present. If no roles and policies are present, the command creates these resources instead.

    Note

    If your offline access token has expired, the ROSA CLI outputs an error message stating that your authorization token needs to be updated. For troubleshooting steps, see Troubleshoot ROSA CLI expired offline access tokens.

  2. Create the OpenID Connect (OIDC) configuration that enables user authentication to the cluster. This configuration is registered to be used with OpenShift Cluster Manager (OCM).

    rosa create oidc-config --mode=auto
  3. Copy the OIDC config ID provided in the ROSA CLI output. The OIDC config ID needs to be provided later to create the ROSA with HCP cluster.

  4. To verify the OIDC configurations available for clusters associated with your user organization, run the following command.

    rosa list oidc-config
  5. Create the required IAM operator roles, replacing <OIDC_CONFIG_ID> with the OIDC config ID copied previously.

    Important

    You must supply a prefix in <PREFIX_NAME> when creating the Operator roles. Failing to do so produces an error.

    rosa create operator-roles --prefix <PREFIX_NAME> --oidc-config-id <OIDC_CONFIG_ID> --hosted-cp
  6. To verify the IAM operator roles were created, run the following command:

    rosa list operator-roles
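The commands in this step can also be run from one small script. The sketch below uses a dry-run wrapper as a safety net: with DRY_RUN=1 (the default) it only prints the rosa commands. The prefix value is an example, and the OIDC config ID must still be pasted in by hand from the rosa create oidc-config output.

```shell
#!/usr/bin/env bash
# Sketch: account roles, OIDC configuration, and operator roles in one pass.
# With DRY_RUN=1 (the default) the rosa commands are printed, not executed.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
PREFIX=${PREFIX:-rosa-hcp}    # example operator-role prefix

rosa_() { if [ "$DRY_RUN" = 1 ]; then echo "+ rosa $*"; else rosa "$@"; fi; }

rosa_ create account-roles --force-policy-creation
rosa_ create oidc-config --mode=auto
# Paste the OIDC config ID printed by the previous command before re-running
# with DRY_RUN=0:
OIDC_CONFIG_ID=${OIDC_CONFIG_ID:-<OIDC_CONFIG_ID>}
rosa_ create operator-roles --prefix "$PREFIX" --oidc-config-id "$OIDC_CONFIG_ID" --hosted-cp
rosa_ list operator-roles
```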

Step 4: Create a ROSA with HCP cluster with AWS STS and the ROSA CLI auto mode

You can create a ROSA with HCP cluster using AWS Security Token Service (AWS STS) and the auto mode that’s provided in the ROSA CLI. You have the option to create a cluster with a public API and Ingress or a private API and Ingress.

You can create a cluster with a single Availability Zone (Single-AZ) or multiple Availability Zones (Multi-AZ). In either case, the machine CIDR value of your cluster must match your VPC’s CIDR value.

The following procedure uses the rosa create cluster --hosted-cp command to create a Single-AZ ROSA with HCP cluster. To create a Multi-AZ cluster, specify multi-az in the command and the private subnet ID for each private subnet that you want to deploy to.

  1. Create a ROSA with HCP cluster with one of the following commands.

    • Create a ROSA with HCP cluster with a public API and Ingress, specifying the cluster name, operator role prefix, OIDC config ID, and public and private subnet IDs.

      rosa create cluster --cluster-name=<CLUSTER_NAME> --sts --mode=auto --hosted-cp --operator-roles-prefix <OPERATOR_ROLE_PREFIX> --oidc-config-id <OIDC_CONFIG_ID> --subnet-ids=<PUBLIC_SUBNET_ID>,<PRIVATE_SUBNET_ID>
    • Create a ROSA with HCP cluster with a private API and Ingress, specifying the cluster name, operator role prefix, OIDC config ID, and private subnet IDs.

      rosa create cluster --private --cluster-name=<CLUSTER_NAME> --sts --mode=auto --hosted-cp --operator-roles-prefix <OPERATOR_ROLE_PREFIX> --oidc-config-id <OIDC_CONFIG_ID> --subnet-ids=<PRIVATE_SUBNET_ID>
  2. Check the status of your cluster.

    rosa describe cluster -c <CLUSTER_NAME>
    Note

    If the creation process fails or the State field doesn’t change to a ready status after 10 minutes, see Troubleshoot ROSA cluster creation issues.

    To contact AWS Support or Red Hat support for assistance, see Support for ROSA.

  3. Track the progress of the cluster creation by watching the OpenShift installer logs.

    rosa logs install -c <CLUSTER_NAME> --watch
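Instead of re-running rosa describe cluster by hand, you can poll until the State field reports ready. This is a sketch under stated assumptions: the helper names are made up, and it assumes the describe output contains a line of the form "State: <value>".

```shell
#!/usr/bin/env bash
# Sketch: poll the cluster state until it is ready. Helper names are
# illustrative; assumes `rosa describe cluster` prints a "State: <value>" line.

cluster_state() {
  rosa describe cluster -c "$1" | awk -F': *' '/^State:/ { print $2; exit }'
}

# wait_for_ready <cluster_name> <max_attempts> <sleep_seconds>
wait_for_ready() {
  local i state
  for (( i = 1; i <= $2; i++ )); do
    state=$(cluster_state "$1")
    echo "attempt $i: state=${state:-unknown}"
    [ "$state" = "ready" ] && return 0
    sleep "$3"
  done
  return 1
}

# Example (uncomment with a real cluster):
# wait_for_ready <CLUSTER_NAME> 60 60   # check every minute for up to an hour
```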

Step 5: Configure an identity provider and grant cluster access

ROSA includes a built-in OAuth server. After your cluster is created, you must configure OAuth to use an identity provider. You can then add users to your configured identity provider to grant them access to your cluster. You can grant these users cluster-admin or dedicated-admin permissions as required.

You can configure different identity provider types for your ROSA cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect, and HTPasswd identity providers.

Important

The HTPasswd identity provider is included only to enable a single, static administrator user to be created. HTPasswd isn’t supported as a general-use identity provider for ROSA.

The following procedure configures a GitHub identity provider as an example. For instructions on how to configure each of the supported identity provider types, see Configuring identity providers for AWS STS.

  1. Navigate to github.com and log in to your GitHub account.

  2. If you don’t have a GitHub organization to use for identity provisioning for your cluster, create one. For more information, see the steps in the GitHub documentation.

  3. Using the ROSA CLI’s interactive mode, configure an identity provider for your cluster.

    rosa create idp --cluster=<CLUSTER_NAME> --interactive
  4. Follow the configuration prompts in the output to restrict cluster access to members of your GitHub organization.

    I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected.
    ? Type of identity provider: github
    ? Identity provider name: github-1
    ? Restrict to members of: organizations
    ? GitHub organizations: <GITHUB_ORG_NAME>
    ? To use GitHub as an identity provider, you must first register the application:
      - Open the following URL:
        https://github.com/organizations/<GITHUB_ORG_NAME>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<CLUSTER_NAME>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com
      - Click on 'Register application'
    ...
  5. Open the URL in the output, replacing <GITHUB_ORG_NAME> with the name of your GitHub organization.

  6. On the GitHub web page, choose Register application to register a new OAuth application in your GitHub organization.

  7. Use the information from the GitHub OAuth page to populate the remaining rosa create idp interactive prompts by running the following command. Replace <GITHUB_CLIENT_ID> and <GITHUB_CLIENT_SECRET> with the credentials from your GitHub OAuth application.

    ...
    ? Client ID: <GITHUB_CLIENT_ID>
    ? Client Secret: [? for help] <GITHUB_CLIENT_SECRET>
    ? GitHub Enterprise Hostname (optional):
    ? Mapping method: claim
    I: Configuring IDP for cluster '<CLUSTER_NAME>'
    I: Identity Provider 'github-1' has been created.
       It will take up to 1 minute for this configuration to be enabled.
       To add cluster administrators, see 'rosa grant user --help'.
       To login into the console, open https://console-openshift-console.apps.<CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com and click on github-1.
    Note

    It might take approximately two minutes for the identity provider configuration to become active. If you configured a cluster-admin user, you can run oc get pods -n openshift-authentication --watch to watch the OAuth pods redeploy with the updated configuration.

  8. Verify that the identity provider is configured correctly.

    rosa list idps --cluster=<CLUSTER_NAME>

Step 6: Grant user access to a cluster

You can grant a user access to your cluster by adding them to the configured identity provider.

The following procedure adds a user to a GitHub organization that’s configured for identity provisioning to the cluster.

  1. Navigate to github.com and log in to your GitHub account.

  2. Invite users that require cluster access to your GitHub organization. For more information, see Inviting users to join your organization in the GitHub documentation.

Step 7: Grant administrator permissions to a user

After you add a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin permissions for your cluster.

Configure cluster-admin permissions

  1. Grant the cluster-admin permissions by running the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa grant user cluster-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify that the user is listed as a member of the cluster-admins group.

    rosa list users --cluster=<CLUSTER_NAME>

Configure dedicated-admin permissions

  1. Grant the dedicated-admin permissions by running the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.

    rosa grant user dedicated-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
  2. Verify that the user is listed as a member of the dedicated-admins group.

    rosa list users --cluster=<CLUSTER_NAME>

Step 8: Access a cluster through the Red Hat Hybrid Cloud Console

Log in to your cluster through the Red Hat Hybrid Cloud Console.

  1. Obtain the console URL for your cluster using the following command. Replace <CLUSTER_NAME> with the name of your cluster.

    rosa describe cluster -c <CLUSTER_NAME> | grep Console
  2. Navigate to the console URL in the output and log in.

    In the Log in with…​ dialog, choose the identity provider name and complete any authorization requests presented by your provider.

Step 9: Deploy an application from the Developer Catalog

From the Red Hat Hybrid Cloud Console, you can deploy a Developer Catalog test application and expose it with a route.

  1. Navigate to Red Hat Hybrid Cloud Console and choose the cluster you want to deploy the app into.

  2. On the cluster’s page, choose Open console.

  3. In the Administrator perspective, choose Home > Projects > Create Project.

  4. Enter a name for your project and optionally add a Display Name and Description.

  5. Choose Create to create the project.

  6. Switch to the Developer perspective and choose +Add. Make sure that the selected project is the one that was just created.

  7. In the Developer Catalog dialog, choose All services.

  8. In the Developer Catalog page, choose Languages > JavaScript from the menu.

  9. Choose Node.js, and then choose Create Application to open the Create Source-to-Image Application page.

    Note

    You might need to choose Clear All Filters to display the Node.js option.

  10. In the Git section, choose Try Sample.

  11. In the Name field, add a unique name.

  12. Choose Create.

    Note

    The new application takes several minutes to deploy.

  13. When the deployment is complete, choose the route URL for the application.

    A new tab in the browser opens with a message that’s similar to the following.

    Welcome to your Node.js application on OpenShift
  14. (Optional) Delete the application and clean up resources:

    1. In the Administrator perspective, choose Home > Projects.

    2. Open the action menu for your project and choose Delete Project.

Step 10: Delete a cluster and AWS STS resources

You can use the ROSA CLI to delete a cluster that uses AWS Security Token Service (AWS STS). You can also use the ROSA CLI to delete the IAM roles and OIDC provider created by ROSA. To delete the IAM policies created by ROSA, you can use the IAM console.

Important

IAM roles and policies created by ROSA might be used by other ROSA clusters in the same account.

  1. Delete the cluster and watch the logs. Replace <CLUSTER_NAME> with the name or ID of your cluster.

    rosa delete cluster --cluster=<CLUSTER_NAME> --watch
    Important

    You must wait for the cluster to delete completely before you remove the IAM roles, policies, and OIDC provider. The account IAM roles are required to delete the resources created by the installer. The operator IAM roles are required to clean up the resources created by the OpenShift operators. The operators use the OIDC provider to authenticate.

  2. Delete the OIDC provider that the cluster operators use to authenticate by running the following command.

    rosa delete oidc-provider -c <CLUSTER_ID> --mode auto
  3. Delete the cluster-specific operator IAM roles.

    rosa delete operator-roles -c <CLUSTER_ID> --mode auto
  4. Delete the account IAM roles using the following command. Replace <PREFIX> with the prefix of the account IAM roles to delete. If you didn’t specify a custom prefix when creating the account IAM roles, specify the default ManagedOpenShift prefix.

    rosa delete account-roles --prefix <PREFIX> --mode auto
  5. Delete the IAM policies created by ROSA.

    1. Log in to the IAM console.

    2. On the left menu under Access management, choose Policies.

    3. Select the policy that you want to delete and choose Actions > Delete.

    4. Enter the policy name and choose Delete.

    5. Repeat this step to delete each of the IAM policies for the cluster.
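The CLI portion of this cleanup (steps 1 through 4) can be sketched as one script using the same dry-run pattern as earlier: with DRY_RUN=1 (the default) the rosa commands are printed rather than executed. The cluster name and prefix values are placeholders.

```shell
#!/usr/bin/env bash
# Sketch: the CLI deletion steps above in one pass. With DRY_RUN=1 (the
# default) the rosa commands are printed, not executed; set DRY_RUN=0 to run.
set -euo pipefail

CLUSTER_NAME=${CLUSTER_NAME:-rosa-hcp}
PREFIX=${PREFIX:-ManagedOpenShift}   # or the custom prefix you created the roles with
DRY_RUN=${DRY_RUN:-1}

rosa_() { if [ "$DRY_RUN" = 1 ]; then echo "+ rosa $*"; else rosa "$@"; fi; }

# The cluster must be fully deleted before the roles and OIDC provider are
# removed; the --watch flag follows the uninstall logs.
rosa_ delete cluster --cluster="$CLUSTER_NAME" --watch
rosa_ delete oidc-provider -c "$CLUSTER_NAME" --mode auto
rosa_ delete operator-roles -c "$CLUSTER_NAME" --mode auto
rosa_ delete account-roles --prefix "$PREFIX" --mode auto
```

The IAM policies themselves still have to be removed through the IAM console, as described in step 5.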