

Implementing workload management

You can use workload management (WLM) to define multiple query queues and to route queries to the appropriate queues at runtime.

In some cases, multiple sessions or users might run queries at the same time. Some of these queries might consume cluster resources for long periods and affect the performance of the other queries. For example, suppose that one group of users occasionally submits complex, long-running queries that select and sort rows from several large tables. Another group frequently submits short queries that select only a few rows from one or two tables and run in a few seconds. In this situation, the short-running queries might have to wait in a queue for a long-running query to complete. WLM helps manage this situation.

You can configure Amazon Redshift workload management to run in either automatic or manual mode.

  • Automatic WLM

    To maximize system throughput and use resources effectively, you can enable Amazon Redshift to manage how resources are divided to run concurrent queries with automatic WLM. Automatic WLM manages the resources required to run queries. Amazon Redshift determines how many queries run concurrently and how much memory is allocated to each dispatched query. You can enable automatic WLM using the Amazon Redshift console by choosing Switch WLM mode and then choosing Auto WLM. With this choice, up to eight queues are used to manage queries, and the Memory and Concurrency on main fields are both set to Auto. You can specify a priority that reflects the business priority of the workload or users that map to each queue. The default priority of queries is set to Normal. For information about how to change the priority of queries in a queue, see Query priority. For more information, see Implementing automatic WLM.

    At runtime, you can route queries to these queues according to user groups or query groups. You can also configure a query monitoring rule (QMR) to limit long-running queries.

    Working with concurrency scaling and automatic WLM, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. For more information, see Working with concurrency scaling.

    Note

    We recommend that you create a parameter group and choose automatic WLM to manage your query resources. For details about how to migrate from manual WLM to automatic WLM, see Migrating from manual WLM to automatic WLM.

  • Manual WLM

    Alternatively, you can manage system performance and your users' experience by modifying your WLM configuration to create separate queues for the long-running queries and the short-running queries. At runtime, you can route queries to these queues according to user groups or query groups. You can enable this manual configuration using the Amazon Redshift console by switching to Manual WLM. With this choice, you specify the queues used to manage queries, and the Memory and Concurrency on main field values. With a manual configuration, you can configure up to eight query queues and set the number of queries that can run in each of those queues concurrently. You can set up rules to route queries to particular queues based on the user running the query or labels that you specify. You can also configure the amount of memory allocated to each queue, so that large queries run in queues with more memory than other queues. You can also configure a query monitoring rule (QMR) to limit long-running queries. For more information, see Implementing manual WLM.

    Note

    We recommend configuring your manual WLM query queues with a total of 15 or fewer query slots. For more information, see Concurrency level.
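To make the Auto WLM choices above concrete, here is a minimal sketch of what an Auto WLM setup might look like when expressed as the JSON value of a cluster parameter group's `wlm_json_configuration` parameter. The group names (`etl_users`, `dashboard`) are hypothetical examples, and the exact field set should be verified against the current Amazon Redshift documentation; with Auto WLM, Amazon Redshift manages concurrency and memory itself, so each queue mainly declares its routing and a business priority.

```python
import json

# Hypothetical Auto WLM configuration. Amazon Redshift decides how many
# queries run concurrently and how much memory each one gets; the queues
# here only route queries (by user group or query group) and assign a
# business priority.
auto_wlm_config = [
    {
        "user_group": ["etl_users"],   # hypothetical user group
        "priority": "high",            # business priority for this queue
        "auto_wlm": True,
    },
    {
        "query_group": ["dashboard"],  # hypothetical query-group label
        "priority": "normal",          # the default query priority
        "auto_wlm": True,
    },
]

# Serialize to the JSON string that the parameter would hold.
print(json.dumps(auto_wlm_config, indent=2))
```

A configuration like this would then be attached to the cluster's parameter group (for example, through the console or the parameter group API).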
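For the manual mode, the sketch below shows one possible `wlm_json_configuration` value with separate queues for long-running and short-running queries, a query monitoring rule (QMR) that stops queries running longer than 10 minutes, and a check against the recommended maximum of 15 total query slots. The queue labels and the specific concurrency and memory numbers are illustrative assumptions, not prescribed values.

```python
import json

# Hypothetical manual WLM configuration: a low-concurrency queue with
# more memory for long-running queries, a high-concurrency queue for
# short queries, and a default queue for everything else.
manual_wlm_config = [
    {
        "query_group": ["long_running"],   # hypothetical routing label
        "query_concurrency": 3,
        "memory_percent_to_use": 60,
        "rules": [
            {
                "rule_name": "stop_long_queries",
                "predicate": [
                    # query_execution_time is measured in seconds
                    {"metric_name": "query_execution_time",
                     "operator": ">",
                     "value": 600}
                ],
                "action": "abort",
            }
        ],
    },
    {
        "query_group": ["short_running"],  # hypothetical routing label
        "query_concurrency": 10,
        "memory_percent_to_use": 30,
    },
    # The last queue, with no user_group or query_group, acts as the
    # default queue for unrouted queries.
    {
        "query_concurrency": 2,
        "memory_percent_to_use": 10,
    },
]

# The guidance above recommends 15 or fewer query slots in total
# across all manual queues.
total_slots = sum(q["query_concurrency"] for q in manual_wlm_config)
assert total_slots <= 15, "total query slots exceed the recommended maximum"

print(json.dumps(manual_wlm_config))
```

Keeping the slot check next to the configuration makes the concurrency recommendation explicit whenever the queues are edited.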