
Tightly Coupled Scenarios - High Performance Computing Lens

Tightly Coupled Scenarios

Tightly coupled applications consist of parallel processes that depend on each other to carry out the calculation. Unlike a loosely coupled computation, all processes of a tightly coupled simulation iterate together and must communicate with one another at each step. An iteration is one step of the overall simulation. Tightly coupled calculations typically span tens to thousands of processes or cores and run for anywhere from one to millions of iterations. Because the failure of one node usually causes the failure of the entire calculation, application-level checkpointing is performed regularly during a computation so that a simulation can be restarted from a known state.
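As an illustration of application-level checkpointing, the sketch below periodically persists the iteration counter and solver state so a failed run can resume from the last known-good step. This is a generic Python sketch, not the pattern of any specific HPC framework; the file name, state layout, and function names are assumptions:

```python
import os
import pickle

CHECKPOINT = "sim_state.pkl"  # hypothetical checkpoint file name


def save_checkpoint(step, state, path=CHECKPOINT):
    """Persist the iteration counter and solver state to disk."""
    with open(path, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)


def load_checkpoint(path=CHECKPOINT):
    """Return (step, state) from the latest checkpoint, or a fresh start."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            data = pickle.load(f)
        return data["step"], data["state"]
    return 0, {"value": 0.0}


def run(total_steps=10, checkpoint_every=3):
    """Resume from the newest checkpoint (if any) and iterate to the end,
    checkpointing every few steps so a node failure loses little work."""
    step, state = load_checkpoint()
    while step < total_steps:
        state["value"] += 1.0  # stand-in for one simulation iteration
        step += 1
        if step % checkpoint_every == 0:
            save_checkpoint(step, state)
    return step, state
```

A restart simply calls `run()` again: it picks up from the most recent checkpoint instead of iteration zero, which is the point of checkpointing in a long tightly coupled simulation.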

These simulations rely on the Message Passing Interface (MPI) for interprocess communication. Shared-memory parallelism via OpenMP can be combined with MPI in a hybrid model. Examples of tightly coupled HPC workloads include computational fluid dynamics, weather prediction, and reservoir simulation.
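To make the per-iteration communication pattern concrete, the sketch below decomposes a 1-D array across "ranks" and exchanges one-cell halos before each stencil update, the way a tightly coupled MPI code exchanges boundary data with its neighbors every iteration. This is a plain-Python illustration of the pattern, not real MPI, and the function names and fixed-boundary treatment are assumptions:

```python
def exchange_halos(subdomains):
    """Simulate the neighbor exchange each rank performs per iteration:
    every subdomain receives its left neighbor's last cell and its right
    neighbor's first cell (boundary subdomains reuse their own edge cell)."""
    halos = []
    for i, sub in enumerate(subdomains):
        left = subdomains[i - 1][-1] if i > 0 else sub[0]
        right = subdomains[i + 1][0] if i < len(subdomains) - 1 else sub[-1]
        halos.append((left, right))
    return halos


def jacobi_step(subdomains):
    """One iteration of a 3-point averaging stencil. Every subdomain needs
    data owned by its neighbors, so all ranks must communicate before any
    can advance -- the defining property of a tightly coupled computation."""
    halos = exchange_halos(subdomains)
    new = []
    for sub, (left, right) in zip(subdomains, halos):
        padded = [left] + sub + [right]
        new.append([(padded[j - 1] + padded[j] + padded[j + 1]) / 3.0
                    for j in range(1, len(padded) - 1)])
    return new
```

In a real MPI code the halo exchange would be `MPI_Sendrecv` (or nonblocking sends and receives) between neighboring ranks, which is why internode latency and bandwidth dominate performance at every iteration.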

A suitable architecture for a tightly coupled HPC workload has the following considerations:

  • Network: The network requirements for tightly coupled calculations are demanding. Slow communication between nodes slows the entire calculation. The largest instance size, enhanced networking, and cluster placement groups are required for consistent networking performance; together these minimize simulation runtimes and reduce overall costs. Tightly coupled applications range in size: a large problem spread over a large number of processes or cores usually parallelizes well, while small cases, with lower total computational requirements, place the greatest demand on the network. Certain Amazon EC2 instances support the Elastic Fabric Adapter (EFA), a network interface that enables applications requiring high levels of internode communication to run at scale on AWS. EFA’s custom-built, operating-system-bypass hardware interface enhances the performance of interinstance communication, which is critical to scaling tightly coupled applications.

  • Storage: Tightly coupled workloads vary in their storage requirements, which are driven by dataset size and the desired performance for transferring, reading, and writing the data. Temporary data storage or scratch space requires special consideration.

  • Compute: EC2 instances are offered in a variety of configurations with varying core-to-memory ratios. For parallel applications, it is helpful to spread memory-intensive parallel simulations across more compute nodes to lessen the memory-per-core requirements and to target the best-performing instance type. Tightly coupled applications require a homogeneous cluster built from similar compute nodes. Targeting the largest instance size minimizes internode network latency while providing the maximum network performance when communicating between nodes.

  • Deployment: A variety of deployment options are available. End-to-end automation is achievable, as is launching simulations in a “traditional” cluster environment. Cloud scalability enables you to launch hundreds of large multi-process cases at once, so there is no need to wait in a queue. Tightly coupled simulations can be deployed with end-to-end solutions such as AWS Batch and AWS ParallelCluster, or with solutions built on AWS services such as AWS CloudFormation or EC2 Fleet.
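As one illustrative deployment sketch, the AWS ParallelCluster (v3-style) configuration fragment below combines several of the considerations above: a homogeneous compute queue on a single large instance type, EFA enabled, and a cluster placement group. The region, OS, instance type, subnet ID, and counts are placeholders for illustration, not recommendations:

```yaml
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5n.large
  Networking:
    SubnetId: subnet-01234567          # placeholder subnet
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: tightly-coupled
          InstanceType: c5n.18xlarge   # largest size in the family
          MinCount: 0
          MaxCount: 16
          Efa:
            Enabled: true              # OS-bypass networking for MPI
      Networking:
        SubnetId: subnet-01234567      # placeholder subnet
        PlacementGroup:
          Enabled: true                # cluster placement group
```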

© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.