Scaling - Serverless 3D Data Optimization Pipelines on AWS


This solution scales with the volume of input files placed in your S3 bucket. For example, if you placed 350 files in the S3 bucket's in folder, AWS Batch jobs would be started in parallel as the files arrived. During testing, we ran this experiment with a set of 350 files that took nearly 14 hours of total compute time to convert; by using this solution, we received the results back in just 10 minutes through AWS Batch's fully managed fan-out parallelization.
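To put those figures in perspective, a quick back-of-the-envelope calculation (illustrative only, using the numbers reported above) shows the average per-file conversion time and the effective speedup from the parallel fan-out:

```python
# Figures reported in the experiment above.
total_files = 350
total_compute_hours = 14    # serial compute time across all files
wall_clock_minutes = 10     # observed wall-clock time with parallel AWS Batch jobs

# Average conversion time per file, assuming compute time is spread evenly.
avg_minutes_per_file = total_compute_hours * 60 / total_files

# Effective speedup: serial compute time vs. observed wall-clock time.
speedup = total_compute_hours * 60 / wall_clock_minutes

print(f"Average per file: {avg_minutes_per_file:.1f} minutes")  # 2.4 minutes
print(f"Effective speedup: {speedup:.0f}x")                     # 84x
```

In other words, each file takes about 2.4 minutes to convert on average, and running the jobs in parallel yields roughly an 84x reduction in wall-clock time compared with processing the files sequentially.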