
Amazon EMR Serverless observability, Part 1: Monitor Amazon EMR Serverless workers in near real time using Amazon CloudWatch


Amazon EMR Serverless allows you to run open source big data frameworks such as Apache Spark and Apache Hive without managing clusters and servers. With EMR Serverless, you can run analytics workloads at any scale with automatic scaling that resizes resources in seconds to meet changing data volumes and processing requirements.

We have launched job worker metrics in Amazon CloudWatch for EMR Serverless. This feature allows you to monitor vCPU, memory, ephemeral storage, and disk I/O allocation and usage metrics at an aggregate worker level for your Spark and Hive jobs.

This post is part of a series about EMR Serverless observability. In this post, we discuss how to use these CloudWatch metrics to monitor EMR Serverless workers in near real time.

CloudWatch metrics for EMR Serverless

At the per-Spark-job level, EMR Serverless emits the following new metrics to CloudWatch for both the driver and executors. These metrics provide granular insights into job performance, bottlenecks, and resource utilization.

  • WorkerCpuAllocated – The total number of vCPU cores allocated for workers in a job run
  • WorkerCpuUsed – The total number of vCPU cores used by workers in a job run
  • WorkerMemoryAllocated – The total memory in GB allocated for workers in a job run
  • WorkerMemoryUsed – The total memory in GB used by workers in a job run
  • WorkerEphemeralStorageAllocated – The number of bytes of ephemeral storage allocated for workers in a job run
  • WorkerEphemeralStorageUsed – The number of bytes of ephemeral storage used by workers in a job run
  • WorkerStorageReadBytes – The number of bytes read from storage by workers in a job run
  • WorkerStorageWriteBytes – The number of bytes written to storage by workers in a job run

The following are the benefits of monitoring your EMR Serverless jobs with CloudWatch:

  • Optimize resource utilization – You can gain insights into resource utilization patterns and optimize your EMR Serverless configurations for better efficiency and cost savings. For example, underutilization of vCPUs or memory can reveal resource wastage, allowing you to optimize worker sizes and realize potential cost savings.
  • Diagnose common errors – You can identify root causes and mitigations for common errors without log diving. For example, you can monitor the usage of ephemeral storage and mitigate disk bottlenecks by preemptively allocating more storage per worker.
  • Gain near real-time insights – CloudWatch offers near real-time monitoring capabilities, allowing you to track the performance of your EMR Serverless jobs as they run, for quick detection of any anomalies or performance issues.
  • Configure alerts and notifications – CloudWatch enables you to set up alarms using Amazon Simple Notification Service (Amazon SNS) based on predefined thresholds, allowing you to receive notifications through email or text message when specific metrics reach critical levels (see the sketch after this list).
  • Conduct historical analysis – CloudWatch stores historical data, allowing you to analyze trends over time, identify patterns, and make informed decisions for capacity planning and workload optimization.
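As a minimal sketch of that alerting workflow, the following AWS CLI command creates an alarm that notifies an SNS topic when aggregate worker memory usage stays high. The namespace, dimension, threshold, and topic ARN shown here are assumptions; verify them against the actual metric in your CloudWatch console.

# Alarm when aggregate worker memory usage exceeds 1,200 GB
# for two consecutive 5-minute periods (values are illustrative)
aws cloudwatch put-metric-alarm \
  --alarm-name emrs-worker-memory-high \
  --namespace AWS/EMRServerless \
  --metric-name WorkerMemoryUsed \
  --dimensions Name=ApplicationId,Value=<APPLICATION_ID> \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 1200 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:<REGION>:<ACCOUNT_ID>:<TOPIC_NAME>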

Solution overview

To further enhance this observability experience, we have created a solution that gathers all these metrics on a single CloudWatch dashboard for an EMR Serverless application. You need to launch one AWS CloudFormation template per EMR Serverless application. You can monitor all the jobs submitted to a single EMR Serverless application using the same CloudWatch dashboard. To learn more about this dashboard and deploy this solution into your own account, refer to the EMR Serverless CloudWatch Dashboard GitHub repository.

In the following sections, we walk you through how you can use this dashboard to perform the following actions:

  • Optimize your resource utilization to save costs without impacting job performance
  • Diagnose failures due to common errors without the need for log diving, and resolve those errors optimally

Prerequisites

To run the sample jobs provided in this post, you need to create an EMR Serverless application with default settings using the AWS Management Console or AWS Command Line Interface (AWS CLI), and then launch the CloudFormation template from the GitHub repo with the EMR Serverless application ID provided as the input to the template.

You need to submit all the jobs in this post to the same EMR Serverless application. If you want to monitor a different application, you can deploy this template for your own EMR Serverless application ID.
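If you prefer the AWS CLI for these two steps, the following sketch shows their general shape. The release label, stack name, template file name, and the template's parameter name are assumptions; check the GitHub repository for the exact values it expects.

# Create an EMR Serverless Spark application with default settings
aws emr-serverless create-application \
  --name emrs-cw-dashboard-demo \
  --type SPARK \
  --release-label emr-7.1.0

# Deploy the dashboard stack with the application ID returned above
# (template file and parameter name are assumed; see the repository)
aws cloudformation deploy \
  --stack-name emrs-cw-dashboard \
  --template-file emr_serverless_cloudwatch_dashboard.yaml \
  --parameter-overrides EMRServerlessApplicationID=<APPLICATION_ID>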

Optimize resource utilization

When running Spark jobs, you typically start with the default configurations. It can be challenging to optimize your workload without any visibility into actual resource usage. Some of the most common configurations that we've seen customers adjust are spark.driver.cores, spark.driver.memory, spark.executor.cores, and spark.executor.memory.

To illustrate how the newly added CloudWatch dashboard worker-level metrics can help you fine-tune your job configurations for better price-performance and enhanced resource utilization, let's run the following Spark job, which uses the NOAA Integrated Surface Database (ISD) dataset to run some transformations and aggregations.

Use the following command to run this job on EMR Serverless. Provide your Amazon Simple Storage Service (Amazon S3) bucket and the EMR Serverless application ID for which you launched the CloudFormation template. Make sure to use the same application ID to submit all the sample jobs in this post. Additionally, provide an AWS Identity and Access Management (IAM) runtime role.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-1 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/windycity.py",
      "entryPointArguments": ["s3://noaa-global-hourly-pds/2024/", "s3://<BUCKET_NAME>/emrs-cw-dashboard-test-1/"]
    } }'

Now let's check the executor vCPUs and memory on the CloudWatch dashboard.

This job was submitted with default EMR Serverless Spark configurations. From the Executor CPU Allocated metric in the preceding screenshot, the job was allocated 396 vCPUs in total (99 executors * 4 vCPUs per executor). However, the job only used a maximum of 110 vCPUs based on Executor CPU Used. This indicates oversubscription of vCPU resources. Similarly, the job was allocated 1,584 GB of memory in total based on Executor Memory Allocated. However, from the Executor Memory Used metric, we see that the job only used 176 GB of memory, indicating memory oversubscription.

Now let's rerun this job with the following adjusted configurations.

Configuration                          Original Job (Default)   Rerun Job (Adjusted)
spark.executor.memory                  14 GB                    3 GB
spark.executor.cores                   4                        2
spark.dynamicAllocation.maxExecutors   99                       30
Total Resource Utilization             6.521 vCPU-hours         1.739 vCPU-hours
                                       26.084 memoryGB-hours    3.688 memoryGB-hours
                                       32.606 storageGB-hours   17.394 storageGB-hours
Billable Resource Utilization          7.046 vCPU-hours         1.739 vCPU-hours
                                       28.182 memoryGB-hours    3.688 memoryGB-hours
                                       0 storageGB-hours        0 storageGB-hours

We use the following code:

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-2 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/windycity.py",
      "entryPointArguments": ["s3://noaa-global-hourly-pds/2024/", "s3://<BUCKET_NAME>/emrs-cw-dashboard-test-2/"],
      "sparkSubmitParameters": "--conf spark.driver.cores=2 --conf spark.driver.memory=3g --conf spark.executor.memory=3g --conf spark.executor.cores=2 --conf spark.dynamicAllocation.maxExecutors=30"
    } }'

Let's check the executor metrics from the CloudWatch dashboard again for this job run.

In the second job, we see lower allocation of both vCPUs (396 vs. 60) and memory (1,584 GB vs. 120 GB) as expected, resulting in better utilization of resources. The original job ran for 4 minutes, 41 seconds. The second job took 4 minutes, 54 seconds. This reconfiguration resulted in roughly 79% cost savings without affecting job performance.

You can use these metrics to further optimize your job by increasing or decreasing the number of workers or the allocated resources.
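If you want the same numbers outside the dashboard, you can query them directly from CloudWatch. The following sketch pulls the peak aggregate worker vCPU usage for a job run; the dimension names used here (ApplicationId, JobRunId) are assumptions, so confirm them on the metric in the CloudWatch console before relying on this.

# Peak WorkerCpuUsed per minute over a one-hour window
# (dimension names are assumed; verify in the CloudWatch console)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EMRServerless \
  --metric-name WorkerCpuUsed \
  --dimensions Name=ApplicationId,Value=<APPLICATION_ID> Name=JobRunId,Value=<JOB_RUN_ID> \
  --statistics Maximum \
  --period 60 \
  --start-time 2024-10-30T00:00:00Z \
  --end-time 2024-10-30T01:00:00Z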

Diagnose and resolve job failures

Using the CloudWatch dashboard, you can diagnose job failures due to issues related to CPU, memory, and storage, such as out of memory or no space left on device. This enables you to identify and resolve common errors quickly without having to check the logs or navigate through the Spark History Server. Additionally, because you can check the resource usage from the dashboard, you can fine-tune the configurations by increasing the required resources only as much as needed instead of oversubscribing to resources, which further saves costs.

Driver errors

To illustrate this use case, let's run the following Spark job, which creates a large Spark data frame with a few million rows. Typically, this operation is done by the Spark driver. While submitting the job, we also configure spark.rpc.message.maxSize, because it's required for task serialization of data frames with a large number of columns.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-3 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/create-large-disk.py",
      "sparkSubmitParameters": "--conf spark.rpc.message.maxSize=2000"
    } }'

After a few minutes, the job failed with the error message "Encountered errors when releasing containers," as seen in the Job details section.

When encountering non-descriptive error messages, it becomes crucial to investigate by examining the driver and executor logs. But before log diving, let's first check the CloudWatch dashboard, specifically the driver metrics, because releasing containers is generally performed by the driver.

We can see that Driver CPU Used and Driver Storage Used are well within their respective allocated values. However, upon checking Driver Memory Allocated and Driver Memory Used, we can see that the driver was using all 16 GB of the memory allocated to it. By default, EMR Serverless drivers are assigned 16 GB of memory.

Let's rerun the job with more driver memory allocated. Let's set driver memory to 27 GB as the starting point, because spark.driver.memory + spark.driver.memoryOverhead must be less than 30 GB for the default worker type (with the default memoryOverhead of 10%, 27 GB + 2.7 GB = 29.7 GB stays just under that limit). spark.rpc.message.maxSize remains unchanged.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-4 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/create-large-disk.py",
      "sparkSubmitParameters": "--conf spark.driver.memory=27G --conf spark.rpc.message.maxSize=2000"
    } }'

The job succeeded this time. Let's check the CloudWatch dashboard to observe driver memory utilization.

As we can see, the allocated memory is now 30 GB, but the actual driver memory usage didn't exceed 21 GB during the job run. Therefore, we can further optimize costs here by reducing the value of spark.driver.memory. We reran the same job with spark.driver.memory set to 22 GB, and the job still succeeded with better driver memory utilization.
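For reference, a sketch of that final rerun follows; it differs from the previous command only in the driver memory setting (the job name here is illustrative).

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-4-rerun \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/create-large-disk.py",
      "sparkSubmitParameters": "--conf spark.driver.memory=22G --conf spark.rpc.message.maxSize=2000"
    } }'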

Executor errors

Using CloudWatch for observability is ideal for diagnosing driver-related issues because there is only one driver per job, and the driver resources used reflect the actual resource usage of that single driver. In contrast, executor metrics are aggregated across all the workers. However, you can use this dashboard to provision only an adequate amount of resources to make your job succeed, thereby avoiding oversubscription of resources.

For example, let's run the following Spark job, which simulates uniform disk over-utilization across all workers by processing very large NOAA datasets from multiple years. This job also transiently caches a very large data frame on disk.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-5 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/noaa-disk.py"
    } }'

After a few minutes, we can see that the job failed with a "No space left on device" error in the Job details section, which indicates that some of the workers ran out of disk space.

Checking the Running Executors metric on the dashboard, we can identify that there were 99 executor workers running. Each worker comes with 20 GB of storage by default.

Because this is a Spark task failure, let's check the Executor Storage Allocated and Executor Storage Used metrics on the dashboard (the driver won't run any tasks).

As we can see, the 99 executors used up a total of 1,940 GB of the total allocated executor storage of 2,126 GB. This includes both the data shuffled by the executors and the storage used for caching the data frame. We don't see the full 2,126 GB being utilized in this graph because a few of the 99 executors might not have been holding much data when the job failed (before those executors could start processing tasks and store data frame chunks).

Let's rerun the same job but with increased executor disk size using the parameter spark.emr-serverless.executor.disk. Let's try 40 GB of disk per executor as a starting point.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-6 \
  --application-id <APPLICATION_ID> \
  --execution-role-arn <JOB_ROLE_ARN> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<BUCKET_NAME>/scripts/noaa-disk.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.executor.disk=40G"
    }
  }'

This time, the job ran successfully. Let's check the Executor Storage Allocated and Executor Storage Used metrics.

Executor Storage Allocated is now 4,251 GB because we doubled the value of spark.emr-serverless.executor.disk. Although there is now twice as much aggregate executor storage, the job still used only a maximum of 1,940 GB out of 4,251 GB. This indicates that our executors were likely running out of disk space by only a few GBs. Therefore, we can try setting spark.emr-serverless.executor.disk to an even lower value, such as 25 GB or 30 GB instead of 40 GB, to save storage costs, as we did in the previous scenario. In addition, you can monitor Executor Storage Read Bytes and Executor Storage Write Bytes to see if your job is I/O intensive. In that case, you can use the shuffle-optimized disks feature of EMR Serverless to further improve your job's I/O performance.
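As a sketch, opting in to shuffle-optimized disks is a matter of setting the disk type on the workers at submission time. The property name and value below are assumptions based on the EMR Serverless shuffle-optimized disks feature, so verify them against the documentation for your release before use.

# sparkSubmitParameters fragment: request shuffle-optimized executor
# disks (property name assumed; verify in the EMR Serverless docs)
--conf spark.emr-serverless.executor.disk.type=shuffle_optimized \
--conf spark.emr-serverless.executor.disk=40G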

The dashboard is also useful for capturing information about transient storage used while caching or persisting data frames, including spill-to-disk scenarios. The Storage tab of the Spark History Server records any caching activities, as seen in the following screenshot. However, this data is lost from the Spark History Server after the cache is evicted or the job finishes. Therefore, Executor Storage Used can be used to analyze a failed job run due to transient storage issues.

In this particular example, the data was evenly distributed among the executors. However, if you have a data skew (for example, only 1–2 executors out of 99 process the most data, and as a result your job runs out of disk space), the CloudWatch dashboard won't accurately capture the scenario, because the storage data is aggregated across all the executors for a job. For diagnosing issues at the individual executor level, we need to track per-executor-level metrics. We explore more advanced examples of how per-worker-level metrics can help you identify, mitigate, and resolve hard-to-find issues through EMR Serverless integration with Amazon Managed Service for Prometheus.

Conclusion

In this post, you learned how to effectively manage and optimize your EMR Serverless application using a single CloudWatch dashboard with enhanced EMR Serverless metrics. These metrics are available in all AWS Regions where EMR Serverless is available. For more details about this feature, refer to Job-level monitoring.


About the Authors

Kashif Khan is a Sr. Analytics Specialist Solutions Architect at AWS, specializing in big data services like Amazon EMR, AWS Lake Formation, AWS Glue, Amazon Athena, and Amazon DataZone. With over a decade of experience in the big data domain, he possesses extensive expertise in architecting scalable and robust solutions. His role involves providing architectural guidance and collaborating closely with customers to design tailored solutions using AWS analytics services to unlock the full potential of their data.

Veena Vasudevan is a Principal Partner Solutions Architect and Data & AI specialist at AWS. She helps customers and partners build highly optimized, scalable, and secure solutions; modernize their architectures; and migrate their big data, analytics, and AI/ML workloads to AWS.
