
Accelerate Amazon Redshift data lake queries with AWS Glue Data Catalog Column Statistics


Amazon Redshift lets you efficiently query and retrieve structured and semi-structured data from open-format files in your Amazon S3 data lake without having to load the data into Amazon Redshift tables. Amazon Redshift extends SQL capabilities to your data lake, enabling you to run analytical queries. Amazon Redshift supports a wide variety of tabular data formats like CSV, JSON, Parquet, and ORC, and open table formats like Apache Hudi, Linux Foundation Delta Lake, and Apache Iceberg.

You create Redshift external tables by defining the structure of your files, specifying the S3 location of the files, and registering them as tables in an external data catalog. The external data catalog can be the AWS Glue Data Catalog, the data catalog that comes with Amazon Athena, or your own Apache Hive metastore.
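
For illustration, the following is a minimal sketch of registering a Parquet data lake table through the AWS Glue Data Catalog; the external schema name, Glue database, IAM role ARN, columns, and S3 path are placeholders you would replace with your own:

-- Map a Glue Data Catalog database into Redshift as an external schema
-- (database name and role ARN are placeholders).
CREATE EXTERNAL SCHEMA datalake
FROM DATA CATALOG
DATABASE 'my_glue_database'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- Register a Parquet table stored in Amazon S3 (columns and location are illustrative).
CREATE EXTERNAL TABLE datalake.store_sales (
  ss_sold_date_sk    INT,
  ss_item_sk         INT,
  ss_ext_sales_price DECIMAL(7,2)
)
STORED AS PARQUET
LOCATION 's3://my-bucket/tpcds/store_sales/';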

Over the last year, Amazon Redshift added several performance optimizations for data lake queries across multiple areas of the query engine, such as rewrite, planning, scan execution, and consuming AWS Glue Data Catalog column statistics. To get the best performance on data lake queries with Redshift, you can use the AWS Glue Data Catalog's column statistics feature to collect statistics on your data lake tables. For Amazon Redshift Serverless instances, you will also see improved scan performance through increased parallel processing of S3 files, which happens automatically based on the RPUs used.

In this post, we highlight the performance improvements we observed using industry-standard TPC-DS benchmarks. The overall execution time of the TPC-DS 3 TB benchmark improved by 3x, and some of the queries in our benchmark experienced up to 12x speed-up.

Performance improvements

Several performance optimizations were made over the last year to improve the performance of data lake queries, including the following.

  • Consume AWS Glue Data Catalog column statistics and tune the Redshift optimizer to improve the quality of query plans
  • Use Bloom filters for partition columns
  • Improved scan efficiency for Amazon Redshift Serverless instances through increased parallel processing of files
  • New query rewrite rules to merge similar scans
  • Faster retrieval of metadata from the AWS Glue Data Catalog

To understand the performance gains, we tested performance on the industry-standard TPC-DS benchmark using 3 TB data sets and queries that represent different customer use cases. Performance was tested on a Redshift Serverless data warehouse with 128 RPU. In our testing, the dataset was stored in Amazon S3 in Parquet format, and the AWS Glue Data Catalog was used to manage external databases and tables. Fact tables were partitioned on the date column, and each fact table consisted of roughly 2,000 partitions. All of the tables had their row count table property, numRows, set per the Spectrum query performance guidelines.
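
For reference, the numRows property can be set on an external table with an ALTER TABLE statement. The following is an illustrative sketch; the schema, table name, and row count are placeholders:

-- Set the approximate row count so the planner has a table-level cardinality hint
-- (schema, table name, and count are placeholders).
ALTER TABLE datalake.store_sales
SET TABLE PROPERTIES ('numRows' = '8639936081');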

We did a baseline run on a Redshift patch version from last year (patch 172). We then ran all TPC-DS queries on the latest patch version (patch 180), which includes all of the performance optimizations added over the last year. Finally, we used the AWS Glue Data Catalog's column statistics feature to compute statistics for all of the tables and measured the improvements with AWS Glue Data Catalog column statistics present.

Our analysis revealed that the TPC-DS 3 TB Parquet benchmark saw substantial performance gains from these optimizations. Specifically, partitioned Parquet with our latest optimizations achieved 2x faster runtimes compared to the previous implementation. Enabling AWS Glue Data Catalog column statistics further improved performance to 3x versus last year. The following graph illustrates these runtime improvements for the full benchmark (all TPC-DS queries) over the past year, along with the additional boost from using AWS Glue Data Catalog column statistics.

Figure 1: Improvement in total runtime of TPC-DS 3T workload

The following graph presents the TPC-DS queries with the greatest performance improvement over the last year, with and without AWS Glue Data Catalog column statistics. You can see that performance improves significantly when statistics exist in the AWS Glue Data Catalog (for details on how to generate statistics for your data lake tables, refer to optimizing query performance using AWS Glue Data Catalog column statistics). In particular, multi-join queries benefit the most from AWS Glue Data Catalog column statistics because the optimizer uses statistics to choose the right join order and distribution strategy.

Figure 2: Speed-up in TPC-DS queries

Let's discuss some of the optimizations that contributed to the improved query performance.

Optimizing with table-level statistics

Amazon Redshift's design allows it to handle large-scale data challenges with superior speed and cost-efficiency. Its massively parallel processing (MPP) query engine, AI-powered query optimizer, auto-scaling capabilities, and other advanced features allow Redshift to excel at searching, aggregating, and transforming petabytes of data.

However, even the most powerful systems can experience performance degradation if they encounter anti-patterns like grossly inaccurate table statistics, such as the row count metadata.

Without this crucial metadata, Redshift's query optimizer may be limited in the number of possible optimizations, especially those related to data distribution during query execution. This can have a significant impact on overall query performance.

To illustrate this, consider the following simple query involving an inner join between a large table with billions of rows and a small table with only a few hundred thousand rows.

select small_table.sellerid, sum(large_table.qtysold)
from large_table, small_table
where large_table.salesid = small_table.listid
 and small_table.listtime > '2023-12-01'
 and large_table.saletime > '2023-12-01'
group by 1 order by 1

If executed as-is, with the large table on the right-hand side of the join, the query will exhibit sub-optimal performance. This is because the large table would need to be distributed (broadcast) to all Redshift compute nodes to perform the inner join with the small table, as shown in the following diagram.

Figure 3: Inaccurate table statistics lead to limited optimizations and large amounts of data broadcast among compute nodes for a simple inner join

Now, consider a scenario where the table statistics, such as the row count, are accurate. This allows the Amazon Redshift query optimizer to make more informed decisions, such as determining the optimal join order. In this case, the optimizer would immediately rewrite the query to have the large table on the left-hand side of the inner join, so that it is the small table that is broadcast across the Redshift compute nodes, as illustrated in the following diagram.

Figure 4: Accurate table statistics lead to a high degree of optimization and very little data broadcast among compute nodes for a simple inner join
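
If you want to check which distribution strategy the optimizer chose for a query on your own cluster, you can inspect the plan with EXPLAIN. The following sketch reuses the hypothetical large_table and small_table from the example above:

-- Inspect the plan to see how each join distributes data.
EXPLAIN
select small_table.sellerid, sum(large_table.qtysold)
from large_table, small_table
where large_table.salesid = small_table.listid
 and small_table.listtime > '2023-12-01'
 and large_table.saletime > '2023-12-01'
group by 1 order by 1;
-- In the output, a hash join annotated with DS_BCAST_INNER means the inner
-- (right-hand) table is broadcast to every compute node; ideally that should
-- be the small table, not the large one.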

Fortunately, Amazon Redshift automatically maintains accurate table statistics for native tables by running the ANALYZE command in the background. For external tables (data lake tables), however, AWS Glue Data Catalog column statistics are recommended for use with Amazon Redshift, as we discuss in the next section. For more general information on optimizing queries in Amazon Redshift, refer to the documentation on factors affecting query performance, data redistribution, and Amazon Redshift best practices for designing queries.
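
For completeness, the following sketch shows how you could manually refresh and inspect statistics on a hypothetical native table named sales; stats_off in SVV_TABLE_INFO reports how stale the statistics are (0 means current):

-- Refresh statistics for a hypothetical native table.
ANALYZE sales;

-- Check how stale statistics are (stats_off = 0 means up to date).
SELECT "table", stats_off
FROM svv_table_info
WHERE "table" = 'sales';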

Improvements with AWS Glue Data Catalog column statistics

The AWS Glue Data Catalog has a feature to compute column-level statistics for external tables backed by Amazon S3. It can compute column-level statistics such as number of distinct values (NDV), number of nulls, min/max, and average column width without the need for additional data pipelines. The Amazon Redshift cost-based optimizer uses these statistics to produce better quality query plans. In addition to consuming statistics, we also made several improvements in cardinality estimation and cost tuning to get high quality query plans, thereby improving query performance.

The TPC-DS 3 TB dataset showed a 40% improvement in total query execution time when these AWS Glue Data Catalog column statistics were provided. Individual TPC-DS queries showed up to 5x improvements in query execution time. Some of the queries with the greatest impact on execution time are Q85, Q64, Q75, Q78, Q94, Q16, Q04, Q24, and Q11.

We will go through an example where the cost-based optimizer generated a better query plan with statistics and how that improved the execution time.

Let's consider the following simpler version of TPC-DS Q64 to showcase the query plan differences with statistics.

select i_product_name product_name
,i_item_sk item_sk
,ad1.ca_street_number b_street_number
,ad1.ca_street_name b_street_name
,ad1.ca_city b_city
,ad1.ca_zip b_zip
,d1.d_year as syear
,count(*) cnt
,sum(ss_wholesale_cost) s1
,sum(ss_list_price) s2
,sum(ss_coupon_amt) s3
FROM tpcds_3t_alls3_pp_ext.store_sales
,tpcds_3t_alls3_pp_ext.store_returns
,tpcds_3t_alls3_pp_ext.date_dim d1
,tpcds_3t_alls3_pp_ext.customer
,tpcds_3t_alls3_pp_ext.customer_address ad1
,tpcds_3t_alls3_pp_ext.item
WHERE
ss_sold_date_sk = d1.d_date_sk AND
ss_customer_sk = c_customer_sk AND
ss_addr_sk = ad1.ca_address_sk and
ss_item_sk = i_item_sk and
ss_item_sk = sr_item_sk and
ss_ticket_number = sr_ticket_number and
i_color in ('firebrick','papaya','orange','cream','turquoise','deep') and
i_current_price between 42 and 42 + 10 and
i_current_price between 42 + 1 and 42 + 15
group by i_product_name
,i_item_sk
,ad1.ca_street_number
,ad1.ca_street_name
,ad1.ca_city
,ad1.ca_zip
,d1.d_year

Without statistics

The following figure represents the logical query plan of Q64. You can observe that the cardinality estimation of the joins is not accurate. With inaccurate cardinalities, the optimizer produces a sub-optimal query plan, leading to higher execution time.

With statistics

The following figure represents the logical query plan after consuming AWS Glue Data Catalog column statistics. Based on the highlighted changes, you can observe that the cardinality estimates for the joins improved by orders of magnitude, helping the optimizer choose a better join order and join strategy (broadcast DS_BCAST_INNER vs. distribute DS_DIST_BOTH). Switching the customer_address and customer tables from the inner to the outer side of the join and changing the join strategy to distribute has a major impact because it reduces data movement between the nodes and avoids spilling from the hash table.

Figure 5: Logical query plan of Q64 without statistics

Figure 6: Logical query plan of Q64 after consuming AWS Glue Data Catalog column statistics

This change in query plan improved the query execution time of Q64 from 383 seconds to 81 seconds.

Given the significant benefits that AWS Glue Data Catalog column statistics provide to the optimizer, you should consider collecting statistics for your data lake using AWS Glue. If your workload is JOIN-heavy, collecting statistics will provide even greater improvement. Refer to generating AWS Glue Data Catalog column statistics for instructions on how to collect statistics in the AWS Glue Data Catalog.

Query rewrite optimization

We introduced a new query rewrite rule that combines scalar aggregates over the same common expression using slightly different predicates. This rewrite resulted in performance improvements on TPC-DS queries Q09, Q28, and Q88. Let's focus on Q09 as a representative of these queries, given by the following fragment:

SELECT CASE
WHEN (SELECT COUNT(*)
FROM store_sales
WHERE ss_quantity BETWEEN 1 AND 20) > 48409437
THEN (SELECT AVG(ss_ext_discount_amt)
FROM store_sales
WHERE ss_quantity BETWEEN 1 AND 20)
ELSE (SELECT AVG(ss_net_profit)
FROM store_sales
WHERE ss_quantity BETWEEN 1 AND 20) END
AS bucket1,
<<4 more variations of the CASE expression above>>
FROM reason
WHERE r_reason_sk = 1

In total, there are 15 scans of the fact table store_sales, each one returning various aggregates over different subsets of data. The engine first performs subquery elimination and transforms the various expressions in the CASE statements into relational subtrees connected via cross products, and then these are fused into one subquery handling all scalar aggregates. The resulting plan for Q09, described below using SQL for readability, is given by:

SELECT CASE WHEN v1 > 48409437 THEN t1 ELSE e1 END,
<4 more variations>
FROM (SELECT COUNT(CASE WHEN b1 THEN 1 END) AS v1,
AVG(CASE WHEN b1 THEN ss_ext_discount_amt END) AS t1,
AVG(CASE WHEN b1 THEN ss_net_profit END) AS e1,
<4 more variations>
FROM reason,
(SELECT *,
ss_quantity BETWEEN 1 AND 20 AS b1,
<4 more variations>
FROM store_sales
WHERE ss_quantity BETWEEN 1 AND 20 OR
<4 more variations>))
WHERE r_reason_sk = 1)

In general, this rewrite rule yields the largest improvements both in latency (from 3x to 8x improvement) and in bytes read from Amazon S3 (from 6x to 8x reduction in scanned bytes and, consequently, cost).

Bloom filter for partition columns

Amazon Redshift already uses Bloom filters on data columns of external tables in Amazon S3 to enable early and effective data filtering. Last year, we extended this support to partition columns as well. A Bloom filter is a probabilistic, memory-efficient data structure that accelerates join queries at scale by filtering out rows that don't match the join relation, significantly reducing the amount of data transferred over the network. Amazon Redshift automatically determines which queries are suitable for leveraging Bloom filters at query runtime.

This optimization resulted in performance improvements on TPC-DS queries Q05, Q17, and Q54, with large gains in both latency (from 2x to 3x improvement) and bytes read from S3 (from 9x to 15x reduction in scanned bytes and, consequently, cost).

The following is the subquery of Q05 that showcased improvements with the runtime filter.

select s_store_id,
sum(sales_price) as sales,
sum(profit) as profit,
sum(return_amt) as returns,
sum(net_loss) as profit_loss
from
( select ss_store_sk as store_sk,
ss_sold_date_sk as date_sk,
ss_ext_sales_price as sales_price,
ss_net_profit as profit,
cast(0 as decimal(7,2)) as return_amt,
cast(0 as decimal(7,2)) as net_loss
from tpcds_3t_alls3_pp_ext.store_sales
union all
select sr_store_sk as store_sk,
sr_returned_date_sk as date_sk,
cast(0 as decimal(7,2)) as sales_price,
cast(0 as decimal(7,2)) as profit,
sr_return_amt as return_amt,
sr_net_loss as net_loss
from tpcds_3t_alls3_pp_ext.store_returns
) salesreturnss,
tpcds_3t_alls3_pp_ext.date_dim,
tpcds_3t_alls3_pp_ext.store
where date_sk = d_date_sk
and d_date between cast('1998-08-13' as date)
and (cast('1998-08-13' as date) + 14)
and store_sk = s_store_sk
group by s_store_id

Without Bloom filter support on partition columns

The following figure shows the logical query plan for the subquery of Q05. It appends the two large fact tables store_sales (8B rows) and store_returns (863M rows) and then joins them with the very selective dimension table date_dim, followed by the dimension table store. You can observe that the join with the date_dim table reduces the number of rows from 9B to 93M.

With Bloom filter support on partition columns

With Bloom filter support on partition columns, we now create a Bloom filter for the d_date_sk column of the date_dim table and push it down to the store_sales and store_returns tables. These Bloom filters help filter out partitions in both the store_sales and store_returns tables because the join happens on the partition column (the number of partitions processed is reduced by 10x).

Figure 7: Logical query plan for the subquery of Q05 without Bloom filter support on partition columns

Figure 8: Logical query plan for the subquery of Q05 with Bloom filter support on partition columns

Overall, the Bloom filter on the partition column reduces the number of partitions processed, resulting in fewer S3 list calls and fewer data files to read (a reduction in scanned bytes). You can see that we scan only 89M rows from store_sales and 4M rows from store_returns because of the Bloom filter. This reduced the number of rows to process at the JOIN level and improved overall query performance by 2x while reducing scanned bytes by 9x.
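
If you want to verify the reduction in scanned data for your own data lake queries, one option on provisioned Redshift clusters is the SVL_S3QUERY_SUMMARY system view (Amazon Redshift Serverless exposes similar information through SYS monitoring views). A minimal sketch:

-- Check how many rows and bytes a recently run data lake query scanned from S3.
SELECT query,
       SUM(s3_scanned_rows)  AS scanned_rows,
       SUM(s3_scanned_bytes) AS scanned_bytes
FROM svl_s3query_summary
WHERE query = pg_last_query_id()   -- or a specific query ID
GROUP BY query;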

Conclusion

In this post, we covered new performance optimizations in Amazon Redshift data lake query processing and how AWS Glue Data Catalog column statistics help enhance the quality of query plans for data lake queries in Amazon Redshift. Together, these optimizations improved the TPC-DS 3 TB benchmark by 3x, and some of the queries in our benchmark benefited from up to 12x speed-up.

In summary, Amazon Redshift now offers enhanced query performance with optimizations such as AWS Glue Data Catalog column statistics, Bloom filters on partition columns, new query rewrite rules, and faster retrieval of metadata. These optimizations are enabled by default, and Amazon Redshift users will benefit from better query response times for their workloads. For more information, please reach out to your AWS technical account manager or AWS account solutions architect. They will be happy to provide additional guidance and support.


About the authors

Kalaiselvi Kamaraj is a Sr. Software Development Engineer with Amazon. She has worked on several projects within the Redshift query processing team and is currently focusing on performance-related projects for Redshift Data Lake.

Mark Lyons is a Principal Product Manager on the Amazon Redshift team. He works at the intersection of data lakes and data warehouses. Prior to joining AWS, Mark held product management roles with Dremio and Vertica. He is passionate about data analytics and empowering customers to change the world with their data.

Asser Moustafa is a Principal Worldwide Specialist Solutions Architect at AWS, based in Dallas, Texas, USA. He partners with customers worldwide, advising them on all aspects of their data architectures, migrations, and strategic data visions to help organizations adopt cloud-based solutions, maximize the value of their data assets, modernize legacy infrastructures, and implement cutting-edge capabilities like machine learning and advanced analytics. Prior to joining AWS, Asser held various data and analytics leadership roles, completing an MBA from New York University and an MS in Computer Science from Columbia University in New York. He is passionate about empowering organizations to become truly data-driven and unlock the transformative potential of their data.
