
How Open Universities Australia modernized their data platform and significantly reduced their ETL costs with AWS Cloud Development Kit and AWS Step Functions


This is a guest post co-authored by Michael Davies from Open Universities Australia.

At Open Universities Australia (OUA), we empower students to explore a vast array of degrees from renowned Australian universities, all delivered through online learning. We offer students alternative pathways to achieve their educational aspirations, providing them with the flexibility and accessibility to reach their academic goals. Since our founding in 1993, we have supported over 500,000 students to achieve their goals by providing pathways to over 2,600 subjects at 25 universities across Australia.

As a not-for-profit organization, cost is a crucial consideration for OUA. While reviewing our contract for the third-party tool we had been using for our extract, transform, and load (ETL) pipelines, we realized that we could replicate much of the same functionality using Amazon Web Services (AWS) services such as AWS Glue, Amazon AppFlow, and AWS Step Functions. We also recognized that we could consolidate our source code (much of which was stored in the ETL tool itself) into a code repository that could be deployed using the AWS Cloud Development Kit (AWS CDK). By doing so, we had an opportunity not only to reduce costs but also to enhance the visibility and maintainability of our data pipelines.

In this post, we show you how we used AWS services to replace our existing third-party ETL tool, improving the team's productivity and producing a significant reduction in our ETL operational costs.

Our approach

The migration initiative consisted of two main parts: building the new architecture and migrating data pipelines from the existing tool to the new architecture. Often, we would work on both in parallel, testing one component of the architecture while developing another at the same time.

From early in our migration journey, we began to define a few guiding principles that we would apply throughout the development process. These were:

  • Simple and modular – Use simple, reusable design patterns with as few moving parts as possible. Structure the code base to prioritize ease of use for developers.
  • Cost-effective – Use resources in an efficient, cost-effective way. Aim to minimize situations where resources are sitting idle while waiting for other processes to complete.
  • Business continuity – As much as possible, make use of existing code rather than reinventing the wheel. Roll out updates in stages to minimize potential disruption to existing business processes.

Architecture overview

The following diagram (Diagram 1) shows the high-level architecture for the solution.

Diagram 1: Overall architecture of the solution, using AWS Step Functions, Amazon Redshift, and Amazon S3

The following AWS services were used to shape our new ETL architecture:

  • Amazon Redshift – A fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift served as our central data repository, where we would store data, apply transformations, and make data available for use in analytics and business intelligence (BI). Note: The provisioned cluster itself was deployed separately from the ETL architecture and remained unchanged throughout the migration process.
  • AWS Cloud Development Kit (AWS CDK) – The AWS CDK is an open-source software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. Our infrastructure was defined as code using the AWS CDK. As a result, we simplified the way we defined the resources we wanted to deploy while using our preferred coding language for development.
  • AWS Step Functions – With AWS Step Functions, you can create workflows, also called state machines, to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning pipelines. AWS Step Functions can call over 200 AWS services including AWS Glue, AWS Lambda, and Amazon Redshift. We used AWS Step Functions state machines to define, orchestrate, and execute our data pipelines.
  • Amazon EventBridge – We used Amazon EventBridge, the serverless event bus service, to define the event-based rules and schedules that would trigger our AWS Step Functions state machines.
  • AWS Glue – A data integration service, AWS Glue consolidates major data integration capabilities into a single service. These include data discovery, modern ETL, cleansing, transforming, and centralized cataloging. It's also serverless, which means there's no infrastructure to manage, and it includes the ability to run Python scripts. We used it for executing long-running scripts, such as for ingesting data from an external API.
  • AWS Lambda – AWS Lambda is a highly scalable, serverless compute service. We used it for executing simple scripts, such as for parsing a single text file.
  • Amazon AppFlow – Amazon AppFlow enables simple integration with software as a service (SaaS) applications. We used it to define flows that would periodically load data from selected operational systems into our data warehouse.
  • Amazon Simple Storage Service (Amazon S3) – An object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 served as our staging area, where we would store raw data prior to loading it into other services such as Amazon Redshift. We also used it as a repository for storing code that could be retrieved and used by other services.

Where practical, we made use of the file structure of our code base for defining resources. We set up our AWS CDK app to refer to the contents of a specific directory and define a resource (for example, an AWS Step Functions state machine or an AWS Glue job) for each file it found in that directory. We also made use of configuration files so we could customize the attributes of specific resources as required.
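As a rough sketch of this convention, the following AWS CDK code (in Python; the directory layout, role ARN, and bucket name are hypothetical placeholders rather than our actual code) defines one AWS Glue job for each script found in a directory:

```python
from pathlib import Path

from aws_cdk import Stack
from aws_cdk import aws_glue as glue
from constructs import Construct


class EtlJobsStack(Stack):
    """Defines one AWS Glue job per Python script found in ./glue_jobs."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        jobs_dir = Path(__file__).parent / "glue_jobs"  # hypothetical layout
        for script in sorted(jobs_dir.glob("*.py")):
            # One Glue job per file, named after the script itself.
            glue.CfnJob(
                self,
                f"GlueJob-{script.stem}",
                name=script.stem,
                role="arn:aws:iam::123456789012:role/etl-glue-role",  # placeholder
                command=glue.CfnJob.CommandProperty(
                    name="pythonshell",
                    python_version="3.9",
                    # Scripts are synced to S3 separately; the bucket is illustrative.
                    script_location=f"s3://example-etl-code-bucket/glue/{script.name}",
                ),
                max_capacity=0.0625,  # smallest Python shell capacity
            )
```

A configuration file alongside the scripts could then override attributes such as capacity or schedule for individual jobs.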

Details on specific patterns

The architecture in Diagram 1 above includes several flows by which data can be ingested into or unloaded from our Amazon Redshift data warehouse. In this section, we highlight in more detail four specific patterns that were used in the final solution.

Pattern 1: Data transformation, load, and unload

Several of our data pipelines included significant data transformation steps, which were primarily performed through SQL statements executed by Amazon Redshift. Others required ingestion or unloading of data from the data warehouse, which could be performed efficiently using COPY or UNLOAD statements executed by Amazon Redshift.

In line with our goal of using resources efficiently, we sought to avoid running these statements from within the context of an AWS Glue job or AWS Lambda function, because those processes would sit idle while waiting for the SQL statement to complete. Instead, we opted for an approach where SQL execution tasks would be orchestrated by an AWS Step Functions state machine, which would send the statements to Amazon Redshift and periodically check their progress before marking them as either successful or failed. The following diagram (Diagram 2) shows this workflow.

Diagram 2: Data transformation, load, and unload pattern using AWS Lambda and Amazon Redshift within an AWS Step Functions state machine
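As a minimal sketch of this pattern, the two AWS Lambda handlers below (the cluster name, database, and event field names are illustrative placeholders) use the Amazon Redshift Data API to submit a statement asynchronously and to check its status on each poll; in the state machine, a Choice state loops back through a Wait state to the polling step until the statement reaches a terminal status:

```python
import boto3

# The Redshift Data API runs SQL asynchronously, so no Glue or Lambda
# compute sits idle waiting for a statement to finish.
client = boto3.client("redshift-data")


def submit_statement(event, context):
    """Submit a SQL statement and hand its ID back to the state machine."""
    response = client.execute_statement(
        ClusterIdentifier="example-cluster",  # placeholder cluster name
        Database="analytics",                 # placeholder database
        SecretArn=event["secret_arn"],        # credentials held in Secrets Manager
        Sql=event["sql"],                     # a transformation, COPY, or UNLOAD statement
    )
    return {"statement_id": response["Id"]}


def check_statement(event, context):
    """Poll a statement; the state machine retries until a terminal status."""
    response = client.describe_statement(Id=event["statement_id"])
    status = response["Status"]  # SUBMITTED | PICKED | STARTED | FINISHED | FAILED | ABORTED
    if status == "FAILED":
        raise RuntimeError(f"Statement failed: {response.get('Error')}")
    return {"statement_id": event["statement_id"], "status": status}
```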

Pattern 2: Data replication using AWS Glue

In cases where we needed to replicate data from a third-party source, we used AWS Glue to run a script that would query the relevant API, parse the response, and store the relevant data in Amazon S3. From there, we used Amazon Redshift to ingest the data using a COPY statement. The following diagram (Diagram 3) shows this workflow.

Diagram 3: Copying from an external API to Amazon Redshift with AWS Glue

Note: Another option for this step would be to use Amazon Redshift auto-copy, but this wasn't available at the time of development.
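The following is a simplified sketch of such a script as an AWS Glue Python shell job (the API endpoint, bucket, field names, and COPY statement are all hypothetical):

```python
import csv
import io
import json
from urllib.request import urlopen

import boto3

API_URL = "https://api.example.com/v1/enrolments"  # hypothetical endpoint
BUCKET = "example-etl-staging-bucket"              # hypothetical staging bucket
KEY = "raw/enrolments/latest.csv"

# Query the external API and parse the JSON response.
with urlopen(API_URL) as response:
    records = json.loads(response.read())

# Flatten the records into an in-memory CSV file.
buffer = io.StringIO()
writer = csv.DictWriter(
    buffer,
    fieldnames=["id", "student_id", "subject", "status"],
    extrasaction="ignore",  # keep only the columns we stage
)
writer.writeheader()
writer.writerows(records)

# Stage the file in Amazon S3. A subsequent pipeline step then runs, e.g.:
#   COPY staging.enrolments
#   FROM 's3://example-etl-staging-bucket/raw/enrolments/latest.csv'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' CSV IGNOREHEADER 1;
boto3.client("s3").put_object(Bucket=BUCKET, Key=KEY, Body=buffer.getvalue())
```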

Pattern 3: Data replication using Amazon AppFlow

For certain applications, we were able to use Amazon AppFlow flows in place of AWS Glue jobs. As a result, we could abstract away some of the complexity of querying external APIs directly. We configured our Amazon AppFlow flows to store the output data in Amazon S3, then used an EventBridge rule based on an End Flow Run Report event (an event that is published when a flow run is complete) to trigger a load into Amazon Redshift using a COPY statement. The following diagram (Diagram 4) shows this workflow.

By using Amazon S3 as an intermediate data store, we gained greater control over how the data was processed when it was loaded into Amazon Redshift, compared with loading the data directly into the data warehouse using Amazon AppFlow.

Diagram 4: Using Amazon AppFlow to integrate external data into Amazon S3 and copy it to Amazon Redshift
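As a sketch of the trigger side of this pattern, an EventBridge rule defined with the AWS CDK can match the AppFlow end-of-run event for a given flow and start the state machine that issues the COPY statement (the helper function and its arguments are illustrative, not our actual code):

```python
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets
from aws_cdk import aws_stepfunctions as sfn
from constructs import Construct


def add_appflow_trigger(scope: Construct, flow_name: str,
                        state_machine: sfn.IStateMachine) -> None:
    """Start the COPY state machine when the named AppFlow flow finishes a run."""
    rule = events.Rule(
        scope,
        f"{flow_name}FlowCompleted",
        # AppFlow publishes this detail type to the default event bus
        # when a flow run completes.
        event_pattern=events.EventPattern(
            source=["aws.appflow"],
            detail_type=["AppFlow End Flow Run Report"],
            detail={"flow-name": [flow_name]},
        ),
    )
    rule.add_target(targets.SfnStateMachine(state_machine))
```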

Pattern 4: Reverse ETL

Although most of our workflows involve data being brought into the data warehouse from external sources, in some cases we needed data to be exported to external systems instead. This way, we could run SQL queries with complex logic drawing on multiple data sources and use this logic to support operational requirements, such as identifying which groups of students should receive specific communications.

In this flow, shown in the following diagram (Diagram 5), we start by running an UNLOAD statement in Amazon Redshift to unload the relevant data to files in Amazon S3. From there, each file is processed by an AWS Lambda function, which performs any necessary transformations and sends the data to the external application through one or more API calls.

Diagram 5: Reverse ETL workflow, sending data back out to external data sources
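A minimal sketch of such a Lambda handler follows, assuming the UNLOAD statement was run with the CSV and HEADER options; the external endpoint and payload shape are hypothetical:

```python
import csv
import json
from urllib.request import Request, urlopen

import boto3

EXTERNAL_API = "https://api.example-crm.com/v1/contacts"  # hypothetical endpoint

s3 = boto3.client("s3")


def handler(event, context):
    """Process one UNLOADed file and push its rows to an external system."""
    # Read the CSV file that the UNLOAD statement wrote to Amazon S3
    # (assumes UNLOAD ... CSV HEADER so each row parses as a dict).
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    rows = list(csv.DictReader(obj["Body"].read().decode("utf-8").splitlines()))

    # Transform and send each row; production code would batch requests
    # and handle retries and rate limits.
    for row in rows:
        payload = json.dumps({"email": row["email"], "segment": row["segment"]})
        request = Request(
            EXTERNAL_API,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urlopen(request).read()

    return {"processed": len(rows)}
```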

Outcomes

The re-architecture and migration process took 5 months to complete, from the initial concept to the successful decommissioning of the previous third-party tool. Most of the architectural effort was completed by a single full-time employee, with others on the team primarily assisting with the migration of pipelines to the new architecture.

We achieved significant cost reductions, with final expenses on AWS native services representing only a small percentage of the projected costs of continuing with the third-party ETL tool. Moving to a code-based approach also gave us better visibility of our pipelines and made maintaining them quicker and easier. Overall, the transition was seamless for our end users, who were able to view the same data and dashboards both during and after the migration, with minimal disruption along the way.

Conclusion

By using the scalability and cost-effectiveness of AWS services, we were able to optimize our data pipelines, reduce our operational costs, and improve our agility.

Pete Allen, an analytics engineer from Open Universities Australia, says, "Modernizing our data architecture with AWS has been transformative. Transitioning from an external platform to an in-house, code-based analytics stack has vastly improved our scalability, flexibility, and performance. With AWS, we can now process and analyze data with much faster turnaround, lower costs, and higher availability, enabling rapid development and deployment of data solutions, leading to deeper insights and better business decisions."

About the Authors

Michael Davies is a Data Engineer at OUA. He has extensive experience across the education industry, with a particular focus on building robust and efficient data architecture and pipelines.

Emma Arrigo is a Solutions Architect at AWS, specializing in education customers across Australia. She focuses on leveraging cloud technology and machine learning to address complex business challenges in the education sector. Emma's passion for data extends beyond her professional life, as evidenced by her dog named Data.
