# Distributed Data Stream Aggregator

This workflow demonstrates how to aggregate data from multiple third-party locations using a distributed processing pattern with AWS Step Functions. The workflow orchestrates data extraction, transformation, and consolidation at scale using Step Functions, DynamoDB, S3, and AWS Glue.

Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.

## Requirements

* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources.
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured
* [Git Installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal, and clone the GitHub repository:
    ```
    git clone https://github.com/aws-samples/step-functions-workflows-collection
    ```
2. Change directory to the pattern directory:
    ```
    cd step-functions-workflows-collection/distributed-data-stream-aggregator
    ```
3. Create the required DynamoDB tables:
    ```bash
    # Create locations table; location_id as a range key lets a single
    # task reference multiple locations
    aws dynamodb create-table \
        --table-name locations \
        --attribute-definitions \
            AttributeName=task_id,AttributeType=S \
            AttributeName=location_id,AttributeType=S \
        --key-schema \
            AttributeName=task_id,KeyType=HASH \
            AttributeName=location_id,KeyType=RANGE \
        --billing-mode PAY_PER_REQUEST

    # Create task table
    aws dynamodb create-table \
        --table-name processing-tasks \
        --attribute-definitions \
            AttributeName=task_id,AttributeType=S \
            AttributeName=created_at,AttributeType=S \
        --key-schema \
            AttributeName=task_id,KeyType=HASH \
            AttributeName=created_at,KeyType=RANGE \
        --billing-mode PAY_PER_REQUEST
    ```
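
    Table creation is asynchronous. Before continuing, you can wait for both tables to become `ACTIVE`:
    ```bash
    aws dynamodb wait table-exists --table-name locations
    aws dynamodb wait table-exists --table-name processing-tasks
    ```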

4. Create S3 buckets for data processing:
    ```bash
    # Create source bucket for temporary files
    aws s3 mb s3://your-data-processing-bucket

    # Create destination bucket for final output
    aws s3 mb s3://your-output-bucket
    ```

5. Create the AWS Glue job for data consolidation:
    ```bash
    # Create the Glue job (replace the script location with your own)
    aws glue create-job \
        --name data-aggregation-job \
        --role arn:aws:iam::YOUR_ACCOUNT:role/GlueServiceRole \
        --command Name=glueetl,ScriptLocation=s3://your-bucket/glue-script.py
    ```
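
    The Glue script itself is referenced but not reproduced here. As a rough sketch, a job that collapses the per-task JSON parts into one CSV file might look like the following (the argument names, S3 layout, and CSV output format are assumptions, not the repository's actual script):
    ```bash
    cat > glue-script.py <<'EOF'
    import sys
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    # Hypothetical job arguments; pass them via --arguments when starting a run
    args = getResolvedOptions(sys.argv, ["task_id", "source_bucket", "destination_bucket"])

    spark = GlueContext(SparkContext()).spark_session

    # Read every small JSON part written under the task's temporary prefix
    df = spark.read.json(f"s3://{args['source_bucket']}/{args['task_id']}/")

    # Coalesce to a single output file and write it to the destination bucket
    df.coalesce(1).write.mode("overwrite").option("header", "true") \
        .csv(f"s3://{args['destination_bucket']}/{args['task_id']}/")
    EOF

    aws s3 cp glue-script.py s3://your-bucket/glue-script.py
    ```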

6. Create an HTTP connection for third-party API access:
    ```bash
    # Create an EventBridge connection used by the HTTP invoke tasks
    aws events create-connection \
        --name api-connection \
        --authorization-type API_KEY \
        --auth-parameters "ApiKeyAuthParameters={ApiKeyName=Authorization,ApiKeyValue=Bearer YOUR_TOKEN}"
    ```

7. Prepare the state machine definitions by updating the placeholder values in each ASL file (one way to script the substitutions with `sed` is shown after the list):
    - Replace `'s3-bucket-name'` with your source bucket name
    - Replace `'destination_bucket'` with your destination bucket name
    - Replace `'api_endpoint'` and `'summary_api_endpoint'` with your API URLs
    - Replace `'ConnectionArn'` with your EventBridge connection ARN
    - Replace `'glue_job'` with your Glue job name
    - Replace `'task_table'` with your task table name
    - Replace `'child1'` and `'child2'` with the ARNs of the two child state machines (available once they are created in the next step)
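
    For example, with `sed` (verify the exact placeholder spelling in each file first; the values below are illustrative):
    ```bash
    # GNU sed shown; on macOS/BSD use: sed -i '' ...
    sed -i \
        -e "s|s3-bucket-name|your-data-processing-bucket|g" \
        -e "s|destination_bucket|your-output-bucket|g" \
        -e "s|glue_job|data-aggregation-job|g" \
        -e "s|task_table|processing-tasks|g" \
        statemachine/*.asl.json
    ```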

8. Create the state machines. Create the two child workflows first, since the main workflow references their ARNs:
    ```bash
    # Create Data Extraction Child state machine
    aws stepfunctions create-state-machine \
        --name DataExtractionChild \
        --definition file://statemachine/data-extraction-child.asl.json \
        --role-arn arn:aws:iam::YOUR_ACCOUNT:role/StepFunctionsExecutionRole

    # Create Data Processing Child state machine (Express)
    aws stepfunctions create-state-machine \
        --name DataProcessingChildExpress \
        --definition file://statemachine/data-processing-child.asl.json \
        --role-arn arn:aws:iam::YOUR_ACCOUNT:role/StepFunctionsExecutionRole \
        --type EXPRESS

    # Create main state machine
    aws stepfunctions create-state-machine \
        --name DistributedDataStreamAggregator \
        --definition file://statemachine/statemachine.asl.json \
        --role-arn arn:aws:iam::YOUR_ACCOUNT:role/StepFunctionsExecutionRole
    ```

## How it works

This distributed data stream aggregator implements a three-tier processing architecture:

### Main Workflow (Parent State Machine)
The main workflow accepts a unique task ID and orchestrates the entire data aggregation process. It queries DynamoDB to retrieve the client locations associated with that task ID, then uses a distributed map to process the locations in parallel. Finally, it triggers an AWS Glue job to combine all partial data files and updates the task status in DynamoDB.
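
The parent's first step is roughly equivalent to this query against the locations table created earlier (the task ID is illustrative):

```bash
aws dynamodb query \
    --table-name locations \
    --key-condition-expression "task_id = :t" \
    --expression-attribute-values '{":t": {"S": "example-task-123"}}'
```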

### Child Workflow 1 (Standard Execution)
This workflow handles data extraction from the third-party locations. It pings each location's HTTP endpoint to verify data availability, iterates over the different data types (failed, rejected) with an inline map, and calls the express child workflow with pagination parameters. Extracted data is stored as JSON files in S3, organized by task ID.
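
After an extraction run you might therefore expect partial files laid out under the task's prefix, along these lines (the key structure shown is illustrative; the actual layout is defined by the state machine):

```bash
aws s3 ls --recursive s3://your-data-processing-bucket/example-task-123/
# e.g. example-task-123/location-001/failed/page-1.json
#      example-task-123/location-001/rejected/page-1.json
```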

### Child Workflow 2 (Express Execution)
The express workflow makes the actual API calls to the third-party endpoints. It receives location details, a data type, and pagination parameters; makes the HTTP call with query parameters; formats the retrieved data into a standardized JSON structure; and returns the results along with count and pagination metadata.
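
Outside of Step Functions, each HTTP task is roughly equivalent to a call like this (the endpoint path and query parameter names are assumptions):

```bash
curl -H "Authorization: Bearer YOUR_TOKEN" \
    "https://api.example.com/records?location=location-001&type=failed&page=1&page_size=100"
```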

### Data Consolidation
An AWS Glue job combines the many small JSON files in the temporary S3 prefix into a single consolidated file, which is uploaded to the destination S3 bucket. The workflow monitors the job status and updates the DynamoDB task table on completion.
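
The state machine typically runs the job through the managed Glue integration, but the equivalent manual steps look like this (job and argument names follow the sketch in the deployment section):

```bash
# Start a run and capture its ID
RUN_ID=$(aws glue start-job-run \
    --job-name data-aggregation-job \
    --arguments '{"--task_id":"example-task-123","--source_bucket":"your-data-processing-bucket","--destination_bucket":"your-output-bucket"}' \
    --query JobRunId --output text)

# Poll the run state until it reaches SUCCEEDED or FAILED
aws glue get-job-run --job-name data-aggregation-job --run-id "$RUN_ID" \
    --query JobRun.JobRunState --output text
```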

## Image

![image](./resources/statemachine.png)

## Testing

1. Populate the locations table with test data:
    ```bash
    aws dynamodb put-item \
        --table-name locations \
        --item '{"task_id": {"S": "example-task-123"}, "location_id": {"S": "location-001"}, "api_url": {"S": "https://api.example.com"}}'
    ```

2. Execute the state machine with the example input (a minimal input file is sketched below):
    ```bash
    aws stepfunctions start-execution \
        --state-machine-arn arn:aws:states:REGION:ACCOUNT:stateMachine:DistributedDataStreamAggregator \
        --name test-execution-$(date +%s) \
        --input file://example-workflow.json
    ```
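
    If the repository's `example-workflow.json` is not available, a minimal input might look like this (assuming the workflow only needs the task ID; the field name is illustrative):
    ```bash
    cat > example-workflow.json <<'EOF'
    {
      "task_id": "example-task-123"
    }
    EOF
    ```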

3. Monitor the execution in the AWS Step Functions console or via the CLI:
    ```bash
    aws stepfunctions describe-execution \
        --execution-arn EXECUTION_ARN
    ```

4. Verify the results by checking the destination S3 bucket for the aggregated CSV file and the task table for the updated status, for example:
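
    ```bash
    # List the consolidated output
    aws s3 ls s3://your-output-bucket/ --recursive

    # Check the task's status record (key names per the tables created earlier)
    aws dynamodb query \
        --table-name processing-tasks \
        --key-condition-expression "task_id = :t" \
        --expression-attribute-values '{":t": {"S": "example-task-123"}}'
    ```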

## Cleanup

1. Delete the state machines:
    ```bash
    aws stepfunctions delete-state-machine --state-machine-arn arn:aws:states:REGION:ACCOUNT:stateMachine:DistributedDataStreamAggregator
    aws stepfunctions delete-state-machine --state-machine-arn arn:aws:states:REGION:ACCOUNT:stateMachine:DataExtractionChild
    aws stepfunctions delete-state-machine --state-machine-arn arn:aws:states:REGION:ACCOUNT:stateMachine:DataProcessingChildExpress
    ```

2. Delete the DynamoDB tables:
    ```bash
    aws dynamodb delete-table --table-name locations
    aws dynamodb delete-table --table-name processing-tasks
    ```

3. Delete the S3 buckets (`--force` removes any remaining objects before deleting the bucket):
    ```bash
    aws s3 rb s3://your-data-processing-bucket --force
    aws s3 rb s3://your-output-bucket --force
    ```

4. Delete the Glue job:
    ```bash
    aws glue delete-job --job-name data-aggregation-job
    ```

5. Delete the EventBridge connection:
    ```bash
    aws events delete-connection --name api-connection
    ```

----
Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: MIT-0