docs: Breaking down guides to avoid assuming AWS (#2783)
* docs: Nested AWS into `Authenticating to the Cloud`
* Fix build issues.
* fix: Reworked components page into execution flow page
* docs: Migrating out AWS specific security controls for Pipelines to Account Factory
* docs: Updating `ci-workflows.md` with call outs for Account Factory stuff
* docs: Addressing PR feedback
* docs: Nested AWS into `Authenticating to the Cloud`
* Fix build issues.
* docs: Restructured initial setup to avoid assuming AWS
docs: Splitting up different cloud providers
wip: Progress on stacks
* fix: Fixing the checkbox ids
* docs: Adding AWS docs
* docs: WIP progress on adding Pipelines to an existing repo
* docs: More troubleshooting guidance
* fix: Cutting down on steps for adding a new repo
* fix: Redoing GitLab install instructions for parity with GitHub
* fix: Update extension for `managing-secrets` to `mdx`
* docs: Making it so that managing secrets doesn't assume AWS
* docs: Moving delegated repo setup to Account Factory
* docs: Fixing handling broken IaC
* fix: Resolving merge conflicts
* fix: Avoiding adding whitespace here
---------
Co-authored-by: Josh Padnick <[email protected]>
docs/2.0/docs/accountfactory/installation/addingnewrepo.md (1 addition, 0 deletions)
@@ -48,4 +48,5 @@ Each of your repositories will contain a Bootstrap Pull Request. Follow the inst
:::info
The bootstrapping pull requests include pre-configured files, such as a `.mise.toml` file that specifies versions of OpenTofu and Terragrunt. Ensure you review and update these configurations to align with your organization's requirements.
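As a rough illustration, a pinned `.mise.toml` might look like the following (the tool names match the mise registry; the version numbers here are placeholders, not recommendations):

```toml
# Illustrative only -- pin whichever versions your organization has standardized on
[tools]
opentofu   = "1.8.5"
terragrunt = "0.69.1"
```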
docs/2.0/docs/pipelines/guides/handling-broken-iac.md (12 additions, 12 deletions)
@@ -1,6 +1,6 @@
# Handling Broken Infrastructure as Code

When working with Infrastructure as Code (IaC) at scale, you may occasionally encounter broken or invalid configuration files that prevent Terragrunt from successfully running operations. These issues can block the entire CI/CD pipeline, preventing even valid infrastructure changes from being deployed.

This guide presents several strategies for handling broken IaC while keeping your pipelines operational.
@@ -16,13 +16,13 @@ Common causes of broken IaC include:
- Temporary or experimental code
- Resources or modules that are a work in progress

Depending on the type of run a pipeline is executing, broken IaC can fail the pipeline and prevent other, legitimate changes from being deployed. Especially in circumstances where pipelines trigger a `terragrunt run --all`, it is important that all IaC is valid or properly excluded.

## Resolution Strategies

Here are several approaches to manage broken IaC, presented in order of preference:

### Fix the Invalid Code (Preferred Solution)

The ideal solution is to fix the underlying issues:
@@ -41,7 +41,7 @@ git push
Then create a merge/pull request to apply the fix to your main branch.

### Remove the Invalid IaC

If you can't fix the issue immediately but the infrastructure is no longer needed, you can remove the problematic code:
If you wish to keep the broken code as is and simply have it ignored by pipelines and Terragrunt, you can use a `.terragrunt-excludes` file to skip problematic units:
Create a `.terragrunt-excludes` file in the root of your repository:

```text
# .terragrunt-excludes
# One directory per line (no globs)
account/region/broken-module1
account/region/broken-module2
```

Commit this file to your repository, and Terragrunt will automatically exclude these directories when using `run --all`. Note that if you change the code in one of those units and Pipelines triggers a `run` in that directory itself, the exclude will not be applied.
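As a quick local sanity check (a hypothetical helper, assuming a POSIX shell and that comment and blank lines are ignored as the example suggests), you can list the entries that will actually be excluded:

```shell
# Recreate the example excludes file, then inspect it
cat > .terragrunt-excludes <<'EOF'
# .terragrunt-excludes
# One directory per line (no globs)
account/region/broken-module1
account/region/broken-module2
EOF

# Lines that are neither comments nor blank are the excluded units
grep -Ev '^[[:space:]]*(#|$)' .terragrunt-excludes
# → account/region/broken-module1
# → account/region/broken-module2
```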
### Configure Exclusions with Pipelines Environment Variables
If you don't wish to use `.terragrunt-excludes` in the root of the repository, you can create another file in a different location and set the `TG_QUEUE_EXCLUDES_FILE` environment variable to that path. You then use the Pipelines [`env` block](/2.0/reference/pipelines/configurations-as-code/api#env-block) in your `.gruntwork/pipelines.hcl` configuration to set environment variables that control Terragrunt's behavior:
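One plausible shape for that configuration is sketched below. The exact schema of the `env` block is defined by the linked reference, so treat the block layout and the file path here as assumptions to verify:

```hcl
# .gruntwork/pipelines.hcl -- hypothetical sketch; verify against the env block reference
repository {
  env {
    TG_QUEUE_EXCLUDES_FILE = ".config/terragrunt-excludes"
  }
}
```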
@@ -94,14 +94,14 @@ repository {
When excluding modules, be aware of dependencies:

1. If module B depends on module A, and module A is excluded, you may need to exclude module B as well.
2. Use `terragrunt dag graph` to visualize your dependency tree.

## Best Practices

1. **Document exclusions**: Add comments to your `.terragrunt-excludes` file explaining why each directory is excluded.
2. **Track in issue system**: Create tickets for excluded modules that need to be fixed, including any relevant dates/timelines for when they should be revisited.
3. **Regular cleanup**: Periodically review and update your excluded directories.
4. **Validate locally**: Run `terragrunt hclvalidate` or `terragrunt validate` locally before committing changes.
## Troubleshooting
@@ -112,4 +112,4 @@ If you're still experiencing issues after excluding directories:
- Review pipeline logs to confirm exclusions are being applied
- Verify you don't have conflicting environment variable settings

By implementing these strategies, you can keep your infrastructure pipelines running smoothly while addressing underlying issues in your codebase.
@@ -20,12 +20,19 @@ To interact with the GitLab API, Pipelines requires a Machine User with a [Perso
</TabItem>
</Tabs>

## Authenticating with Cloud Providers

Pipelines requires authentication with your cloud provider but avoids long-lived credentials by utilizing OIDC (OpenID Connect). OIDC establishes an authenticated relationship between a specific Git reference in a repository and a corresponding cloud provider identity, enabling Pipelines to assume the identity based on where the pipeline is executed.

<Tabs groupId="cloud">
<TabItem value="aws" label="AWS" default>

{/* We use an h3 here instead of a markdown heading to avoid breaking the ToC */}
<h3>Authenticating with AWS</h3>

Pipelines uses [OIDC to authenticate with AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services), allowing it to assume an AWS IAM role without long-lived credentials.
For more details, see [GitHub's OIDC documentation for AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services).
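For reference, the trust policy on such an IAM role typically federates GitHub's OIDC issuer and restricts which repository may assume the role. A minimal sketch follows; the account ID and the `repo:...` subject are placeholders, not values from this guide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:my-org/infrastructure-live:*"
        }
      }
    }
  ]
}
```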
</TabItem>
<TabItem value="gitlab" label="GitLab">
@@ -57,25 +64,77 @@ sequenceDiagram
    AWS STS->>GitLab CI/CD: Temporary AWS Credentials
```

For more details, see [GitLab's OIDC documentation for AWS](https://docs.gitlab.com/ee/ci/cloud_services/aws/).

</TabItem>
</Tabs>

As a result, Pipelines avoids storing long-lived AWS credentials and instead relies on ephemeral credentials generated by AWS STS. These credentials grant least-privilege access to the resources needed for the specific operation being performed (e.g., read access during a pull/merge request open event or write access during a merge).

</TabItem>
<TabItem value="azure" label="Azure">

{/* We use an h3 here instead of a markdown heading to avoid breaking the ToC */}
<h3>Authenticating with Azure</h3>

Pipelines uses [OIDC to authenticate with Azure](https://learn.microsoft.com/en-us/entra/architecture/auth-oidc), allowing it to obtain access tokens from Entra ID without long-lived credentials.
For more details, see [GitHub's OIDC documentation for Azure](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure).

</TabItem>
<TabItem value="gitlab" label="GitLab">

```mermaid
sequenceDiagram
    participant GitLab CI/CD
    participant gitlab.com
    participant Entra ID
    GitLab CI/CD->>gitlab.com: OIDC ID Token Request with preconfigured audience
```

For more details, see [GitLab's documentation on Azure integration](https://docs.gitlab.com/ee/ci/cloud_services/).

</TabItem>
</Tabs>

As a result, Pipelines avoids storing long-lived Azure credentials and instead relies on ephemeral access tokens generated by Entra ID. These tokens grant least-privilege access to the resources needed for the specific operation being performed.

</TabItem>
</Tabs>
## Other providers
If you are managing configurations for additional services using Infrastructure as Code (IaC) tools like Terragrunt, you may need to configure a provider for those services in Pipelines. In such cases, you must supply the necessary credentials for authenticating with the provider. Whenever possible, follow the same principles: use ephemeral credentials, grant only the minimum permissions required, and avoid storing long-lived credentials on disk.
### Configuring providers in Terragrunt
For example, consider configuring the [Cloudflare Terraform provider](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). This provider supports multiple authentication methods to enable secure API calls to Cloudflare services. To authenticate with Cloudflare and manage the associated credentials securely, you need to configure your `terragrunt.hcl` file appropriately.
First, examine the default cloud provider authentication setup in the root `root.hcl` file from Gruntwork-provided Boilerplate templates:

<Tabs groupId="cloud">
<TabItem value="aws" label="AWS" default>
```hcl title="root.hcl"
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
@@ -93,9 +152,29 @@ EOF
}
```
This provider block (the value of `contents`) is dynamically generated as the file `provider.tf` during the execution of any `terragrunt` command and supplies the OpenTofu/Terraform AWS provider with the required configuration to discover credentials made available by the pipelines.

</TabItem>
<TabItem value="azure" label="Azure">

```hcl
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "azurerm" {
  features {}
}
EOF
}
```

This provider block (the value of `contents`) is dynamically generated as the file `provider.tf` during the execution of any `terragrunt` command and supplies the OpenTofu/Terraform Azure provider with the required configuration to discover credentials made available by the pipelines.

</TabItem>
</Tabs>

With this approach, no secrets are written to disk. Instead, the cloud provider dynamically retrieves secrets at runtime.
According to the Cloudflare documentation, the Cloudflare provider supports several authentication methods. One option involves using the [api_token](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs#api_key) field in the `provider` block, as illustrated in the documentation:
@@ -126,25 +205,42 @@ In this context, `fetch-cloudflare-api-token.sh` is a script designed to retriev
You are free to use any method to fetch the secret, provided it outputs the value to stdout.
Here are straightforward examples of how you might fetch the secret based on your cloud provider:

<Tabs groupId="cloud">
<TabItem value="aws" label="AWS" default>

Using AWS Secrets Manager:

```bash
aws secretsmanager get-secret-value --secret-id cloudflare-api-token --query SecretString --output text
```

Using AWS SSM Parameter Store:

```bash
aws ssm get-parameter --name cloudflare-api-token --query Parameter.Value --output text --with-decryption
```

Given that Pipelines is already authenticated with AWS for interacting with state, this setup provides a convenient method for retrieving secrets.

</TabItem>
<TabItem value="azure" label="Azure">

Using Azure Key Vault:

```bash
az keyvault secret show --vault-name <your-vault-name> --name cloudflare-api-token --query value --output tsv
```

Given that Pipelines is already authenticated with Azure for interacting with state, this setup provides a convenient method for retrieving secrets.

</TabItem>
</Tabs>
:::
Alternatively, note that the `api_token` field is optional. Similar to cloud provider authentication, you can use the `CLOUDFLARE_API_TOKEN` environment variable to supply the API token to the provider at runtime.
To achieve this, you can update the `provider` block as follows:
@@ -172,6 +268,7 @@ terraform {
}
}
```
### Managing secrets
When configuring providers and Pipelines, it's important to store secrets in a secure and accessible location. Several options are available for managing secrets, each with its advantages and trade-offs.
@@ -211,33 +308,62 @@ GitLab CI/CD Variables provide a native way to store secrets for your pipelines.
</TabItem>
</Tabs>

#### Cloud Provider Secret Stores

Cloud providers offer dedicated secret management services with advanced features and security controls.

<Tabs groupId="cloud">
<TabItem value="aws" label="AWS" default>

**AWS Secrets Manager**

AWS Secrets Manager offers a sophisticated solution for managing secrets. It allows for provisioning secrets in AWS and configuring fine-grained access controls through AWS IAM. It also supports advanced features like secret rotation and access auditing.

**Advantages**:
- Granular access permissions, ensuring secrets are only accessible when required
- Support for automated secret rotation and detailed access auditing

**Trade-offs**:
- Increased complexity in setup and management
- Potentially higher costs associated with its use

Refer to the [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) for further details.

**AWS SSM Parameter Store**

AWS SSM Parameter Store is a simpler and more cost-effective alternative to Secrets Manager. It supports secret storage and access control through AWS IAM, providing a basic solution for managing sensitive data.

**Advantages**:
- Lower cost compared to Secrets Manager
- Granular access control similar to Secrets Manager

**Trade-offs**:
- Limited functionality compared to Secrets Manager, such as less robust secret rotation capabilities

Refer to the [AWS SSM Parameter Store documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) for additional information.

</TabItem>
<TabItem value="azure" label="Azure">

**Azure Key Vault**

Azure Key Vault provides a comprehensive solution for managing secrets, keys, and certificates. It offers fine-grained access controls through Azure RBAC and supports advanced features like secret versioning and access auditing.

**Advantages**:
- Granular access permissions with Azure RBAC and access policies
- Support for secret versioning, soft-delete, and purge protection
- Integration with Azure Monitor for detailed audit logs
- Hardware Security Module (HSM) backed options for enhanced security

**Trade-offs**:
- Additional setup complexity for RBAC and access policies
- Costs associated with transactions and HSM-backed vaults

Refer to the [Azure Key Vault documentation](https://learn.microsoft.com/en-us/azure/key-vault/general/overview) for further details.

</TabItem>
</Tabs>

#### Deciding on a secret store
When selecting a secret store, consider the following key factors: