# 3) Remote State with S3 + DynamoDB Lock (25 min)
By default, Terraform stores its state in a local file called **terraform.tfstate**.
This works fine for a single user, but in a team it invites conflicts: two people running `apply` at the same time can corrupt the state.
To solve this, we move the state file to a **remote backend** (Amazon S3) and use **DynamoDB** for state locking.
This ensures only one person can apply changes at a time.
## Step A: Create an S3 bucket for remote state
```bash
aws s3api create-bucket --bucket my-terraform-state-lab --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
```
- Creates a new S3 bucket called `my-terraform-state-lab`.
- The bucket name must be **globally unique** across AWS. Change it to something like `terraform-state-yourname123`.
- Outside `us-east-1`, `create-bucket` requires a `LocationConstraint` matching the region, hence the extra flag above. A quick sanity check follows.
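`head-bucket` exits silently with status 0 when the bucket exists and you can reach it (assuming the bucket name above):
```bash
# No output and exit code 0 = bucket exists and you have access
aws s3api head-bucket --bucket my-terraform-state-lab
```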
```bash
aws s3api put-bucket-versioning --bucket my-terraform-state-lab --versioning-configuration Status=Enabled
```
- Enables **versioning** on the bucket.
- This allows you to roll back to an older state file if something goes wrong. You can confirm the setting with the check below.
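To confirm versioning took effect (same assumed bucket name), query the bucket's versioning configuration; it should report `Enabled`:
```bash
# Expected output includes: "Status": "Enabled"
aws s3api get-bucket-versioning --bucket my-terraform-state-lab
```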
## Step B: Create a DynamoDB table for state locking
```bash
aws dynamodb create-table --table-name terraform-locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST
```
- Creates a DynamoDB table called `terraform-locks`.
- The `LockID` **attribute** (DynamoDB's term for a column) is the table's hash key and serves as the lock key.
- When Terraform runs, it writes a lock item to this table. This prevents two users from applying changes at the same time. Confirm the table is ready with the check below.
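Table creation is asynchronous, so make sure the table is `ACTIVE` before running Terraform. A minimal check, assuming the table name above:
```bash
# Block until the table is ready, then print its status (expect: ACTIVE)
aws dynamodb wait table-exists --table-name terraform-locks
aws dynamodb describe-table --table-name terraform-locks --query 'Table.TableStatus' --output text
```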
## Step C: Configure backend in Terraform
Create a new project folder and move into it:
```bash
mkdir -p ~/terraform-remote-lab && cd ~/terraform-remote-lab
```
Now create a file called **main.tf**:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-lab"  # replace with your bucket name
    key            = "dev/terraform.tfstate"   # path of the state file inside the bucket
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "demo" {
  # Must be globally unique
  bucket = "my-demo-bucket-student12345"
  # The inline `acl` argument is deprecated in AWS provider v4+;
  # new buckets are private by default, so it is omitted here.
}
```
### Explanation of the configuration
- **backend "s3"** → Tells Terraform to store state in S3.
- `bucket`: Your S3 bucket name.
- `key`: Path/name of the state file inside the bucket.
- `region`: Region of your bucket.
- `dynamodb_table`: Table used for locks.
- `encrypt`: Ensures the state file is encrypted at rest.
- **provider "aws"** → Tells Terraform to use AWS as the provider.
- **resource "aws_s3_bucket" "demo"** → A sample resource (an S3 bucket) to test remote state functionality.
## Step D: Initialize the backend
```bash
terraform init
```
- Initializes the working directory and configures the `backend "s3"` block.
- If a local `terraform.tfstate` already exists, Terraform offers to migrate it to S3 and asks: *Do you want to copy existing state to the new backend?* → type `yes`. (In this fresh folder there is no prior state, so init simply wires up the backend.)
- A quick way to confirm the backend is in use is shown below.
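After `init`, Terraform caches the resolved backend settings locally; one way to inspect them (this check assumes `jq` is installed):
```bash
# The cache written by init records which backend is in use; expect "s3"
jq '.backend.type' .terraform/terraform.tfstate
```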
## Step E: Apply and verify
```bash
terraform apply -auto-approve
```
- Creates the demo bucket defined in `main.tf`.
- Stores the state file in **S3** instead of locally.
- Acquires a lock in **DynamoDB** for the duration of the run, preventing parallel execution.
Now check in the AWS Console:
- Go to **S3 → your bucket → dev/terraform.tfstate** → you should see the state file.
- Go to **DynamoDB → terraform-locks** → while Terraform runs, a lock record will appear.
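The same verification works from the CLI (assuming the bucket and table names used above). Note that the lock is released once the run finishes, though the S3 backend may leave a small checksum (`-md5`) item in the table:
```bash
# The state object should now exist in the bucket
aws s3 ls s3://my-terraform-state-lab/dev/
# Inspect the lock table; a LockID item appears while Terraform is running
aws dynamodb scan --table-name terraform-locks --output table
```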
## Wrap-Up
You have now:
- Configured Terraform to use a **remote state** in S3.
- Added **versioning** for rollback.
- Used **DynamoDB locking** for safe team collaboration.
This setup is considered a best practice for production environments.