This commit is contained in:
labzy-handson 2025-09-19 09:07:57 +05:30
parent 2ca90ef516
commit 69ce277f98
13 changed files with 939 additions and 16 deletions

5
LICENSE.md Normal file

@ -0,0 +1,5 @@
# Labzy Labs — Source-Available Dual License
**Code:** Business Source License 1.1 (BUSL-1.1)
**Documentation & Media:** Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Copyright (c) 2025 Meyi Cloud Solutions Pvt. Ltd.


@ -1 +1,83 @@
# Overview
# Terraform Instruction Lab: From Zero to Team Collaboration (2 Hours)
This hands-on lab takes you from first contact with Terraform to a safe, team-ready workflow on AWS. You will provision real infrastructure, structure your code with variables and modules, and protect your state with an S3 backend and DynamoDB state locking. The material is paced for beginners and practical for engineers who want a concise, end-to-end setup they can reuse at work.
**Audience:** Beginners using Ubuntu 24 for the first time
**Goal:** Deploy AWS resources using Terraform, adopt variables/outputs and modules, and enable remote state + locking for collaboration.
## Outcomes
- Understand what Terraform is and how it manages infrastructure as code (IaC)
- Install and verify Terraform + AWS CLI on Ubuntu 24
- Authenticate to AWS and validate IAM access
- Write a minimal Terraform configuration and deploy an EC2 instance
- Introduce variables, tfvars, and outputs for reuse and clarity
- Configure remote state in S3 with DynamoDB locking to prevent race conditions
- Compose and reuse modules (including a simple nested VPC + EC2 example)
- Clean up all resources to avoid unnecessary AWS costs
## What We'll Build
- A small, cost-conscious AWS stack:
- Optional VPC with public subnet and Internet gateway
- A single EC2 instance (Free Tier eligible type where possible)
- Remote Terraform state stored in S3 with DynamoDB table for locking
- A reusable module layout you can extend for real projects
## Prerequisites
- AWS account (Free Tier is fine) and basic familiarity with regions
- IAM user or role with permissions for EC2, S3, and DynamoDB
- Ubuntu 24.04 machine (VM, physical, or WSL) with Internet access
- Willingness to use the terminal (copy/paste is fine!)
> Tip: New to AWS CLI? No problem—setup is guided and verified in this lab.
## Lab Roadmap
- `docs/01_install_setup.md`: Install Terraform + AWS CLI and verify environment
- `docs/02_first_ec2.md`: Author your first Terraform config and deploy EC2
- `docs/03_remote_state_s3_dynamodb.md`: Configure S3 backend and DynamoDB locking
- `docs/04_variables_tfvars_outputs.md`: Introduce variables, tfvars, and outputs
- `docs/05_modules_reuse.md`: Create and consume modules; structure for reuse
- `docs/06_nested_modules_vpc_ec2.md`: Model a simple VPC + EC2 with nested modules
- `docs/07_cleanup.md`: Destroy resources and verify nothing is left behind
- `docs/08_test.md`: Optional checks and validation ideas
## Estimated Time (2 Hours)
- Setup and verification: 15–20 min
- First EC2 with basics: 15–20 min
- Remote state + locking: 20–25 min
- Variables, tfvars, outputs: 15–20 min
- Modules + nested example: 25–30 min
- Cleanup and wrap-up: 10 min
## Key Concepts
- Declarative IaC: Describe desired state; Terraform plans and applies changes
- State: Terraform tracks real resources; protect it with remote storage + locks
- Idempotence: Reruns converge to the same outcome when code is unchanged
- Modules: Encapsulate patterns, promote reuse and reviewability
- Collaboration: S3 state + DynamoDB locks prevent conflicting applies
## Safety, Cost, and Region
- Choose a region close to you and consistent across the lab (e.g., `us-east-1`)
- Prefer Free Tier eligible instance types (e.g., `t2.micro` or `t3.micro`)
- Always run the cleanup step in `docs/07_cleanup.md` after experimenting
- Remote state resources (S3 bucket, DynamoDB table) have minimal ongoing cost
## Tools You'll Use
- Terraform CLI (1.6+ recommended)
- AWS CLI v2
- A text editor and terminal (bash/zsh)
## Deliverables
- A working Terraform project that can:
- Deploy a basic EC2 instance (optionally inside a simple VPC)
- Output connection details
- Store state in S3 with DynamoDB locking for team safety
- A module structure you can clone for future services
## Troubleshooting and Help
- Use `terraform init -upgrade` if providers appear outdated
- Validate AWS credentials with `aws sts get-caller-identity`
- Run `terraform plan` to preview changes before apply
- If a lock persists, check and clear it via the DynamoDB console (only if safe)
- See `docs/08_test.md` for additional verification ideas
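If a stale lock must be cleared from the CLI rather than the DynamoDB console, Terraform has a built-in command for it. The lock ID below is illustrative; the real one is printed in the error message when an apply is blocked. Only force-unlock when you are certain no other apply is running:

```shell
# The blocked run prints something like:
#   Lock Info:
#     ID: 6f2c4e1a-0000-0000-0000-000000000000
# Pass that ID to force-unlock from the same working directory.
terraform force-unlock 6f2c4e1a-0000-0000-0000-000000000000
```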
When you're ready, start with installation in `docs/01_install_setup.md`.

81
docs/01_install_setup.md Normal file

@ -0,0 +1,81 @@
# 1) Install Terraform & Configure AWS (15 min)
We'll install Terraform and set up your AWS credentials on **Ubuntu 24.04**.
### A. Install Terraform
```bash
sudo apt-get update
```
Updates the local package index so your system knows about the latest software versions.
```bash
sudo apt-get install -y gnupg software-properties-common wget
```
Installs essential tools:
- **gnupg** → handles security keys
- **software-properties-common** → helps manage repositories
- **wget** → used to download files
```bash
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp.gpg
```
Downloads and stores HashiCorp's official GPG key for verifying Terraform packages.
```bash
echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
```
Adds the HashiCorp package repository to your system so Terraform can be installed and updated.
```bash
sudo apt update && sudo apt install -y terraform && clear
```
Refreshes package list again and installs the latest Terraform release.
```bash
terraform -v
```
Verifies Terraform is installed by printing its version.
### B. Install AWS CLI v2
```bash
sudo apt-get install -y unzip
```
Installs **unzip**, required to extract the AWS CLI package.
```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
```
Downloads the AWS CLI v2 installer.
```bash
unzip awscliv2.zip
```
Extracts the AWS CLI installer files.
```bash
sudo ./aws/install
```
Runs the AWS CLI installer to make it available system-wide.
```bash
aws --version
```
Checks that AWS CLI is installed successfully.
### C. Configure AWS credentials
```bash
aws configure
```
Starts interactive setup for AWS CLI. Enter:
- **AWS Access Key ID**: (from your IAM user)
- **AWS Secret Access Key**: (from your IAM user)
- **Default region name**: `ap-south-1` (Mumbai, for this lab)
- **Default output format**: `json`
This creates the config files (`~/.aws/config` and `~/.aws/credentials`) so both the AWS CLI and Terraform can connect securely to AWS.
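Before moving on, it's worth confirming the credentials actually work:

```shell
# Prints the account and IAM identity the CLI is authenticated as.
# A successful call returns JSON with UserId, Account, and Arn fields;
# an error here means the keys or region were entered incorrectly.
aws sts get-caller-identity
```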


@ -1 +0,0 @@
## Initial task

94
docs/02_first_ec2.md Normal file
View File

@ -0,0 +1,94 @@
# 2) Your First Terraform Apply: EC2 (20 min)
In this section, you will create your **first Terraform project** and use it to launch an Amazon EC2 instance in your AWS account. This is the "Hello World" of Terraform on AWS.
## Step 1: Create a new project directory
```bash
mkdir -p ~/terraform-ec2-lab && cd ~/terraform-ec2-lab
```
- `mkdir -p ~/terraform-ec2-lab` → Creates a folder called `terraform-ec2-lab` in your home directory. The `-p` option creates any missing parent directories and doesn't error if the folder already exists.
- `cd ~/terraform-ec2-lab` → Moves into this new folder so you can keep your Terraform files organized.
This directory will hold all the configuration files for this project.
## Step 2: Create the main configuration file
Create a new file named **main.tf** and paste the following code:
```hcl
provider "aws" {
region = "ap-south-1"
}
resource "aws_instance" "lab_instance" {
ami = "ami-0e6329e222e662a52" # Amazon Linux 2 (Mumbai)
instance_type = "t2.micro"
tags = {
Name = "Terraform-Lab-Instance"
}
}
```
### Explanation of the code
- **provider "aws"**
Tells Terraform to use the AWS provider. The region is set to `ap-south-1` (Mumbai). This determines where your resources will be created.
- **resource "aws_instance" "lab_instance"**
Declares that we want to create an EC2 instance resource in AWS.
- `ami`: The Amazon Machine Image (AMI) ID that defines the OS. Here we use Amazon Linux 2 in the Mumbai region.
- `instance_type`: Specifies the hardware size of the instance. `t2.micro` is eligible for AWS Free Tier.
- `tags`: Adds a tag so the instance will appear in AWS Console with the name **Terraform-Lab-Instance**.
## Step 3: Initialize Terraform
```bash
terraform init
```
- Downloads the AWS provider plugin.
- Prepares your working directory for use with Terraform.
## Step 4: Preview the changes
```bash
terraform plan
```
- Shows the actions Terraform **will take** without actually applying them.
- Useful for double-checking that the configuration does what you expect.
## Step 5: Apply the configuration
```bash
terraform apply -auto-approve
```
- Creates the EC2 instance in AWS according to your configuration.
- The `-auto-approve` flag skips the interactive “yes/no” confirmation step. (Normally Terraform asks before making changes.)
## Step 6: Verify in AWS Console
1. Log in to the AWS Management Console.
2. Navigate to **EC2 → Instances**.
3. You should see a running instance named **Terraform-Lab-Instance**.
4. Confirm its details:
- Instance type: `t2.micro`
- AMI: Amazon Linux 2
- Region: ap-south-1 (Mumbai)
## Wrap-Up
Congratulations! 🎉 You just:
- Wrote your first Terraform configuration.
- Initialized Terraform.
- Planned and applied infrastructure changes.
- Verified the deployed EC2 instance in AWS.
This is the foundation of Infrastructure as Code: **describe what you want → apply → verify**.


@ -0,0 +1,110 @@
# 3) Remote State with S3 + DynamoDB Lock (25 min)
By default, Terraform stores its state in a local file called **terraform.tfstate**.
This works fine for single-user setups, but in a team environment it can cause conflicts (two people updating at once).
To solve this, we move the state file to a **remote backend** (Amazon S3) and use **DynamoDB** for state locking.
This ensures only one person can apply changes at a time.
## Step A: Create an S3 bucket for remote state
```bash
aws s3api create-bucket --bucket my-terraform-state-lab --region ap-south-1
```
- Creates a new S3 bucket called `my-terraform-state-lab`.
- The bucket name must be **globally unique** across AWS. Change it to something like `terraform-state-yourname123`.
```bash
aws s3api put-bucket-versioning --bucket my-terraform-state-lab --versioning-configuration Status=Enabled
```
- Enables **versioning** on the bucket.
- This allows you to roll back to older state files if something goes wrong.
## Step B: Create a DynamoDB table for state locking
```bash
aws dynamodb create-table --table-name terraform-locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST
```
- Creates a DynamoDB table called `terraform-locks`.
- The `LockID` column acts as a **lock key**.
- When Terraform runs, it inserts a lock entry in this table. This prevents two users from applying changes at the same time.
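You can watch the locking happen from the CLI. While `terraform apply` is running in another terminal, a scan of the table should show one lock item, which disappears once the run finishes (sketch; the table must already exist):

```shell
# List current lock entries; during an apply, an item whose LockID
# matches your state path appears here, then is removed afterwards.
aws dynamodb scan --table-name terraform-locks --region ap-south-1
```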
## Step C: Configure backend in Terraform
Create a new project folder and move into it:
```bash
mkdir -p ~/terraform-remote-lab && cd ~/terraform-remote-lab
```
Now create a file called **main.tf**:
```hcl
terraform {
backend "s3" {
bucket = "my-terraform-state-lab" # replace with your bucket name
key = "dev/terraform.tfstate" # file path inside the bucket
region = "ap-south-1"
dynamodb_table = "terraform-locks"
encrypt = true
}
}
provider "aws" {
region = "ap-south-1"
}
resource "aws_s3_bucket" "demo" {
bucket = "my-demo-bucket-student12345" # must be unique
acl = "private"
}
```
### Explanation of the configuration
- **backend "s3"** → Tells Terraform to store state in S3.
- `bucket`: Your S3 bucket name.
- `key`: Path/name of the state file inside the bucket.
- `region`: Region of your bucket.
- `dynamodb_table`: Table used for locks.
- `encrypt`: Ensures the state file is encrypted at rest.
- **provider "aws"** → Tells Terraform to use AWS as the provider.
- **resource "aws_s3_bucket" "demo"** → A sample resource (an S3 bucket) to test remote state functionality.
## Step D: Initialize the backend
```bash
terraform init
```
- Initializes the working directory.
- Terraform will detect the `backend "s3"` block and migrate your local state to S3.
- You may be asked: *Do you want to copy existing state to the new backend?* → type `yes`.
## Step E: Apply and verify
```bash
terraform apply -auto-approve
```
- Creates the demo bucket defined in `main.tf`.
- Stores the state file in **S3** instead of locally.
- Uses **DynamoDB** to prevent parallel execution.
Now check in AWS Console:
- Go to **S3 → your bucket → dev/terraform.tfstate** → you should see the state file.
- Go to **DynamoDB → terraform-locks** → while Terraform runs, a lock record will appear.
## Wrap-Up
You have now:
- Configured Terraform to use a **remote state** in S3.
- Added **versioning** for rollback.
- Used **DynamoDB locking** for safe team collaboration.
This setup is considered a best practice for production environments.


@ -0,0 +1,142 @@
# 4) Variables, tfvars & Outputs (20 min)
In this section, you'll **parameterize** your Terraform configuration so it's flexible across environments (like *dev* and *prod*) and learn how to **print useful values** after deployment.
## Why variables and tfvars?
- **Variables** let you avoid hardcoding values (e.g., instance type, names).
- **.tfvars files** store a specific set of variable values for each environment. You can switch environments quickly without editing code.
- **Outputs** print important information (IDs, IPs, AZs) for later use in scripts, CD pipelines, or as inputs to other modules.
## Step 1: Create a working directory
```bash
mkdir -p ~/terraform-vars-lab && cd ~/terraform-vars-lab
```
Creates a clean folder for this exercise and moves into it.
## Step 2: Define variables
Create a file **variables.tf**:
```hcl
variable "instance_type" {}
variable "instance_name" {}
```
- Declares two variables, `instance_type` and `instance_name`.
- No defaults are provided, so Terraform will expect values (either via `-var` flags, `.tfvars` files, or prompts).
> Tip: You can add descriptions and types to make your code self-documenting:
> ```hcl
> variable "instance_type" { type = string, description = "EC2 size, e.g. t2.micro" }
> variable "instance_name" { type = string, description = "Tag: Name for the instance" }
> ```
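If you prefer not to create a tfvars file yet, the same variables can also be supplied ad hoc with `-var` flags (the values here are illustrative):

```shell
# -var flags are one of the ways Terraform accepts variable values;
# they take precedence over values from terraform.tfvars.
terraform plan \
  -var='instance_type=t2.micro' \
  -var='instance_name=CLI-Instance'
```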
## Step 3: Write the main configuration
Create **main.tf**:
```hcl
provider "aws" { region = "ap-south-1" }
resource "aws_instance" "ec2_instance" {
ami = "ami-0e6329e222e662a52"
instance_type = var.instance_type
tags = { Name = var.instance_name }
}
```
- **provider "aws"**: points Terraform to AWS in the Mumbai region.
- **resource "aws_instance" "ec2_instance"**: creates an EC2 instance.
- `ami`: the base OS image (Amazon Linux 2 for ap-south-1).
- `instance_type`: references your variable using `var.instance_type`.
- `tags`: uses `var.instance_name` so the tag can change per environment.
## Step 4: Expose useful outputs
Create **outputs.tf**:
```hcl
output "instance_id" { value = aws_instance.ec2_instance.id }
output "public_ip" { value = aws_instance.ec2_instance.public_ip }
output "availability_zone" { value = aws_instance.ec2_instance.availability_zone }
```
- After apply, Terraform will print these values.
- Great for quickly finding the instance or for handing values to other tools.
> Tip: Add `sensitive = true` to hide secrets in logs:
> ```hcl
> output "db_password" { value = var.db_password, sensitive = true }
> ```
## Step 5: Create environment files (tfvars)
Create **dev.tfvars**:
```hcl
instance_type = "t2.micro"
instance_name = "Dev-Instance"
```
Create **prod.tfvars**:
```hcl
instance_type = "t3.micro"
instance_name = "Prod-Instance"
```
- These files define different values for the *same* variables.
- You can add more environments (e.g., `test.tfvars`, `staging.tfvars`) as needed.
> Tip: Keep environment files in version control, but never commit secrets. For secrets, use a secure tool (e.g., AWS SSM Parameter Store, Vault) or CI/CD-controlled variables.
## Step 6: Initialize, apply, output, and switch environments
```bash
terraform init
```
- Downloads provider plugins and prepares the working directory.
```bash
terraform apply -var-file="dev.tfvars" -auto-approve
```
- Applies with **dev** values.
- Creates a `t2.micro` instance named **Dev-Instance**.
```bash
terraform output
```
- Prints the values from **outputs.tf** to the terminal (instance id, public IP, AZ).
```bash
terraform destroy -auto-approve
```
- Destroys the **dev** instance to avoid charges before testing **prod**.
```bash
terraform apply -var-file="prod.tfvars" -auto-approve
```
- Applies with **prod** values.
- Creates a `t3.micro` instance named **Prod-Instance**.
```bash
terraform output -json > output.json
cat output.json
```
- Exports outputs in **JSON** format, perfect for automation pipelines.
- `output.json` can be parsed by scripts or downstream systems.
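A minimal sketch of parsing that file in a downstream script. It assumes the JSON shape `terraform output -json` emits (each output name maps to an object with `value`, `type`, and `sensitive`); the sample values are made up:

```python
import json

# Sample of the shape `terraform output -json` writes to output.json;
# the IDs and IPs below are placeholders, not real resources.
sample = """
{
  "instance_id": {"sensitive": false, "type": "string", "value": "i-0abc123"},
  "public_ip":   {"sensitive": false, "type": "string", "value": "3.110.42.7"}
}
"""

outputs = json.loads(sample)
# Flatten to a plain name -> value dict for easy use in scripts.
values = {name: data["value"] for name, data in outputs.items()}
print(values["public_ip"])  # prints 3.110.42.7
```

In a real pipeline you would read `output.json` from disk instead of the inline sample.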
## Common gotchas & best practices
- **Keep AMIs per region**: AMI IDs are region-specific. If you change regions, update the AMI.
- **Validate before apply**: Run `terraform validate` and `terraform plan` to catch mistakes early.
- **Naming**: Tags are your friend. Use `Name` tags consistently so you can find resources fast.
- **Separate state**: For real projects, use separate state/workspaces or remote backends (S3 + DynamoDB) per environment.
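One common pattern for the "separate state" point above is partial backend configuration: leave the state key out of code and supply it per environment at init time. A sketch, assuming the S3 backend from section 3 (bucket and table names are placeholders):

```hcl
# backend.tf — settings shared by every environment; the state key is
# supplied per environment when initializing:
#   terraform init -backend-config="key=dev/terraform.tfstate"
#   terraform init -backend-config="key=prod/terraform.tfstate" -reconfigure
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-lab" # placeholder bucket name
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```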
## Wrap-Up
You now know how to:
- Declare variables and pass values with `.tfvars`
- Reuse the same code for **multiple environments**
- Surface key information using **outputs**
- Export outputs to **JSON** for automation
This pattern is the backbone of scalable Terraform projects.

135
docs/05_modules_reuse.md Normal file

@ -0,0 +1,135 @@
# 5) Build & Reuse a Simple Module (25 min)
In this section, you'll turn repeated EC2 logic into a **reusable Terraform module**.
Modules make your code **DRY** (Don't Repeat Yourself), easier to test, and simpler to use across teams and environments.
## What is a module?
A **module** is just a folder that contains Terraform configuration files (`.tf`).
Your **root module** (the folder where you run the CLI) can **call** other modules from:
- A local path (`./modules/ec2-instance`)
- A Git repo (`git::https://...`), or
- A registry (`registry.terraform.io`)
Here well use a **local** module.
## Step 1: Create the project & module folder
```bash
mkdir -p ~/terraform-modules-lab/modules/ec2-instance
cd ~/terraform-modules-lab
```
- Creates a workspace with a nested folder `modules/ec2-instance` that will contain reusable EC2 code.
Your tree will look like:
```
terraform-modules-lab/
main.tf # root (calls the module)
modules/
ec2-instance/
variables.tf
main.tf # module implementation
outputs.tf
```
## Step 2: Define module inputs (variables)
Create **modules/ec2-instance/variables.tf**:
```hcl
variable "instance_type" { description = "Type of EC2 instance"; default = "t2.micro" }
variable "instance_name" { description = "Tag name for instance" }
variable "instance_count" { description = "Number of EC2"; default = 1 }
```
- These are **inputs** the module expects.
- `default` makes inputs optional (callers can override).
- You can add stronger typing & validation for safety:
```hcl
variable "instance_type" {
type = string
description = "EC2 size"
default = "t2.micro"
validation {
    condition     = can(regex("^t[23]\\.", var.instance_type))
error_message = "Use a t2.* or t3.* instance for this lab."
}
}
```
## Step 3: Implement the module logic
Create **modules/ec2-instance/main.tf**:
```hcl
resource "aws_instance" "this" {
count = var.instance_count
ami = "ami-0e6329e222e662a52"
instance_type = var.instance_type
tags = { Name = "${var.instance_name}-${count.index}" }
}
```
- Uses `count` to create **N instances** with a single block.
- `tags.Name` includes the `count.index` suffix so each instance has a unique name (e.g., `App-Server-0`, `App-Server-1`).
> **Note on AMIs:** AMI IDs are **region-specific**. We use an Amazon Linux 2 AMI for **ap-south-1 (Mumbai)**. If you switch regions, update this AMI or fetch it dynamically (e.g., with a data source).
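A sketch of the dynamic lookup mentioned in the note. The name filter below is the conventional pattern for Amazon Linux 2; verify it matches current images in your account before relying on it:

```hcl
# Look up the latest Amazon Linux 2 AMI in the current region instead of
# hard-coding an ID.
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, in the instance resource:
#   ami = data.aws_ami.amazon_linux_2.id
```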
## Step 4: Expose useful outputs
Create **modules/ec2-instance/outputs.tf**:
```hcl
output "instance_ids" { value = [for i in aws_instance.this : i.id] }
output "public_ips" { value = [for i in aws_instance.this : i.public_ip] }
```
- Makes it easy for callers to **consume** important info (IDs, IPs).
- The `for` expression collects values from the resource instances created via `count`.
## Step 5: Call the module from the root
Create the **root** `main.tf` at `~/terraform-modules-lab/main.tf`:
```hcl
provider "aws" { region = "ap-south-1" }
module "ec2_instance" {
source = "./modules/ec2-instance"
instance_type = "t2.micro"
instance_name = "App-Server"
instance_count = 2
}
```
- `source` points to the **local path** of your module folder.
- Inputs (`instance_type`, `instance_name`, `instance_count`) are set by the **caller** (the root module).
- You can define **multiple** module blocks to create different groups of instances, or use `.tfvars` files for environments.
## Step 6: Initialize and apply
```bash
terraform init
terraform apply -auto-approve
```
- `terraform init` downloads providers and **fetches module sources** (local/Git/registry).
- `terraform apply` creates **2 EC2 instances** named `App-Server-0` and `App-Server-1` in **ap-south-1**.
After the apply, view outputs:
```bash
terraform output
```
You should see lists for `instance_ids` and `public_ips`.
## How to reuse this module elsewhere
- Copy the `modules/ec2-instance` folder into any Terraform project and call it with `source = "./modules/ec2-instance"`.
- Or publish the module to a **Git repo** and reference with `source = "git::https://github.com/yourorg/yourrepo//modules/ec2-instance?ref=v1.0.0"`.
- Standardize inputs/outputs so teams can use it without reading internals.
## Best practices
- **Version control your modules** (Git tags) to avoid breaking changes.
- Add **README.md** inside the module with usage examples and input/output docs.
- Prefer **types & validation** for variables.
- Keep AMIs **parametrized** or discover them via a `data "aws_ami"` query to avoid hard-coding.
- If multiple teams use the same module, consider a **private Terraform registry** or a Git monorepo with clear versioning.
## Cleanup
When done (to avoid charges):
```bash
terraform destroy -auto-approve
```
## Summary
You built a **reusable EC2 module**, exposed clean **inputs/outputs**, and consumed it from the **root**.
This pattern scales to VPCs, RDS, ALBs, and more—compose modules like building blocks to create reliable infrastructure at speed.


@ -0,0 +1,44 @@
# 6) Nested Modules: VPC + EC2 (10–15 min)
Create a minimal VPC module and keep EC2 separate for clarity.
```bash
mkdir -p ~/terraform-modules-lab/modules/vpc
cd ~/terraform-modules-lab
```
**modules/vpc/main.tf**
```hcl
resource "aws_vpc" "this" { cidr_block = "10.0.0.0/16" }
resource "aws_subnet" "public" {
vpc_id = aws_vpc.this.id
cidr_block = "10.0.1.0/24"
availability_zone = "ap-south-1a"
}
```
**modules/vpc/outputs.tf**
```hcl
output "vpc_id" { value = aws_vpc.this.id }
output "subnet_id" { value = aws_subnet.public.id }
```
Update **root main.tf** to include both modules:
```hcl
provider "aws" { region = "ap-south-1" }
module "vpc" { source = "./modules/vpc" }
module "ec2_instance" {
source = "./modules/ec2-instance"
instance_type = "t2.micro"
instance_name = "Nested-App"
instance_count = 1
}
```
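Note that as written the `ec2-instance` module takes no subnet input, so the instance still launches into the default VPC. One way to actually place it in the new subnet is a hypothetical `subnet_id` variable, sketched here as an extension rather than part of the module above:

```hcl
# modules/ec2-instance/variables.tf (addition)
variable "subnet_id" {
  description = "Subnet to launch into; null falls back to the default VPC"
  type        = string
  default     = null
}

# modules/ec2-instance/main.tf: add `subnet_id = var.subnet_id` to the
# aws_instance resource, then wire it from the root module:
#   module "ec2_instance" {
#     source    = "./modules/ec2-instance"
#     subnet_id = module.vpc.subnet_id
#     ...
#   }
```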
Apply:
```bash
terraform apply -auto-approve
```

21
docs/07_cleanup.md Normal file

@ -0,0 +1,21 @@
# 7) Cleanup (5 min)
Always delete resources to avoid charges.
From each working directory you used:
```bash
terraform destroy -auto-approve || true
```
If you created the S3 bucket and DynamoDB table for backend testing, remove them **after** you've destroyed all Terraform-managed resources and the state is no longer needed.
Delete the S3 bucket (it must be empty first; note that because versioning is enabled, `aws s3 rm --recursive` removes current objects but leaves older versions and delete markers, which must also be deleted before the bucket can be removed):
```bash
aws s3 rm s3://my-terraform-state-lab --recursive
aws s3api delete-bucket --bucket my-terraform-state-lab --region ap-south-1
```
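If the `delete-bucket` call fails with `BucketNotEmpty`, leftover object versions are the usual cause. A sketch for purging them, to be tested carefully: the JMESPath query merges versions and delete markers, and `delete-objects` will error if the resulting list is empty:

```shell
BUCKET=my-terraform-state-lab
# Collect every object version and delete marker into the JSON shape
# that delete-objects expects, then remove them all in one call.
aws s3api list-object-versions --bucket "$BUCKET" \
  --query '{Objects: [Versions[].{Key: Key, VersionId: VersionId}, DeleteMarkers[].{Key: Key, VersionId: VersionId}][]}' \
  --output json > /tmp/versions.json
aws s3api delete-objects --bucket "$BUCKET" --delete file:///tmp/versions.json
```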
Delete DynamoDB table:
```bash
aws dynamodb delete-table --table-name terraform-locks
```

171
docs/08_test.md Normal file

@ -0,0 +1,171 @@
Markdown Quick Reference
========================
This guide is a very brief overview, with examples, of the syntax that [Markdown] supports. It is itself written in Markdown and you can copy the samples over to the left-hand pane for experimentation. It's shown as *text* and not *rendered HTML*.
[Markdown]: http://daringfireball.net/projects/markdown/
Simple Text Formatting
======================
First thing is first. You can use *stars* or _underscores_ for italics. **Double stars** and __double underscores__ for bold. ***Three together*** for ___both___.
Paragraphs are pretty easy too. Just have a blank line between chunks of text.
> This chunk of text is in a block quote. Its multiple lines will all be
> indented a bit from the rest of the text.
>
> > Multiple levels of block quotes also work.
Sometimes you want to include code, such as when you are explaining how `<h1>` HTML tags work, or maybe you are a programmer and you are discussing `someMethod()`.
If you want to include code and have new
lines preserved, indent the line with a tab
or at least four spaces:
Extra spaces work here too.
This is also called preformatted text and it is useful for showing examples.
The text will stay as text, so any *markdown* or <u>HTML</u> you add will
not show up formatted. This way you can show markdown examples in a
markdown document.
> You can also use preformatted text with your blockquotes
> as long as you add at least five spaces.
Headings
========
There are a couple of ways to make headings. Using three or more equals signs on a line under a heading makes it into an "h1" style. Three or more hyphens under a line makes it "h2" (slightly smaller). You can also use multiple pound symbols (`#`) before and after a heading. Pounds after the title are ignored. Here are some examples:
This is H1
==========
This is H2
----------
# This is H1
## This is H2
### This is H3 with some extra pounds ###
#### You get the idea ####
##### I don't need extra pounds at the end
###### H6 is the max
Links
=====
Let's link to a few sites. First, let's use the bare URL, like <https://www.github.com>. Great for text, but ugly for HTML.
Next is an inline link to [Google](https://www.google.com). A little nicer.
This is a reference-style link to [Wikipedia] [1].
Lastly, here's a pretty link to [Yahoo]. The reference-style and pretty links both automatically use the links defined below, but they could be defined *anywhere* in the markdown and are removed from the HTML. The names are also case insensitive, so you can use [YaHoO] and have it link properly.
[1]: https://www.wikipedia.org
[Yahoo]: https://www.yahoo.com
Title attributes may be added to links by adding text after a link.
This is the [inline link](https://www.bing.com "Bing") with a "Bing" title.
You can also go to [W3C] [2] and maybe visit a [friend].
[2]: https://w3c.org (The W3C puts out specs for web-based things)
[Friend]: https://facebook.com "Facebook!"
Email addresses in plain text are not linked: test@example.com.
Email addresses wrapped in angle brackets are linked: <test@example.com>.
They are also obfuscated so that email harvesting spam robots hopefully won't get them.
Lists
=====
* This is a bulleted list
* Great for shopping lists
- You can also use hyphens
+ Or plus symbols
The above is an "unordered" list. Now, on for a bit of order.
1. Numbered lists are also easy
2. Just start with a number
3738762. However, the actual number doesn't matter when converted to HTML.
1. This will still show up as 4.
You might want a few advanced lists:
- This top-level list is wrapped in paragraph tags
- This generates an extra space between each top-level item.
- You do it by adding a blank line
- This nested list also has blank lines between the list items.
- How to create nested lists
1. Start your regular list
2. Indent nested lists with two spaces
3. Further nesting means you should indent with two more spaces
* This line is indented with four spaces.
- List items can be quite lengthy. You can keep typing and either continue
them on the next line with no indentation.
- Alternately, if that looks ugly, you can also
indent the next line a bit for a prettier look.
- You can put large blocks of text in your list by just indenting with two spaces.
This is formatted the same as code, but you can inspect the HTML
and find that it's just wrapped in a `<p>` tag and *won't* be shown
as preformatted text.
You can keep adding more and more paragraphs to a single
list item by adding the traditional blank line and then keep
on indenting the paragraphs with two spaces.
You really only need to indent the first line,
but that looks ugly.
- Lists support blockquotes
> Just like this example here. By the way, you can
> nest lists inside blockquotes!
> - Fantastic!
- Lists support preformatted text
You just need to indent an additional four spaces.
Even More
=========
Horizontal Rule
---------------
If you need a horizontal rule you just need to put at least three hyphens, asterisks, or underscores on a line by themselves. You can also even put spaces between the characters.
---
****************************
_ _ _ _ _ _ _
Those three all produced horizontal lines. Keep in mind that three hyphens under any text turns that text into a heading, so add a blank line if you use hyphens.
Images
------
Images work exactly like links, but they have exclamation points in front. They work with references and titles too.
![Google Logo](https://www.google.com/images/errors/logo_sm.gif) and ![Happy].
[Happy]: https://wpclipart.com/smiley/happy/simple_colors/smiley_face_simple_green_small.png ("Smiley face")
Inline HTML
-----------
If markdown is too limiting, you can just insert your own <strike>crazy</strike> HTML. Span-level HTML <u>can *still* use markdown</u>. Block level elements must be separated from text by a blank line and must not have any spaces before the opening and closing HTML.
<div style='font-family: "Comic Sans MS", "Comic Sans", cursive;'>
It is a pity, but markdown does **not** work in here for most markdown parsers.
[Marked] handles it pretty well.
</div>


@ -0,0 +1 @@
# Variables, tfvars, Outputs, Count & State


@ -1,16 +1,54 @@
title: labzy
version: 1.0.0
description: Descrition for tasks .
overview_path: docs/00_overview.md
title: "Terraform on Ubuntu 24 Instruction Lab (2 Hours)"
version: "1.0.0"
description: |
Beginner-friendly, step-by-step Terraform hands-on lab for Ubuntu 24.04.
You will install Terraform, configure AWS CLI, create your first EC2 instance,
learn variables/outputs/count/state basics, set up a remote backend with S3 + DynamoDB
for locking, and practice reusable modules.
type: "instruction"
target_os: "Ubuntu 24.04"
duration_minutes: 120
overview_path: "docs/00_overview.md"
sections:
title: Instructions
title: "Instructions"
items:
- id: 1
title: Task 1
path: docs/01_task.md
estimated_minutes: 7
dependencies: []
ui:
default_open_section_id: 1
show_toc: true
collapsible_sections: true
- id: 1
title: "Install Terraform & Configure AWS"
path: "docs/01_install_setup.md"
estimated_minutes: 15
dependencies: []
- id: 2
title: "Your First Terraform Apply: EC2"
path: "docs/02_first_ec2.md"
estimated_minutes: 20
dependencies: [1]
- id: 3
title: "Remote State with S3 + DynamoDB Lock"
path: "docs/03_remote_state_s3_dynamodb.md"
estimated_minutes: 25
dependencies: [2]
- id: 4
title: "Variables, tfvars & Outputs"
path: "docs/04_variables_tfvars_outputs.md"
estimated_minutes: 20
dependencies: [2]
- id: 5
title: "Build & Reuse a Simple Module"
path: "docs/05_modules_reuse.md"
estimated_minutes: 25
dependencies: [4]
- id: 6
title: "Nested Modules: VPC + EC2"
path: "docs/06_nested_modules_vpc_ec2.md"
estimated_minutes: 15
dependencies: [5]
- id: 7
title: "Cleanup"
path: "docs/07_cleanup.md"
estimated_minutes: 10
dependencies: [2,3,4,5,6]
- id: 8
title: "Test data set"
path: "docs/08_test.md"
estimated_minutes: 10
dependencies: []