<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IaC Archives - InfoTech Ninja</title>
	<atom:link href="https://infotechninja.com/tag/iac/feed/" rel="self" type="application/rss+xml" />
	<link>https://infotechninja.com/tag/iac/</link>
	<description></description>
	<lastBuildDate>Tue, 21 Apr 2026 00:00:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Terraform on AWS: Managing Infrastructure as Code Without the Headaches</title>
		<link>https://infotechninja.com/terraform-aws-iac/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=terraform-aws-iac</link>
					<comments>https://infotechninja.com/terraform-aws-iac/#respond</comments>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & DevOps]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[CloudFormation]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=2</guid>

					<description><![CDATA[<p>Infrastructure as Code with Terraform has become the standard approach for managing cloud resources at any meaningful scale. Unlike CloudFormation, Terraform's provider model works across AWS, Azure, GCP, and dozens of other platforms.</p>
<p>The post <a href="https://infotechninja.com/terraform-aws-iac/">Terraform on AWS: Managing Infrastructure as Code Without the Headaches</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Infrastructure as Code with Terraform has become the standard approach for managing cloud resources at any meaningful scale. Unlike CloudFormation, Terraform&#8217;s provider model works across AWS, Azure, GCP, and dozens of other platforms. Once you understand the mental model, you&#8217;ll never want to click through the AWS console to provision infrastructure again.</p>
<h2>The Terraform Mental Model</h2>
<p>Terraform operates on a plan-then-apply lifecycle. You write declarative HCL (HashiCorp Configuration Language) code describing the desired state of your infrastructure. When you run <code>terraform plan</code>, Terraform compares your code against the current state (tracked in a state file) and produces a diff showing exactly what will be created, modified, or destroyed. Only when you run <code>terraform apply</code> does Terraform actually make API calls to provision or change resources.</p>
<p>This separation of plan and apply is what makes Terraform safe for production use. You can review every change before it happens, integrate plan output into pull request reviews, and require approvals for changes to production environments. That repeatability (the same code converges infrastructure to the same declared state) is what separates IaC from the chaos of manually configured environments where nobody is quite sure what&#8217;s actually deployed or why.</p>
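<p>The lifecycle above can be sketched end to end with a minimal configuration. This is an illustrative example rather than anything from a real project; the bucket name is a placeholder and would need to be globally unique:</p>
<pre><code># main.tf - a minimal sketch of declarative HCL
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Desired state: one tagged S3 bucket. Terraform derives the API calls.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-company-artifacts" # placeholder; S3 names are globally unique
  tags   = { ManagedBy = "terraform" }
}

# The workflow:
#   terraform init              # download providers, configure backend
#   terraform plan -out=tf.plan # preview and save the diff
#   terraform apply tf.plan     # execute exactly the reviewed plan</code></pre>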
<h2>Structuring Your First AWS VPC</h2>
<p>A well-structured VPC is the foundation of secure AWS architecture. The standard pattern is a VPC with public subnets (for load balancers and NAT gateways), private subnets (for application tier), and isolated subnets (for databases and other sensitive resources). Each subnet tier spans multiple availability zones for redundancy. Internet access for private subnets routes through NAT Gateways in the public subnets, so outbound traffic is possible without exposing private resources directly.</p>
<p>Terraform makes this repeatable. Define your CIDR blocks as variables, use <code>count</code> or <code>for_each</code> to create subnets across AZs dynamically, and use <code>aws_route_table</code> resources to wire up routing. A well-organized Terraform module for VPC creation can be instantiated repeatedly for dev, staging, and production environments with just a few variable changes.</p>
<pre><code># vpc/main.tf — Core VPC and subnets
variable "vpc_cidr"         { default = "10.0.0.0/16" }
variable "public_subnets"   { default = ["10.0.1.0/24", "10.0.2.0/24"] }
variable "private_subnets"  { default = ["10.0.11.0/24", "10.0.12.0/24"] }
variable "azs"              { default = ["us-east-1a", "us-east-1b"] }
variable "env"              { default = "prod" }

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = { Name = "${var.env}-vpc" }
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
  tags = { Name = "${var.env}-public-${count.index + 1}" }
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.azs[count.index]
  tags = { Name = "${var.env}-private-${count.index + 1}" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "${var.env}-igw" }
}</code></pre>
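<p>The routing pieces described earlier (NAT Gateways and route tables) stop short of appearing in the snippet; a hedged continuation of the same module might look like the following, with a single NAT Gateway for brevity (production setups often run one per AZ):</p>
<pre><code># vpc/routing.tf - NAT and route tables (illustrative continuation)
resource "aws_eip" "nat" {
  domain = "vpc"
  tags   = { Name = "${var.env}-nat-eip" }
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id # one NAT here; one per AZ for HA
  tags          = { Name = "${var.env}-nat" }
  depends_on    = [aws_internet_gateway.main]
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
  tags = { Name = "${var.env}-public-rt" }
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
  tags = { Name = "${var.env}-private-rt" }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}</code></pre>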
<h2>State Files: The Most Important File You Never Edit</h2>
<p>Terraform&#8217;s state file (<code>terraform.tfstate</code>) is the source of truth that maps your HCL code to real cloud resources. Never edit it manually, never delete it, and never keep it on a local filesystem for anything beyond personal experiments. For team environments and production use, store state in a remote backend: S3 with DynamoDB locking for AWS, Azure Blob Storage for Azure, or Terraform Cloud. Remote backends enable collaboration, state locking (preventing concurrent runs from corrupting state), and encryption at rest.</p>
<p>State locking is particularly important in CI/CD pipelines where multiple engineers or automated runs might trigger Terraform simultaneously. Without locking, two concurrent <code>terraform apply</code> runs can corrupt the state file, leaving your infrastructure in an unknown state. DynamoDB-backed state locking on AWS is straightforward to configure and adds minimal overhead. Make configuring remote state the very first thing you do in any new Terraform project — retrofitting it later is painful.</p>
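<p>Remote state of the kind described here is declared in a <code>backend</code> block. The bucket and table names below are placeholders, and both resources must exist before <code>terraform init</code> is run:</p>
<pre><code># backend.tf - S3 remote state with DynamoDB locking (placeholder names)
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # pre-created, versioned bucket
    key            = "prod/vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"          # hash key: LockID (string)
    encrypt        = true
  }
}</code></pre>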
<h2>Modules for Reusable Infrastructure</h2>
<p>Terraform modules are the building blocks of reusable infrastructure. A module is simply a directory of <code>.tf</code> files with defined inputs (variables) and outputs. The VPC example above is a perfect module candidate — you define it once and call it with different parameters for each environment. The Terraform Registry hosts thousands of community and official modules for common patterns like VPCs, EKS clusters, RDS instances, and more, saving substantial boilerplate.</p>
<p>Module versioning is critical for production stability. Pin your module sources to specific version tags rather than using &#8220;latest&#8221; or a branch. When you&#8217;re ready to upgrade a module, test it in a non-production environment first. Structure your Terraform codebase into separate state roots (workspaces or separate directories) for each environment, with a shared module library. This prevents a staging change from accidentally affecting production state and gives you clear promotion paths through environments.</p>
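<p>Calling a shared module per environment, and pinning a registry module to an exact release, might look like the following sketch (the sources, inputs, and version number are illustrative):</p>
<pre><code># envs/prod/main.tf - instantiating a shared VPC module (illustrative)
module "vpc" {
  source          = "../../modules/vpc"
  env             = "prod"
  vpc_cidr        = "10.0.0.0/16"
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
  azs             = ["us-east-1a", "us-east-1b"]
}

# Registry modules are pinned to an exact release, never a branch:
module "vpc_registry" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1" # illustrative version; test upgrades in non-prod first
  # ... module inputs ...
}</code></pre>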
<p>The post <a href="https://infotechninja.com/terraform-aws-iac/">Terraform on AWS: Managing Infrastructure as Code Without the Headaches</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://infotechninja.com/terraform-aws-iac/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20</post-id>	</item>
		<item>
		<title>Getting Started with Ansible: Write Your First Playbook in 30 Minutes</title>
		<link>https://infotechninja.com/ansible-playbooks-beginner/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ansible-playbooks-beginner</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=10</guid>

					<description><![CDATA[<p>Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex architecture to manage — just Python, SSH, and YAML. If you know your way around Linux, you can write a useful playbook today.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex master-client architecture to manage, no proprietary DSL to learn. Just Python, SSH, and YAML. If you can write a shell script and you know your way around Linux, you can write a useful Ansible playbook today.</p>
<h2>No Agents, No Drama: The Ansible Model</h2>
<p>Ansible&#8217;s architecture is deliberately simple. The control node (your workstation, a bastion host, or a CI/CD runner) connects to managed nodes over SSH (or WinRM for Windows), executes tasks, and disconnects. There&#8217;s no agent process running on managed nodes consuming resources or requiring updates. The only requirement on managed nodes is Python (which is present on virtually every Linux system) and a user account with appropriate sudo privileges.</p>
<p>Idempotency is Ansible&#8217;s most important design principle. Every built-in module is designed so that running a task multiple times produces the same result as running it once. If nginx is already installed and running, the <code>package</code> and <code>service</code> modules report &#8220;OK&#8221; rather than reinstalling or restarting. This means you can run your playbooks safely at any time — to enforce desired state, to verify compliance, or to bring a drifted system back in line — without worrying about destructive side effects.</p>
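<p>Idempotency is easy to see in practice. The sketch below (the package, path, and line are illustrative) reports &#8220;changed&#8221; on the first run and &#8220;ok&#8221; on every run after, because the desired state is already satisfied:</p>
<pre><code># idempotent.yml - same result no matter how many times it runs
---
- name: Demonstrate idempotent tasks
  hosts: webservers
  become: true

  tasks:
    - name: Ensure chrony is installed   # no-op if the package is present
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Ensure a sysctl line exists  # writes only if the line is missing
      ansible.builtin.lineinfile:
        path: /etc/sysctl.conf
        line: "net.core.somaxconn=1024"</code></pre>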
<h2>Your Inventory File</h2>
<p>Ansible&#8217;s inventory defines the hosts and groups your playbooks target. The simplest format is an INI-style text file. You can also use YAML format or dynamic inventory plugins that query cloud provider APIs (AWS EC2, Azure VMs, GCP instances) to build the inventory automatically from your actual running infrastructure — no manual list maintenance required.</p>
<p>Groups make targeting flexible. You might have a <code>[webservers]</code> group, a <code>[dbservers]</code> group, and a <code>[production]</code> group containing hosts from both. Group variables (stored in <code>group_vars/</code> directories) let you define different settings per group — staging might use a self-signed certificate while production uses a real one, but the same playbook handles both.</p>
<pre><code># inventory/hosts.ini — Sample inventory file
[webservers]
web01.example.com ansible_user=ubuntu
web02.example.com ansible_user=ubuntu

[dbservers]
db01.example.com ansible_user=ubuntu ansible_port=2222

[production:children]
webservers
dbservers

[production:vars]
env=production
nginx_worker_processes=4

# Test connectivity to all hosts:
# ansible all -i inventory/hosts.ini -m ping</code></pre>
<h2>Writing Your First Playbook</h2>
<p>A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines a list of tasks to execute in order. Tasks use modules — the unit of work in Ansible. There are thousands of built-in modules covering package management, file operations, service management, cloud resources, network devices, and more. The module name is self-documenting: <code>ansible.builtin.apt</code>, <code>ansible.builtin.copy</code>, <code>ansible.builtin.systemd</code>.</p>
<p>The playbook below installs nginx, deploys a custom configuration, and ensures the service is running and enabled. Run it with <code>ansible-playbook -i inventory/hosts.ini playbooks/nginx.yml</code>. Add <code>--check</code> for a dry run that shows what would change without making any changes.</p>
<pre><code># playbooks/nginx.yml — Install and configure nginx
---
- name: Configure web servers
  hosts: webservers
  become: true

  vars:
    nginx_port: 80
    server_name: "{{ inventory_hostname }}"

  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: "0644"
      notify: Reload nginx

    - name: Ensure nginx is started and enabled
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.systemd:
        name: nginx
        state: reloaded</code></pre>
<h2>Variables, Templates, and Handlers</h2>
<p>Variables in Ansible can be defined at multiple levels — in the playbook itself, in separate variable files, in the inventory, or passed on the command line with <code>-e</code>. Jinja2 templates (files ending in <code>.j2</code>) use double-brace syntax (<code>{{ variable_name }}</code>) to interpolate variables into configuration files. This is how one nginx template can produce correctly configured files for dozens of different servers, each with the right hostname, port, and SSL certificate path.</p>
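<p>The playbook above deploys <code>templates/nginx.conf.j2</code> without showing its contents; a minimal sketch of such a template might be:</p>
<pre><code># templates/nginx.conf.j2 - hypothetical template; nginx_port and
# server_name are interpolated from the play's vars
server {
    listen {{ nginx_port }};
    server_name {{ server_name }};

    root /var/www/html;

    location / {
        try_files $uri $uri/ =404;
    }
}</code></pre>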
<p>Handlers are tasks that only run when notified by another task. The classic use case: a task that updates a configuration file notifies the &#8220;Reload nginx&#8221; handler. If the configuration file didn&#8217;t change (because it was already correct), the handler never fires and nginx isn&#8217;t unnecessarily reloaded. If multiple tasks in a play all notify the same handler, the handler runs only once at the end of the play. This prevents redundant service restarts and makes your playbooks more efficient.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">28</post-id>	</item>
	</channel>
</rss>
