<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>DevOps Archives - InfoTech Ninja</title>
	<atom:link href="https://infotechninja.com/tag/devops/feed/" rel="self" type="application/rss+xml" />
	<link>https://infotechninja.com/tag/devops/</link>
	<description></description>
	<lastBuildDate>Tue, 21 Apr 2026 00:00:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Terraform on AWS: Managing Infrastructure as Code Without the Headaches</title>
		<link>https://infotechninja.com/terraform-aws-iac/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=terraform-aws-iac</link>
					<comments>https://infotechninja.com/terraform-aws-iac/#respond</comments>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & DevOps]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[CloudFormation]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=2</guid>

					<description><![CDATA[<p>Infrastructure as Code with Terraform has become the standard approach for managing cloud resources at any meaningful scale. Unlike CloudFormation, Terraform's provider model works across AWS, Azure, GCP, and dozens of other platforms.</p>
<p>The post <a href="https://infotechninja.com/terraform-aws-iac/">Terraform on AWS: Managing Infrastructure as Code Without the Headaches</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Infrastructure as Code with Terraform has become the standard approach for managing cloud resources at any meaningful scale. Unlike CloudFormation, Terraform&#8217;s provider model works across AWS, Azure, GCP, and dozens of other platforms. Once you understand the mental model, you&#8217;ll never want to click through the AWS console to provision infrastructure again.</p>
<h2>The Terraform Mental Model</h2>
<p>Terraform operates on a plan-then-apply lifecycle. You write declarative HCL (HashiCorp Configuration Language) code describing the desired state of your infrastructure. When you run <code>terraform plan</code>, Terraform compares your code against the current state (tracked in a state file) and produces a diff showing exactly what will be created, modified, or destroyed. Only when you run <code>terraform apply</code> does Terraform actually make API calls to provision or change resources.</p>
<p>This separation of plan and apply is what makes Terraform safe for production use. You can review every change before it happens, integrate plan output into pull request reviews, and require approvals for changes to production environments. The deterministic nature — the same code always produces the same infrastructure — is what separates IaC from the chaos of manually configured environments where nobody is quite sure what&#8217;s actually deployed or why.</p>
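<p>In practice, that lifecycle is just a handful of CLI commands. The sketch below assumes AWS credentials are already configured and that your HCL lives in the current directory:</p>
<pre><code># Typical plan-then-apply loop
terraform init              # download providers and configure the backend (once per directory)
terraform fmt -check        # keep formatting consistent before review
terraform plan -out=tfplan  # compute the diff against current state and save it
terraform apply tfplan      # apply exactly the plan that was reviewed, nothing else
terraform plan -destroy     # preview a teardown before ever running terraform destroy</code></pre>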
<h2>Structuring Your First AWS VPC</h2>
<p>A well-structured VPC is the foundation of secure AWS architecture. The standard pattern is a VPC with public subnets (for load balancers and NAT gateways), private subnets (for the application tier), and isolated subnets (for databases and other sensitive resources). Each subnet tier spans multiple availability zones for redundancy. Internet access for private subnets routes through NAT Gateways in the public subnets, so outbound traffic is possible without exposing private resources directly.</p>
<p>Terraform makes this repeatable. Define your CIDR blocks as variables, use <code>count</code> or <code>for_each</code> to create subnets across AZs dynamically, and use <code>aws_route_table</code> resources to wire up routing. A well-organized Terraform module for VPC creation can be instantiated repeatedly for dev, staging, and production environments with just a few variable changes.</p>
<pre><code># vpc/main.tf — Core VPC and subnets
variable "vpc_cidr"         { default = "10.0.0.0/16" }
variable "public_subnets"   { default = ["10.0.1.0/24", "10.0.2.0/24"] }
variable "private_subnets"  { default = ["10.0.11.0/24", "10.0.12.0/24"] }
variable "azs"              { default = ["us-east-1a", "us-east-1b"] }
variable "env"              { default = "prod" }

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = { Name = "${var.env}-vpc" }
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
  tags = { Name = "${var.env}-public-${count.index + 1}" }
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.azs[count.index]
  tags = { Name = "${var.env}-private-${count.index + 1}" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "${var.env}-igw" }
}</code></pre>
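<p>The NAT Gateway and route-table wiring described above can live in the same module. The continuation below is a sketch rather than part of the original file: resource names are illustrative, and it places a single NAT Gateway in the first public subnet rather than one per AZ.</p>
<pre><code># vpc/nat.tf (illustrative) - NAT gateway plus public/private routing
resource "aws_eip" "nat" {
  domain = "vpc"                              # AWS provider v5 syntax for a VPC-scoped EIP
  tags   = { Name = "${var.env}-nat-eip" }
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id     # single NAT in the first public subnet
  tags          = { Name = "${var.env}-nat" }
  depends_on    = [aws_internet_gateway.main]
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id # public subnets route out via the IGW
  }
  tags = { Name = "${var.env}-public-rt" }
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id  # private subnets route out via the NAT
  }
  tags = { Name = "${var.env}-private-rt" }
}

resource "aws_route_table_association" "public" {
  count          = length(var.public_subnets)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = length(var.private_subnets)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}</code></pre>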
<h2>State Files: The Most Important File You Never Edit</h2>
<p>Terraform&#8217;s state file (<code>terraform.tfstate</code>) is the source of truth that maps your HCL code to real cloud resources. Never edit it manually, never delete it, and never store it in a local filesystem for anything beyond personal experiments. For team environments and production use, store state in a remote backend: S3 with DynamoDB locking for AWS, Azure Blob Storage for Azure, or Terraform Cloud. Remote backends enable collaboration, state locking (preventing concurrent runs from corrupting state), and encryption at rest.</p>
<p>State locking is particularly important in CI/CD pipelines where multiple engineers or automated runs might trigger Terraform simultaneously. Without locking, two concurrent <code>terraform apply</code> runs can corrupt the state file, leaving your infrastructure in an unknown state. DynamoDB-backed state locking on AWS is straightforward to configure and adds minimal overhead. Make configuring remote state the very first thing you do in any new Terraform project — retrofitting it later is painful.</p>
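<p>A minimal backend block of the kind described here looks like the following; the bucket and table names are placeholders for resources you create (and version and encrypt) ahead of time:</p>
<pre><code># backend.tf (illustrative) - S3 remote state with DynamoDB locking
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"     # pre-created, versioned S3 bucket
    key            = "prod/vpc/terraform.tfstate"  # state path for this root
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"             # table with a "LockID" string hash key
    encrypt        = true                          # server-side encryption at rest
  }
}</code></pre>
<p>After adding the block, <code>terraform init</code> will detect the backend change and offer to migrate any existing local state into the bucket.</p>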
<h2>Modules for Reusable Infrastructure</h2>
<p>Terraform modules are the building blocks of reusable infrastructure. A module is simply a directory of <code>.tf</code> files with defined inputs (variables) and outputs. The VPC example above is a perfect module candidate — you define it once and call it with different parameters for each environment. The Terraform Registry hosts thousands of community and official modules for common patterns like VPCs, EKS clusters, RDS instances, and more, saving substantial boilerplate.</p>
<p>Module versioning is critical for production stability. Pin your module sources to specific version tags rather than using &#8220;latest&#8221; or a branch. When you&#8217;re ready to upgrade a module, test it in a non-production environment first. Structure your Terraform codebase into separate state roots (workspaces or separate directories) for each environment, with a shared module library. This prevents a staging change from accidentally affecting production state and gives you clear promotion paths through environments.</p>
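<p>In practice, a per-environment root then calls the module with a pinned version. The snippet below uses the community VPC module from the Terraform Registry purely as an illustration; the version number shown is arbitrary, and you would substitute your own module and release:</p>
<pre><code># envs/prod/main.tf (illustrative) - calling a versioned module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.8.1"                    # pin an exact release, never "latest" or a branch

  name            = "prod-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
}</code></pre>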
<p>The post <a href="https://infotechninja.com/terraform-aws-iac/">Terraform on AWS: Managing Infrastructure as Code Without the Headaches</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://infotechninja.com/terraform-aws-iac/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20</post-id>	</item>
		<item>
		<title>Kubernetes Networking Explained: Pods, Services, and Ingress Controllers</title>
		<link>https://infotechninja.com/kubernetes-networking/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=kubernetes-networking</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & DevOps]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Ingress]]></category>
		<category><![CDATA[K8s]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Networking]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=6</guid>

					<description><![CDATA[<p>Kubernetes networking is one of the most common stumbling blocks for people transitioning from traditional VM-based infrastructure. This guide walks through every layer from pod communication to external traffic management.</p>
<p>The post <a href="https://infotechninja.com/kubernetes-networking/">Kubernetes Networking Explained: Pods, Services, and Ingress Controllers</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Kubernetes networking is one of the most common stumbling blocks for people transitioning from traditional VM-based infrastructure. The networking model is fundamentally different from what you&#8217;re used to, but once it clicks, it&#8217;s actually quite elegant. This guide walks through every layer from pod communication to external traffic management.</p>
<h2>The Flat Network Model</h2>
<p>Kubernetes enforces a fundamental networking requirement: every pod must be able to communicate directly with every other pod in the cluster, without NAT. This &#8220;flat network&#8221; model is implemented by a Container Network Interface (CNI) plugin — popular choices include Calico, Flannel, Cilium, and Weave. Each pod gets its own IP address from a cluster-wide CIDR range, and the CNI plugin is responsible for ensuring pods across different nodes can reach each other.</p>
<p>This design has important implications. Because all pods share a flat network space, network policies (covered later) are your primary isolation mechanism — without them, any pod can talk to any other pod by default. This is fine for development but should be hardened for production multi-tenant workloads. The pod IP is ephemeral: when a pod is deleted and recreated, it gets a new IP. This is why you never communicate with pods directly by IP — that&#8217;s what Services are for.</p>
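<p>You can see the flat model directly from <code>kubectl</code>. The commands below are a quick sketch: the pod names, IPs, and the assumption that <code>curl</code> exists in the frontend image are all illustrative.</p>
<pre><code># Every pod gets its own cluster-wide IP (output abbreviated)
kubectl get pods -o wide
# NAME             READY   STATUS    IP            NODE
# api-7d4f9c...    1/1     Running   10.244.1.15   node-a
# frontend-5cb...  1/1     Running   10.244.2.9    node-b

# A pod on node-b can reach a pod on node-a by IP directly, with no NAT in between
kubectl exec deploy/frontend -- curl -s http://10.244.1.15:8080/healthz</code></pre>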
<h2>Services: ClusterIP, NodePort, LoadBalancer</h2>
<p>A Service is a stable network endpoint that abstracts a set of pods. Services use label selectors to determine which pods they route traffic to — when pods are replaced (during a rolling update, for example), the Service automatically routes to the new pods without any configuration change. The kube-proxy component on each node maintains iptables or IPVS rules that implement this routing.</p>
<p>ClusterIP is the default Service type — it creates a virtual IP that&#8217;s only reachable from within the cluster. NodePort exposes the service on a static port (from the 30000-32767 range by default) on every node&#8217;s IP — useful for development and simple setups, but not recommended for production. LoadBalancer provisions a cloud load balancer (an AWS ELB, GCP Load Balancer, etc.) that routes external traffic to the Service. For most production workloads, you don&#8217;t use LoadBalancer Services directly for HTTP/HTTPS — you use an Ingress controller instead.</p>
<pre><code># Service YAML — ClusterIP example for an internal API
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api-server
    tier: backend
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
---
# LoadBalancer Service for a public-facing component
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  selector:
    app: frontend
  ports:
    - port: 443
      targetPort: 8443
  type: LoadBalancer</code></pre>
<h2>Ingress: Your Single Entry Point</h2>
<p>An Ingress resource defines HTTP and HTTPS routing rules for external traffic entering the cluster. Instead of provisioning a separate LoadBalancer Service for every application (which creates a separate cloud load balancer and public IP for each), an Ingress controller provides a single entry point that routes traffic based on hostnames and URL paths to different backend Services. This is both more cost-effective and operationally cleaner.</p>
<p>The nginx Ingress controller is the most widely deployed, though cloud providers offer their own (AWS Load Balancer Controller, GKE Ingress). Cert-manager integrates with Ingress to automatically provision and renew TLS certificates from Let&#8217;s Encrypt, making TLS termination at the Ingress level straightforward. Combined, nginx Ingress plus cert-manager handles host-based routing and TLS for your entire cluster through a simple Ingress resource annotation.</p>
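<p>A sketch of that pattern is shown below. It assumes the nginx Ingress controller and cert-manager are already installed and that a ClusterIssuer named <code>letsencrypt-prod</code> exists; the hostname is a placeholder.</p>
<pre><code># Ingress - host-based routing with cert-manager-issued TLS (illustrative)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # assumes this ClusterIssuer exists
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-example-com-tls   # cert-manager creates and renews this Secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service       # the ClusterIP Service from the earlier example
                port:
                  number: 80</code></pre>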
<h2>Network Policies for Pod Isolation</h2>
<p>By default, Kubernetes allows all pods to communicate with all other pods. Network Policies let you restrict this by specifying allowed ingress and egress traffic using pod selectors, namespace selectors, and IP blocks. They&#8217;re implemented by the CNI plugin (Calico and Cilium have the richest Network Policy support). A good default posture is a &#8220;deny all&#8221; baseline policy in each namespace, then explicit allow policies for required communication paths.</p>
<p>Network Policies are additive — when multiple policies select a pod, the traffic allowed is the union of what each policy permits. This makes it safe to layer policies: a namespace-wide deny-all, plus specific allows for your application tiers. Writing and testing Network Policies is notoriously tricky; tools like Cilium&#8217;s visual Network Policy Editor, combined with careful testing in a staging environment before production rollout, will save you significant frustration.</p>
<pre><code># NetworkPolicy — deny all ingress to namespace, then allow specific paths
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-from-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080</code></pre>
<p>The post <a href="https://infotechninja.com/kubernetes-networking/">Kubernetes Networking Explained: Pods, Services, and Ingress Controllers</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">24</post-id>	</item>
		<item>
		<title>Docker for SysAdmins: Containers Without the Complexity</title>
		<link>https://infotechninja.com/docker-for-sysadmins/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=docker-for-sysadmins</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & DevOps]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Microservices]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=9</guid>

					<description><![CDATA[<p>Containers felt like a developer's tool for a long time, but that line has blurred. Today, Docker knowledge is expected of infrastructure engineers. This guide covers everything a sysadmin needs to get productive with containers.</p>
<p>The post <a href="https://infotechninja.com/docker-for-sysadmins/">Docker for SysAdmins: Containers Without the Complexity</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Containers felt like a developer&#8217;s tool for a long time — something the app team used while sysadmins kept running VMs. That line has blurred considerably. Today, Docker knowledge is expected of infrastructure engineers, and understanding containers makes you far better at managing the platforms (Kubernetes, ECS, App Service) that run them at scale.</p>
<h2>How Containers Differ from VMs</h2>
<p>A virtual machine runs a full guest operating system on top of a hypervisor. Each VM has its own kernel, its own memory allocation, and its own virtual hardware — which is why VMs take minutes to boot and consume gigabytes of RAM even before your application starts. Containers take a different approach: they share the host OS kernel and use Linux kernel features (namespaces for isolation, cgroups for resource limits) to create isolated process environments. A container starts in seconds and consumes only the memory your application actually needs.</p>
<p>The isolation a container provides is less complete than a VM&#8217;s — a kernel vulnerability could theoretically allow container escape. But for most workloads, the trade-off is worth it. The operational benefits — fast startup, small image sizes, easy replication, portable images that run the same everywhere — have made containers the dominant packaging format for modern applications. From a sysadmin perspective, think of containers as very lightweight, portable, and reproducible application packages.</p>
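<p>A quick way to see those kernel mechanisms at work is to start a container with explicit limits and compare the host and container views of the same processes; the image and limit values below are arbitrary:</p>
<pre><code># Start a container with cgroup-enforced resource limits
docker run -d --name web --memory 256m --cpus 0.5 -p 8080:80 nginx:alpine

docker top web                  # the container's processes as seen from the host
docker stats --no-stream web    # live cgroup CPU and memory usage
docker exec web ps              # the same processes from inside the PID namespace</code></pre>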
<h2>The Dockerfile: Your Repeatable Build</h2>
<p>A Dockerfile is a script that defines how to build a container image. It starts from a base image (typically an official image from Docker Hub — things like <code>ubuntu:22.04</code>, <code>python:3.12-slim</code>, or <code>nginx:alpine</code>), then layers on commands that install dependencies, copy application code, set environment variables, and configure the entry point. Each command in a Dockerfile creates a new layer in the image, and Docker caches these layers — only layers that change need to be rebuilt.</p>
<p>Image hygiene matters for security. Use minimal base images (Alpine or distroless images where possible), avoid running processes as root inside the container, use multi-stage builds to keep build tooling out of production images, and regularly rebuild images to pick up upstream security patches. A multi-stage Dockerfile compiles code in one stage and copies only the compiled artifact into a slim final stage, keeping the production image lean and attack-surface-minimal.</p>
<pre><code># Multi-stage Dockerfile for a Go application
# Stage 1: build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/server ./cmd/server

# Stage 2: minimal runtime image
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /bin/server /server
USER 65532:65532
EXPOSE 8080
ENTRYPOINT ["/server"]</code></pre>
<h2>Docker Compose for Multi-Container Apps</h2>
<p>Real applications rarely consist of a single container. A typical web stack might include an application container, a database, a cache (Redis), and a reverse proxy. Docker Compose lets you define and manage all of these as a single unit with a YAML file. With one command (<code>docker compose up -d</code>), Compose creates all containers, the networks between them, and any required volumes. It&#8217;s ideal for local development environments and small-scale production deployments.</p>
<p>Compose handles service dependencies (start the database before the app), environment variable injection, log aggregation, and scaling (spin up multiple app replicas behind a load balancer). For local development, Compose can bind-mount your source code directory into the container, so code changes are reflected immediately without rebuilding the image. This eliminates the &#8220;works on my machine&#8221; problem — everyone on the team uses the exact same environment defined in the Compose file.</p>
<pre><code># docker-compose.yml — Web app with PostgreSQL and Redis
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://appuser:secret@db:5432/appdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:</code></pre>
<h2>Data Persistence with Volumes</h2>
<p>Containers are ephemeral by default — when a container is removed, any data written inside it is lost. For stateful applications like databases, you need Docker volumes. Named volumes are managed by Docker and stored in a Docker-managed directory on the host. They survive container removal, can be backed up, and can be shared between containers. Named volumes are the recommended approach for production data.</p>
<p>Bind mounts map a specific path on the host filesystem directly into the container. They&#8217;re extremely useful during development (mount your source code in, see changes immediately) but less ideal for production because they create tight coupling between the container and the host filesystem layout. For production databases and stateful services, always use named volumes. For secrets and configuration, consider Docker secrets or environment variable injection rather than bind-mounting sensitive files from the host.</p>
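<p>A minimal sketch of both approaches, with illustrative names and paths:</p>
<pre><code># Named volume, managed by Docker - recommended for stateful services
docker volume create pgdata
docker run -d --name db \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

# Bind mount - handy in development, couples the container to the host layout
docker run -d --name devapp -v "$(pwd)/src:/app/src" myapp:dev

# See where Docker actually stores the named volume on the host
docker volume inspect pgdata</code></pre>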
<p>The post <a href="https://infotechninja.com/docker-for-sysadmins/">Docker for SysAdmins: Containers Without the Complexity</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">27</post-id>	</item>
		<item>
		<title>Getting Started with Ansible: Write Your First Playbook in 30 Minutes</title>
		<link>https://infotechninja.com/ansible-playbooks-beginner/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ansible-playbooks-beginner</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=10</guid>

					<description><![CDATA[<p>Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex architecture to manage — just Python, SSH, and YAML. If you know your way around Linux, you can write a useful playbook today.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex master-client architecture to manage, no proprietary DSL to learn. Just Python, SSH, and YAML. If you can write a shell script and you know your way around Linux, you can write a useful Ansible playbook today.</p>
<h2>No Agents, No Drama: The Ansible Model</h2>
<p>Ansible&#8217;s architecture is deliberately simple. The control node (your workstation, a bastion host, or a CI/CD runner) connects to managed nodes over SSH (or WinRM for Windows), executes tasks, and disconnects. There&#8217;s no agent process running on managed nodes consuming resources or requiring updates. The only requirement on managed nodes is Python (which is present on virtually every Linux system) and a user account with appropriate sudo privileges.</p>
<p>Idempotency is Ansible&#8217;s most important design principle. Every built-in module is designed so that running a task multiple times produces the same result as running it once. If nginx is already installed and running, the <code>package</code> and <code>service</code> modules report &#8220;OK&#8221; rather than reinstalling or restarting. This means you can run your playbooks safely at any time — to enforce desired state, to verify compliance, or to bring a drifted system back in line — without worrying about destructive side effects.</p>
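<p>You can watch idempotency happen with a one-off ad-hoc command; the group name and inventory path below match the examples later in this post, and the output is abbreviated:</p>
<pre><code># Run the same task twice - the second run changes nothing
ansible webservers -i inventory/hosts.ini -b \
  -m ansible.builtin.package -a "name=nginx state=present"
# web01.example.com | CHANGED => {"changed": true, ...}    (first run)
# web01.example.com | SUCCESS => {"changed": false, ...}   (second run)</code></pre>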
<h2>Your Inventory File</h2>
<p>Ansible&#8217;s inventory defines the hosts and groups your playbooks target. The simplest format is an INI-style text file. You can also use YAML format or dynamic inventory plugins that query cloud provider APIs (AWS EC2, Azure VMs, GCP instances) to build the inventory automatically from your actual running infrastructure — no manual list maintenance required.</p>
<p>Groups make targeting flexible. You might have a <code>[webservers]</code> group, a <code>[dbservers]</code> group, and a <code>[production]</code> group containing hosts from both. Group variables (stored in <code>group_vars/</code> directories) let you define different settings per group — staging might use a self-signed certificate while production uses a real one, but the same playbook handles both.</p>
<pre><code># inventory/hosts.ini — Sample inventory file
[webservers]
web01.example.com ansible_user=ubuntu
web02.example.com ansible_user=ubuntu

[dbservers]
db01.example.com ansible_user=ubuntu ansible_port=2222

[production:children]
webservers
dbservers

[production:vars]
env=production
nginx_worker_processes=4

# Test connectivity to all hosts:
# ansible all -i inventory/hosts.ini -m ping</code></pre>
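<p>The same per-group settings can also live in <code>group_vars/</code> files next to the inventory rather than in inline <code>:vars</code> sections; the values below are illustrative:</p>
<pre><code># group_vars/production.yml - applies to every host in the production group
env: production
nginx_worker_processes: 4

# group_vars/webservers.yml - applies only to the webservers group
nginx_port: 80
tls_certificate: /etc/ssl/certs/example.pem</code></pre>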
<h2>Writing Your First Playbook</h2>
<p>A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines a list of tasks to execute in order. Tasks use modules — the unit of work in Ansible. There are thousands of built-in modules covering package management, file operations, service management, cloud resources, network devices, and more. The module name is self-documenting: <code>ansible.builtin.apt</code>, <code>ansible.builtin.copy</code>, <code>ansible.builtin.systemd</code>.</p>
<p>The playbook below installs nginx, deploys a custom configuration, and ensures the service is running and enabled. Run it with <code>ansible-playbook -i inventory/hosts.ini playbooks/nginx.yml</code>. Add <code>--check</code> for a dry run that shows what would change without making any changes.</p>
<pre><code># playbooks/nginx.yml — Install and configure nginx
---
- name: Configure web servers
  hosts: webservers
  become: true

  vars:
    nginx_port: 80
    server_name: "{{ inventory_hostname }}"

  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: "0644"
      notify: Reload nginx

    - name: Ensure nginx is started and enabled
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.systemd:
        name: nginx
        state: reloaded</code></pre>
<h2>Variables, Templates, and Handlers</h2>
<p>Variables in Ansible can be defined at multiple levels — in the playbook itself, in separate variable files, in the inventory, or passed on the command line with <code>-e</code>. Jinja2 templates (files ending in <code>.j2</code>) use double-brace syntax (<code>{{ variable_name }}</code>) to interpolate variables into configuration files. This is how one nginx template can produce correctly configured files for dozens of different servers, each with the right hostname, port, and SSL certificate path.</p>
<p>Handlers are tasks that only run when notified by another task. The classic use case: a task that updates a configuration file notifies the &#8220;Reload nginx&#8221; handler. If the configuration file didn&#8217;t change (because it was already correct), the handler never fires and nginx isn&#8217;t unnecessarily reloaded. If multiple tasks in a play all notify the same handler, the handler runs only once at the end of the play. This prevents redundant service restarts and makes your playbooks more efficient.</p>
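<p>For completeness, here is a sketch of what the <code>templates/nginx.conf.j2</code> referenced by the playbook above might contain; the paths and the minimal server block are illustrative:</p>
<pre><code># templates/nginx.conf.j2 - rendered per host with that host's variables
server {
    listen {{ nginx_port }};
    server_name {{ server_name }};

    access_log /var/log/nginx/{{ inventory_hostname }}_access.log;

    location / {
        root /var/www/html;
        index index.html;
    }
}</code></pre>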
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">28</post-id>	</item>
	</channel>
</rss>
