<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Linux Archives - InfoTech Ninja</title>
	<atom:link href="https://infotechninja.com/tag/linux/feed/" rel="self" type="application/rss+xml" />
	<link>https://infotechninja.com/tag/linux/</link>
	<description></description>
	<lastBuildDate>Tue, 10 Mar 2026 00:00:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Docker for SysAdmins: Containers Without the Complexity</title>
		<link>https://infotechninja.com/docker-for-sysadmins/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=docker-for-sysadmins</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & DevOps]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Microservices]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=9</guid>

					<description><![CDATA[<p>Containers felt like a developer's tool for a long time, but the line between development and operations tooling has blurred. Today, Docker knowledge is expected of infrastructure engineers. This guide covers everything a sysadmin needs to get productive with containers.</p>
<p>The post <a href="https://infotechninja.com/docker-for-sysadmins/">Docker for SysAdmins: Containers Without the Complexity</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Containers felt like a developer&#8217;s tool for a long time — something the app team used while sysadmins kept running VMs. That line has blurred considerably. Today, Docker knowledge is expected of infrastructure engineers, and understanding containers makes you far better at managing the platforms (Kubernetes, ECS, App Service) that run them at scale.</p>
<h2>How Containers Differ from VMs</h2>
<p>A virtual machine runs a full guest operating system on top of a hypervisor. Each VM has its own kernel, its own memory allocation, and its own virtual hardware — which is why VMs take minutes to boot and consume gigabytes of RAM even before your application starts. Containers take a different approach: they share the host OS kernel and use Linux kernel features (namespaces for isolation, cgroups for resource limits) to create isolated process environments. A container starts in seconds and consumes only the memory your application actually needs.</p>
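<p>You can see this isolation first-hand with a quick experiment (a minimal sketch, assuming Docker is installed and the daemon is running):</p>
<pre><code># Inside the container, ps sees only the container processes:
# the command you ran is PID 1, thanks to the PID namespace.
docker run --rm alpine ps aux

# cgroup limits are just flags on docker run: cap memory and CPU
# for this container without affecting the host or other containers.
docker run --rm --memory 256m --cpus 0.5 alpine sh -c 'echo constrained'</code></pre>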
<p>The isolation a container provides is less complete than a VM&#8217;s — a kernel vulnerability could theoretically allow container escape. But for most workloads, the trade-off is worth it. The operational benefits — fast startup, small image sizes, easy replication, portable images that run the same everywhere — have made containers the dominant packaging format for modern applications. From a sysadmin perspective, think of containers as very lightweight, portable, and reproducible application packages.</p>
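<p>When that isolation gap matters, Docker exposes hardening flags that narrow it. A short sketch (the flag values are illustrative, not prescriptive):</p>
<pre><code># Run with a read-only root filesystem, no Linux capabilities,
# no privilege escalation, and a cap on the number of processes.
docker run --rm \
    --read-only \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    --pids-limit 100 \
    alpine id</code></pre>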
<h2>The Dockerfile: Your Repeatable Build</h2>
<p>A Dockerfile is a script that defines how to build a container image. It starts from a base image (typically an official image from Docker Hub — things like <code>ubuntu:22.04</code>, <code>python:3.12-slim</code>, or <code>nginx:alpine</code>), then layers on commands that install dependencies, copy application code, set environment variables, and configure the entry point. Filesystem-modifying instructions (<code>RUN</code>, <code>COPY</code>, <code>ADD</code>) each create a new layer in the image, and Docker caches these layers — a change invalidates only that layer and every layer after it, so earlier layers are reused on rebuild.</p>
<p>Image hygiene matters for security. Use minimal base images (Alpine or distroless images where possible), avoid running processes as root inside the container, use multi-stage builds to keep build tooling out of production images, and regularly rebuild images to pick up upstream security patches. A multi-stage Dockerfile compiles code in one stage and copies only the compiled artifact into a slim final stage, keeping the production image lean and attack-surface-minimal.</p>
<pre><code># Multi-stage Dockerfile for a Go application
# Stage 1: build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/server ./cmd/server

# Stage 2: minimal runtime image
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /bin/server /server
USER 65532:65532
EXPOSE 8080
ENTRYPOINT ["/server"]</code></pre>
<h2>Docker Compose for Multi-Container Apps</h2>
<p>Real applications rarely consist of a single container. A typical web stack might include an application container, a database, a cache (Redis), and a reverse proxy. Docker Compose lets you define and manage all of these as a single unit with a YAML file. With one command (<code>docker compose up -d</code>), Compose creates all containers, the networks between them, and any required volumes. It&#8217;s ideal for local development environments and small-scale production deployments.</p>
<p>Compose handles service dependencies (start the database before the app), environment variable injection, log aggregation, and scaling: <code>docker compose up --scale app=3</code> starts three app replicas, though you still need a reverse proxy in front to balance traffic across them. For local development, Compose can bind-mount your source code directory into the container, so code changes are reflected immediately without rebuilding the image. This eliminates the &#8220;works on my machine&#8221; problem — everyone on the team uses the exact same environment defined in the Compose file.</p>
<pre><code># docker-compose.yml — Web app with PostgreSQL and Redis
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://appuser:secret@db:5432/appdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:</code></pre>
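<p>Day-to-day Compose usage against a file like this is a handful of commands:</p>
<pre><code># Start the whole stack in the background
docker compose up -d

# Follow logs for just the app service
docker compose logs -f app

# Scale the app service to three replicas (drop the fixed host
# port mapping first, or the published ports will collide)
docker compose up -d --scale app=3

# Tear everything down; add -v to remove the named volumes too
docker compose down</code></pre>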
<h2>Data Persistence with Volumes</h2>
<p>Containers are ephemeral by default — when a container is removed, any data written inside it is lost. For stateful applications like databases, you need Docker volumes. Named volumes are managed by Docker and stored under its data directory on the host (typically <code>/var/lib/docker/volumes</code> on Linux). They survive container removal, can be backed up, and can be shared between containers. Named volumes are the recommended approach for production data.</p>
<p>Bind mounts map a specific path on the host filesystem directly into the container. They&#8217;re extremely useful during development (mount your source code in, see changes immediately) but less ideal for production because they create tight coupling between the container and the host filesystem layout. For production databases and stateful services, always use named volumes. For secrets and configuration, consider Docker secrets or environment variable injection rather than bind-mounting sensitive files from the host.</p>
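<p>A few commands show the difference in practice (the container and volume names are illustrative):</p>
<pre><code># Named volume: survives container removal, managed by Docker
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=secret \
    -v pgdata:/var/lib/postgresql/data postgres:16-alpine

# Bind mount for development: host path left, container path right
docker run -d --name devapp -v "$(pwd)/src:/app/src" myapp:latest

# Back up a named volume via a throwaway container that mounts it read-only
docker run --rm -v pgdata:/data:ro -v "$(pwd):/backup" alpine \
    tar czf /backup/pgdata-backup.tar.gz -C /data .</code></pre>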
<p>The post <a href="https://infotechninja.com/docker-for-sysadmins/">Docker for SysAdmins: Containers Without the Complexity</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">27</post-id>	</item>
		<item>
		<title>Getting Started with Ansible: Write Your First Playbook in 30 Minutes</title>
		<link>https://infotechninja.com/ansible-playbooks-beginner/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ansible-playbooks-beginner</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=10</guid>

					<description><![CDATA[<p>Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex architecture to manage — just Python, SSH, and YAML. If you know your way around Linux, you can write a useful playbook today.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex master-client architecture to manage, no proprietary DSL to learn. Just Python, SSH, and YAML. If you can write a shell script and you know your way around Linux, you can write a useful Ansible playbook today.</p>
<h2>No Agents, No Drama: The Ansible Model</h2>
<p>Ansible&#8217;s architecture is deliberately simple. The control node (your workstation, a bastion host, or a CI/CD runner) connects to managed nodes over SSH (or WinRM for Windows), executes tasks, and disconnects. There&#8217;s no agent process running on managed nodes consuming resources or requiring updates. The only requirements on managed nodes are Python (which is present on virtually every Linux system) and a user account, with sudo privileges for any tasks that need root.</p>
<p>Idempotency is Ansible&#8217;s most important design principle. Nearly every built-in module is designed so that running a task multiple times produces the same result as running it once (the <code>command</code> and <code>shell</code> modules are the notable exceptions; they run whatever you give them, every time). If nginx is already installed and running, the <code>package</code> and <code>service</code> modules report &#8220;OK&#8221; rather than reinstalling or restarting. This means you can run your playbooks safely at any time — to enforce desired state, to verify compliance, or to bring a drifted system back in line — without worrying about destructive side effects.</p>
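<p>You can watch idempotency work with an ad-hoc command before writing any playbook (a sketch; the inventory file and <code>webservers</code> group are defined in the next section):</p>
<pre><code># Ensure nginx is present on all webservers; the first run reports "changed"
ansible webservers -i inventory/hosts.ini -b \
    -m ansible.builtin.package -a "name=nginx state=present"

# Run the same command again: idempotency means it now reports "ok"
# and changes nothing
ansible webservers -i inventory/hosts.ini -b \
    -m ansible.builtin.package -a "name=nginx state=present"</code></pre>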
<h2>Your Inventory File</h2>
<p>Ansible&#8217;s inventory defines the hosts and groups your playbooks target. The simplest format is an INI-style text file. You can also use YAML format or dynamic inventory plugins that query cloud provider APIs (AWS EC2, Azure VMs, GCP instances) to build the inventory automatically from your actual running infrastructure — no manual list maintenance required.</p>
<p>Groups make targeting flexible. You might have a <code>[webservers]</code> group, a <code>[dbservers]</code> group, and a <code>[production]</code> group containing hosts from both. Group variables (stored in <code>group_vars/</code> directories) let you define different settings per group — staging might use a self-signed certificate while production uses a real one, but the same playbook handles both.</p>
<pre><code># inventory/hosts.ini — Sample inventory file
[webservers]
web01.example.com ansible_user=ubuntu
web02.example.com ansible_user=ubuntu

[dbservers]
db01.example.com ansible_user=ubuntu ansible_port=2222

[production:children]
webservers
dbservers

[production:vars]
env=production
nginx_worker_processes=4

# Test connectivity to all hosts:
# ansible all -i inventory/hosts.ini -m ping</code></pre>
<h2>Writing Your First Playbook</h2>
<p>A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines a list of tasks to execute in order. Tasks use modules — the unit of work in Ansible. Thousands of modules ship across Ansible&#8217;s built-in set and its collections, covering package management, file operations, service management, cloud resources, network devices, and more. The module name is self-documenting: <code>ansible.builtin.apt</code>, <code>ansible.builtin.copy</code>, <code>ansible.builtin.systemd</code>.</p>
<p>The playbook below installs nginx, deploys a custom configuration, and ensures the service is running and enabled. Run it with <code>ansible-playbook -i inventory/hosts.ini playbooks/nginx.yml</code>. Add <code>--check</code> for a dry run that shows what would change without making any changes.</p>
<pre><code># playbooks/nginx.yml — Install and configure nginx
---
- name: Configure web servers
  hosts: webservers
  become: true

  vars:
    nginx_port: 80
    server_name: "{{ inventory_hostname }}"

  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: "0644"
      notify: Reload nginx

    - name: Ensure nginx is started and enabled
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.systemd:
        name: nginx
        state: reloaded</code></pre>
<h2>Variables, Templates, and Handlers</h2>
<p>Variables in Ansible can be defined at multiple levels — in the playbook itself, in separate variable files, in the inventory, or passed on the command line with <code>-e</code>. Jinja2 templates (files ending in <code>.j2</code>) use double-brace syntax (<code>{{ variable_name }}</code>) to interpolate variables into configuration files. This is how one nginx template can produce correctly configured files for dozens of different servers, each with the right hostname, port, and SSL certificate path.</p>
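<p>As a sketch, the <code>templates/nginx.conf.j2</code> file referenced by the playbook above might look like this; <code>nginx_port</code> and <code>server_name</code> come from the play&#8217;s <code>vars</code> block, while the document root and <code>try_files</code> rule are illustrative:</p>
<pre><code># templates/nginx.conf.j2: rendered per host by the template module
server {
    listen {{ nginx_port }};
    server_name {{ server_name }};

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}</code></pre>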
<p>Handlers are tasks that only run when notified by another task. The classic use case: a task that updates a configuration file notifies the &#8220;Reload nginx&#8221; handler. If the configuration file didn&#8217;t change (because it was already correct), the handler never fires and nginx isn&#8217;t unnecessarily reloaded. If multiple tasks in a play all notify the same handler, the handler runs only once at the end of the play. This prevents redundant service restarts and makes your playbooks more efficient.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">28</post-id>	</item>
		<item>
		<title>Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</title>
		<link>https://infotechninja.com/bash-scripting-linux/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=bash-scripting-linux</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 13 Jan 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Scripting]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=13</guid>

					<description><![CDATA[<p>Bash scripting is the foundational automation skill for Linux administrators. Before you reach for Python or Ansible, a well-written shell script can handle the majority of day-to-day tasks. This guide goes from variables all the way to a production-ready disk monitoring script.</p>
<p>The post <a href="https://infotechninja.com/bash-scripting-linux/">Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Bash scripting is the foundational automation skill for Linux administrators. Before you reach for Python or Ansible, a well-written shell script can handle the majority of day-to-day automation tasks — and it runs natively on every Linux system without installing anything. This guide takes you from variables to a complete monitoring script that would be at home on any production server.</p>
<h2>Variables, Input, and Output</h2>
<p>Bash variables are untyped strings by default. Assign without spaces around the equals sign, and reference with a dollar sign prefix. Double-quote your variable references to prevent word splitting on values that contain spaces. Use <code>$(command)</code> syntax (command substitution) to capture command output into a variable. The <code>readonly</code> keyword makes a variable immutable — useful for constants in your scripts.</p>
<p>The <code>read</code> builtin captures user input interactively. For scripts that shouldn&#8217;t be interactive, pass values via positional parameters (<code>$1</code>, <code>$2</code>, etc.) or environment variables. Always validate input: check that required arguments were provided, that file paths exist before operating on them, and that numeric values are actually numeric. Failing fast with a clear error message is far better than letting a script proceed with bad input and silently corrupt data.</p>
<pre><code>#!/usr/bin/env bash
# Variables and I/O basics
set -euo pipefail   # exit on error, undefined vars, pipe failures

# Variables
SCRIPT_NAME="$(basename "$0")"
LOG_DIR="/var/log/myscripts"
DATESTAMP="$(date +%Y%m%d_%H%M%S)"
readonly LOG_DIR

# Positional arguments with default
TARGET_HOST="${1:-localhost}"
PORT="${2:-80}"

# Validate numeric
if ! [[ "$PORT" =~ ^[0-9]+$ ]]; then
    echo "ERROR: PORT must be a number, got: $PORT" >&2
    exit 1
fi

# Read from user interactively
read -rp "Enter backup label: " LABEL
echo "Backing up $TARGET_HOST:$PORT labeled '$LABEL' at $DATESTAMP"</code></pre>
<h2>Conditionals and File Tests</h2>
<p>Bash conditionals use the <code>test</code> command (whose synonym is <code>[</code>) or the bash keyword <code>[[ ]]</code> (prefer the double-bracket form, which handles empty strings and special characters more safely). File tests are especially useful in sysadmin scripts: <code>-f</code> tests for a regular file, <code>-d</code> for a directory, <code>-r</code> for readable, <code>-w</code> for writable, <code>-x</code> for executable, and <code>-s</code> for non-empty. Combining tests with <code>&amp;&amp;</code> and <code>||</code> inside a single <code>[[ ]]</code> keeps the whole condition in one expression instead of chaining separate test invocations.</p>
<p>Always check for file existence before operating on it, check for required commands before assuming they&#8217;re available, and check return codes after operations that can fail. The <code>command -v</code> pattern checks whether a command exists in PATH without executing it. Exit codes matter: <code>0</code> is success, anything else is failure. Functions should <code>return</code> meaningful exit codes so callers can react appropriately.</p>
<pre><code>#!/usr/bin/env bash
# Conditionals and file tests

config_file="/etc/myapp/config.yaml"
backup_dir="/backup/$(date +%Y%m%d)"

# Check file exists and is readable
if [[ -f "$config_file" &amp;&amp; -r "$config_file" ]]; then
    echo "Config found: $config_file"
elif [[ -e "$config_file" ]]; then
    echo "ERROR: Config exists but is not readable" >&2; exit 1
else
    echo "ERROR: Config not found at $config_file" >&2; exit 1
fi

# Create backup dir if missing
[[ -d "$backup_dir" ]] || mkdir -p "$backup_dir"

# Check required command exists
if ! command -v rsync &amp;>/dev/null; then
    echo "ERROR: rsync not installed" >&2; exit 1
fi

# Numeric comparison (df -P keeps each filesystem on one line for awk)
disk_used=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
if (( disk_used > 90 )); then
    echo "WARNING: Root partition ${disk_used}% full"
fi</code></pre>
<h2>Loops: for, while, until</h2>
<p>Bash provides three loop constructs. The <code>for</code> loop iterates over a list or glob expansion — invaluable for processing multiple files, hosts, or values. The <code>while</code> loop continues as long as a condition is true, commonly used with <code>read</code> to process file contents line by line. The <code>until</code> loop is the inverse: it continues until a condition becomes true, useful for polling until a service becomes available or a file appears.</p>
<p>Avoid the common pitfall of <code>for file in $(ls /path/)</code> — it breaks on filenames with spaces. Use glob expansion directly: <code>for file in /path/*.log</code>. When reading files line by line, use the <code>while IFS= read -r line</code> pattern — the <code>IFS=</code> prevents leading/trailing whitespace from being stripped, and <code>-r</code> prevents backslash interpretation. These small details matter when your scripts need to handle real-world filenames and content.</p>
<pre><code>#!/usr/bin/env bash
# Loop examples

# for loop — process multiple servers
servers=("web01" "web02" "db01" "db02")
for server in "${servers[@]}"; do
    echo "Checking $server..."
    if ping -c1 -W2 "$server" &amp;>/dev/null; then
        echo "  $server: ONLINE"
    else
        echo "  $server: OFFLINE" >&2
    fi
done

# Process log file line by line
while IFS= read -r line; do
    if [[ "$line" == *"ERROR"* ]]; then
        echo "Found error: $line"
    fi
done < /var/log/app/app.log

# until loop — wait for a service to respond
port=5432
until nc -z localhost "$port" 2>/dev/null; do
    echo "Waiting for port $port to open..."
    sleep 3
done
echo "Service on port $port is ready."</code></pre>
<h2>Building a Disk Monitoring Script</h2>
<p>Putting the pieces together: a complete disk monitoring script that checks disk usage across specified mount points, logs warnings when thresholds are exceeded, and sends an email alert when a critical threshold is hit. Schedule it with cron to run every 15 minutes: <code>*/15 * * * * /usr/local/bin/disk-monitor.sh</code>.</p>
<pre><code>#!/usr/bin/env bash
# disk-monitor.sh — Check disk usage and alert on threshold breach
set -euo pipefail

# Configuration
WARN_THRESHOLD=80
CRIT_THRESHOLD=90
ALERT_EMAIL="admin@example.com"
LOG_FILE="/var/log/disk-monitor.log"
MOUNT_POINTS=("/" "/var" "/home" "/data")
HOSTNAME="$(hostname -f)"
TIMESTAMP="$(date '+%Y-%m-%d %H:%M:%S')"

# Function: log a message
log() { echo "[$TIMESTAMP] $*" | tee -a "$LOG_FILE"; }

# Function: send alert email
send_alert() {
    local mount="$1" used="$2"
    echo "CRITICAL: Disk usage on $HOSTNAME:$mount is at ${used}%" | \
        mail -s "[DISK ALERT] $HOSTNAME $mount at ${used}%" "$ALERT_EMAIL"
    log "ALERT sent for $mount at ${used}%"
}

alert_needed=false

for mp in "${MOUNT_POINTS[@]}"; do
    [[ -d "$mp" ]] || { log "SKIP: $mp not found"; continue; }
    used=$(df "$mp" | awk 'NR==2 {print $5}' | tr -d '%')

    if (( used >= CRIT_THRESHOLD )); then
        log "CRITICAL: $mp is at ${used}%"
        send_alert "$mp" "$used"
        alert_needed=true
    elif (( used >= WARN_THRESHOLD )); then
        log "WARNING: $mp is at ${used}%"
    else
        log "OK: $mp is at ${used}%"
    fi
done

$alert_needed &amp;&amp; exit 2
exit 0</code></pre>
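<p>Deploying it is a copy and a crontab entry (a sketch, assuming root access and a standard cron daemon):</p>
<pre><code># Install the script and schedule it every 15 minutes
install -m 755 disk-monitor.sh /usr/local/bin/disk-monitor.sh
( crontab -l 2>/dev/null; echo '*/15 * * * * /usr/local/bin/disk-monitor.sh' ) | crontab -</code></pre>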
<p>The post <a href="https://infotechninja.com/bash-scripting-linux/">Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">31</post-id>	</item>
	</channel>
</rss>
