<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Automation Archives - InfoTech Ninja</title>
	<atom:link href="https://infotechninja.com/tag/automation/feed/" rel="self" type="application/rss+xml" />
	<link>https://infotechninja.com/tag/automation/</link>
	<description></description>
	<lastBuildDate>Tue, 07 Apr 2026 00:00:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>PowerShell 7: 10 Scripts Every SysAdmin Should Have in Their Toolkit</title>
		<link>https://infotechninja.com/powershell-7-sysadmin-scripts/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=powershell-7-sysadmin-scripts</link>
					<comments>https://infotechninja.com/powershell-7-sysadmin-scripts/#respond</comments>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[ActiveDirectory]]></category>
		<category><![CDATA[PowerShell]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<category><![CDATA[Windows]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=4</guid>

					<description><![CDATA[<p>PowerShell 7 (built on .NET 6+) is a genuine upgrade from Windows PowerShell 5.1. It's cross-platform, significantly faster for parallel workloads, and brings modern language features that make complex automation dramatically cleaner.</p>
<p>The post <a href="https://infotechninja.com/powershell-7-sysadmin-scripts/">PowerShell 7: 10 Scripts Every SysAdmin Should Have in Their Toolkit</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">PowerShell 7 (built on .NET 6+) is a genuine upgrade from Windows PowerShell 5.1. It&#8217;s cross-platform, significantly faster for parallel workloads, and brings modern language features that make complex automation dramatically cleaner. If you&#8217;re still defaulting to PS 5.1 out of habit, this article will convince you to make the switch and give you scripts worth keeping.</p>
<h2>Why PS7 Changes Everything for SysAdmins</h2>
<p>The headline feature for infrastructure work is <code>ForEach-Object -Parallel</code>. In PowerShell 5.1, looping over hundreds of servers to run a command was sequential — painfully slow when each operation involves a network call. In PS7, adding <code>-Parallel</code> to your ForEach-Object pipeline runs iterations concurrently (up to a configurable throttle limit), collapsing a 10-minute sequential run to under a minute. Combined with the <code>-ThrottleLimit</code> parameter, you get controlled parallelism without overwhelming your network or target systems.</p>
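<p>The shape of the pattern is worth seeing once. A minimal sketch (the hostnames are placeholders): each parallel iteration runs in its own runspace, so variables from the caller&#8217;s scope must be referenced with the <code>$using:</code> prefix.</p>
<pre><code># Sketch: ping many servers concurrently (hostnames are placeholders)
$servers = 1..50 | ForEach-Object { "SRV{0:d2}" -f $_ }
$timeoutSec = 2

$results = $servers | ForEach-Object -Parallel {
    # Caller-scope variables need $using: inside the -Parallel script block
    $ok = Test-Connection -TargetName $_ -Count 1 -TimeoutSeconds $using:timeoutSec -Quiet
    [pscustomobject]@{ Server = $_; Reachable = $ok }
} -ThrottleLimit 10

$results | Where-Object { -not $_.Reachable }</code></pre>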
<p>PowerShell 7 also ships with null-coalescing operators (<code>??</code> and <code>??=</code>), pipeline chain operators (<code>&amp;&amp;</code> and <code>||</code>), ternary expressions, and significantly improved error handling. The <code>Get-Error</code> cmdlet provides structured, detailed error information that makes debugging complex scripts far easier. Module compatibility has improved too — most PS 5.1 modules work in PS7 via a compatibility shim, though a handful of modules that rely on Windows-only COM components remain PS5.1-only.</p>
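<p>A quick, illustrative tour of those operators (the values are arbitrary):</p>
<pre><code># Null-coalescing: fall back when the left-hand side is $null
$logPath = $env:APP_LOG ?? "C:\Logs\app.log"

$retries = $null
$retries ??= 3                     # assign only if currently $null

# Ternary expression
$isWeekend = (Get-Date).DayOfWeek -in 'Saturday', 'Sunday'
$rate = $isWeekend ? 'off-peak' : 'peak'

# Pipeline chain operators: &amp;&amp; runs the next command only if the previous one succeeded
Get-Item $logPath -ErrorAction SilentlyContinue &amp;&amp; Write-Host "Log file exists"

# After any error, inspect the full structured details
Get-Error</code></pre>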
<h2>Bulk AD User Management</h2>
<p>Managing Active Directory users at scale through the GUI is tedious and error-prone. PowerShell with the ActiveDirectory module makes bulk operations straightforward and auditable. Common tasks like disabling accounts for departed employees, resetting passwords, updating department attributes for an org restructure, or moving users between OUs all lend themselves to one-liners or short scripts that you can test in a non-production OU first.</p>
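<p>One example of that one-liner style — previewing, then disabling, accounts that have been inactive for 90 days. The 90-day window is an arbitrary example value; keep <code>-WhatIf</code> until you trust the result.</p>
<pre><code># Find user accounts inactive for 90+ days and preview disabling them
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Disable-ADAccount -WhatIf</code></pre>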
<p>The script below processes a CSV file of user updates — useful when HR sends over a spreadsheet of 200 employees who need their department and manager attributes updated after a reorg. Run it with <code>-WhatIf</code> first to preview changes without applying them, then remove the switch for the actual run.</p>
<pre><code># BulkUpdateADUsers.ps1 — Update AD attributes from CSV
# CSV columns: SamAccountName, Department, Manager, Title
#Requires -Modules ActiveDirectory

param(
    [Parameter(Mandatory)][string]$CsvPath,
    [switch]$WhatIf
)

$users = Import-Csv -Path $CsvPath
$results = [System.Collections.Concurrent.ConcurrentBag[object]]::new()

$users | ForEach-Object -Parallel {
    $user = $_          # keep a copy: inside catch, $_ becomes the error record
    $bag = $using:results
    $whatIf = $using:WhatIf
    try {
        $params = @{
            Identity   = $user.SamAccountName
            Department = $user.Department
            Title      = $user.Title
            Manager    = (Get-ADUser $user.Manager).DistinguishedName
            WhatIf     = $whatIf.IsPresent
        }
        Set-ADUser @params
        $bag.Add([pscustomobject]@{ User=$user.SamAccountName; Status="OK" })
    } catch {
        $bag.Add([pscustomobject]@{ User=$user.SamAccountName; Status="FAIL: $_" })
    }
} -ThrottleLimit 20

$results | Export-Csv -Path ".\update-results.csv" -NoTypeInformation
Write-Host "Done. Results at .\update-results.csv"</code></pre>
<h2>Automated Patch Reporting</h2>
<p>Keeping track of patch status across a fleet of Windows servers is a common pain point. WSUS gives you a dashboard, but exporting useful reports for management or auditors is clunky. A PowerShell script that queries hotfix history across multiple servers and generates a clean report is something every Windows admin should have. The script below uses PS7&#8217;s parallel foreach to query multiple servers simultaneously, dramatically reducing the time it takes to gather data.</p>
<p>Combine this with a scheduled task or Azure Automation runbook to generate weekly patch compliance reports automatically. Export to CSV for easy import into Excel or your ITSM tool, or format as HTML for email distribution. Adding logic to flag servers that haven&#8217;t received updates in more than 30 days gives you an actionable compliance metric for your next audit.</p>
<pre><code># Get-PatchReport.ps1 — Query hotfix status across multiple servers
param([string[]]$Servers = @("SRV01","SRV02","SRV03"))

$report = $Servers | ForEach-Object -Parallel {
    $server = $_
    try {
        # Keep only the most recently installed hotfix
        $latest = Get-HotFix -ComputerName $server -ErrorAction Stop |
            Sort-Object InstalledOn -Descending |
            Select-Object -First 1
        [pscustomobject]@{
            Server       = $server
            LastPatch    = $latest.HotFixID
            InstalledOn  = $latest.InstalledOn
            DaysSince    = (New-TimeSpan -Start $latest.InstalledOn -End (Get-Date)).Days
            Status       = "Online"
        }
    } catch {
        [pscustomobject]@{ Server=$server; LastPatch="N/A"; InstalledOn="N/A"; DaysSince=999; Status="Error: $_" }
    }
} -ThrottleLimit 10

$report | Sort-Object DaysSince -Descending | Format-Table -AutoSize
$report | Export-Csv ".\patch-report-$(Get-Date -f yyyyMMdd).csv" -NoTypeInformation</code></pre>
<h2>Calling REST APIs from PowerShell</h2>
<p><code>Invoke-RestMethod</code> is PowerShell&#8217;s built-in REST client, and it&#8217;s surprisingly capable. It automatically deserializes JSON responses into PowerShell objects, handles common authentication schemes, and supports all HTTP methods. Combined with PS7&#8217;s improved performance and parallelism, you can build lightweight integration scripts between your on-prem tooling and cloud APIs without pulling in external dependencies or standing up middleware.</p>
<p>A common use case: querying your monitoring tool&#8217;s API to get a list of alerts, then correlating them with your CMDB API to enrich the data before posting to a Teams channel via the incoming webhook API. Three API calls, all handled with <code>Invoke-RestMethod</code>, tied together in a script that runs every 15 minutes as a scheduled task. It&#8217;s not glamorous, but it&#8217;s the kind of practical automation that saves your team hours every week.</p>
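<p>A sketch of the webhook end of that pipeline. Both URLs and the shape of the alerts response are placeholders — substitute your monitoring tool&#8217;s real endpoints and fields.</p>
<pre><code># Query an alerts API with a bearer token, then post a summary to Teams.
# The URLs and response shape below are illustrative placeholders.
$token  = $env:MONITOR_API_TOKEN
$alerts = Invoke-RestMethod -Uri "https://monitor.example.com/api/v1/alerts?state=open" `
                            -Headers @{ Authorization = "Bearer $token" }

$payload = @{ text = "Open alerts: $($alerts.Count)" } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $env:TEAMS_WEBHOOK_URL `
    -ContentType 'application/json' -Body $payload</code></pre>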
<p>The post <a href="https://infotechninja.com/powershell-7-sysadmin-scripts/">PowerShell 7: 10 Scripts Every SysAdmin Should Have in Their Toolkit</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://infotechninja.com/powershell-7-sysadmin-scripts/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">22</post-id>	</item>
		<item>
		<title>Getting Started with Ansible: Write Your First Playbook in 30 Minutes</title>
		<link>https://infotechninja.com/ansible-playbooks-beginner/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ansible-playbooks-beginner</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=10</guid>

					<description><![CDATA[<p>Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex architecture to manage — just Python, SSH, and YAML. If you know your way around Linux, you can write a useful playbook today.</p>
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Ansible is the Swiss Army knife of infrastructure automation. No agents to install, no complex master-client architecture to manage, no proprietary DSL to learn. Just Python, SSH, and YAML. If you can write a shell script and you know your way around Linux, you can write a useful Ansible playbook today.</p>
<h2>No Agents, No Drama: The Ansible Model</h2>
<p>Ansible&#8217;s architecture is deliberately simple. The control node (your workstation, a bastion host, or a CI/CD runner) connects to managed nodes over SSH (or WinRM for Windows), executes tasks, and disconnects. There&#8217;s no agent process running on managed nodes consuming resources or requiring updates. The only requirement on managed nodes is Python (which is present on virtually every Linux system) and a user account with appropriate sudo privileges.</p>
<p>Idempotency is Ansible&#8217;s most important design principle. Every built-in module is designed so that running a task multiple times produces the same result as running it once. If nginx is already installed and running, the <code>package</code> and <code>service</code> modules report &#8220;OK&#8221; rather than reinstalling or restarting. This means you can run your playbooks safely at any time — to enforce desired state, to verify compliance, or to bring a drifted system back in line — without worrying about destructive side effects.</p>
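<p>You can see idempotency directly with an ad-hoc command. Running the same module twice against a host looks roughly like this (output abridged and illustrative):</p>
<pre><code>$ ansible webservers -i inventory/hosts.ini -b -m ansible.builtin.package -a "name=nginx state=present"
web01.example.com | CHANGED => {"changed": true, ...}

$ # Second run: nginx is already present, so nothing changes
$ ansible webservers -i inventory/hosts.ini -b -m ansible.builtin.package -a "name=nginx state=present"
web01.example.com | SUCCESS => {"changed": false, ...}</code></pre>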
<h2>Your Inventory File</h2>
<p>Ansible&#8217;s inventory defines the hosts and groups your playbooks target. The simplest format is an INI-style text file. You can also use YAML format or dynamic inventory plugins that query cloud provider APIs (AWS EC2, Azure VMs, GCP instances) to build the inventory automatically from your actual running infrastructure — no manual list maintenance required.</p>
<p>Groups make targeting flexible. You might have a <code>[webservers]</code> group, a <code>[dbservers]</code> group, and a <code>[production]</code> group containing hosts from both. Group variables (stored in <code>group_vars/</code> directories) let you define different settings per group — staging might use a self-signed certificate while production uses a real one, but the same playbook handles both.</p>
<pre><code># inventory/hosts.ini — Sample inventory file
[webservers]
web01.example.com ansible_user=ubuntu
web02.example.com ansible_user=ubuntu

[dbservers]
db01.example.com ansible_user=ubuntu ansible_port=2222

[production:children]
webservers
dbservers

[production:vars]
env=production
nginx_worker_processes=4

# Test connectivity to all hosts:
# ansible all -i inventory/hosts.ini -m ping</code></pre>
<h2>Writing Your First Playbook</h2>
<p>A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines a list of tasks to execute in order. Tasks use modules — the unit of work in Ansible. A core set of built-in modules ships with Ansible itself, covering package management, file operations, and service management, and thousands more are available through collections for cloud resources, network devices, and other domains. The module name is self-documenting: <code>ansible.builtin.apt</code>, <code>ansible.builtin.copy</code>, <code>ansible.builtin.systemd</code>.</p>
<p>The playbook below installs nginx, deploys a custom configuration, and ensures the service is running and enabled. Run it with <code>ansible-playbook -i inventory/hosts.ini playbooks/nginx.yml</code>. Add <code>--check</code> for a dry run that shows what would change without making any changes.</p>
<pre><code># playbooks/nginx.yml — Install and configure nginx
---
- name: Configure web servers
  hosts: webservers
  become: true

  vars:
    nginx_port: 80
    server_name: "{{ inventory_hostname }}"

  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: "0644"
      notify: Reload nginx

    - name: Ensure nginx is started and enabled
      ansible.builtin.systemd:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.systemd:
        name: nginx
        state: reloaded</code></pre>
<h2>Variables, Templates, and Handlers</h2>
<p>Variables in Ansible can be defined at multiple levels — in the playbook itself, in separate variable files, in the inventory, or passed on the command line with <code>-e</code>. Jinja2 templates (files ending in <code>.j2</code>) use double-brace syntax (<code>{{ variable_name }}</code>) to interpolate variables into configuration files. This is how one nginx template can produce correctly configured files for dozens of different servers, each with the right hostname, port, and SSL certificate path.</p>
<p>Handlers are tasks that only run when notified by another task. The classic use case: a task that updates a configuration file notifies the &#8220;Reload nginx&#8221; handler. If the configuration file didn&#8217;t change (because it was already correct), the handler never fires and nginx isn&#8217;t unnecessarily reloaded. If multiple tasks in a play all notify the same handler, the handler runs only once at the end of the play. This prevents redundant service restarts and makes your playbooks more efficient.</p>
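<p>For concreteness, a minimal <code>templates/nginx.conf.j2</code> that the playbook above could deploy might look like this — the variable names match the play&#8217;s <code>vars</code>, while the rest of the file is illustrative:</p>
<pre><code># templates/nginx.conf.j2 — rendered per host by the template module
server {
    listen {{ nginx_port }};
    server_name {{ server_name }};

    root /var/www/html;
    index index.html index.htm;

    access_log /var/log/nginx/{{ inventory_hostname }}_access.log;
}</code></pre>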
<p>The post <a href="https://infotechninja.com/ansible-playbooks-beginner/">Getting Started with Ansible: Write Your First Playbook in 30 Minutes</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">28</post-id>	</item>
		<item>
		<title>Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</title>
		<link>https://infotechninja.com/bash-scripting-linux/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=bash-scripting-linux</link>
		
		<dc:creator><![CDATA[Morris James]]></dc:creator>
		<pubDate>Tue, 13 Jan 2026 00:00:00 +0000</pubDate>
				<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Scripting]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">https://infotechninja.com/?p=13</guid>

					<description><![CDATA[<p>Bash scripting is the foundational automation skill for Linux administrators. Before you reach for Python or Ansible, a well-written shell script can handle the majority of day-to-day tasks. This guide goes from variables all the way to a production-ready disk monitoring script.</p>
<p>The post <a href="https://infotechninja.com/bash-scripting-linux/">Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="entry-lead">Bash scripting is the foundational automation skill for Linux administrators. Before you reach for Python or Ansible, a well-written shell script can handle the majority of day-to-day automation tasks — and it runs natively on every Linux system without installing anything. This guide takes you from variables to a complete monitoring script that would be at home on any production server.</p>
<h2>Variables, Input, and Output</h2>
<p>Bash variables are untyped strings by default. Assign without spaces around the equals sign, and reference with a dollar sign prefix. Double-quote your variable references to prevent word splitting on values that contain spaces. Use <code>$(command)</code> syntax (command substitution) to capture command output into a variable. The <code>readonly</code> keyword makes a variable immutable — useful for constants in your scripts.</p>
<p>The <code>read</code> builtin captures user input interactively. For scripts that shouldn&#8217;t be interactive, pass values via positional parameters (<code>$1</code>, <code>$2</code>, etc.) or environment variables. Always validate input: check that required arguments were provided, that file paths exist before operating on them, and that numeric values are actually numeric. Failing fast with a clear error message is far better than letting a script proceed with bad input and silently corrupt data.</p>
<pre><code>#!/usr/bin/env bash
# Variables and I/O basics
set -euo pipefail   # exit on error, undefined vars, pipe failures

# Variables
SCRIPT_NAME="$(basename "$0")"
LOG_DIR="/var/log/myscripts"
DATESTAMP="$(date +%Y%m%d_%H%M%S)"
readonly LOG_DIR

# Positional arguments with default
TARGET_HOST="${1:-localhost}"
PORT="${2:-80}"

# Validate numeric
if ! [[ "$PORT" =~ ^[0-9]+$ ]]; then
    echo "ERROR: PORT must be a number, got: $PORT" >&2
    exit 1
fi

# Read from user interactively
read -rp "Enter backup label: " LABEL
echo "Backing up $TARGET_HOST:$PORT labeled '$LABEL' at $DATESTAMP"</code></pre>
<h2>Conditionals and File Tests</h2>
<p>Bash conditionals use the <code>test</code> command (also spelled <code>[ ]</code>) or the <code>[[ ]]</code> keyword — prefer the double-bracket form, which handles empty strings and special characters more safely. File tests are especially useful in sysadmin scripts: <code>-f</code> tests for a regular file, <code>-d</code> for a directory, <code>-r</code> for readable, <code>-w</code> for writable, <code>-x</code> for executable, and <code>-s</code> for non-empty. Combining tests with <code>&amp;&amp;</code> and <code>||</code> inside <code>[[ ]]</code> keeps the whole condition in one expression and avoids the deprecated <code>-a</code>/<code>-o</code> operators of single-bracket <code>test</code>.</p>
<p>Always check for file existence before operating on it, check for required commands before assuming they&#8217;re available, and check return codes after operations that can fail. The <code>command -v</code> pattern checks whether a command exists in PATH without executing it. Exit codes matter: <code>0</code> is success, anything else is failure. Functions should <code>return</code> meaningful exit codes so callers can react appropriately.</p>
<pre><code>#!/usr/bin/env bash
# Conditionals and file tests

config_file="/etc/myapp/config.yaml"
backup_dir="/backup/$(date +%Y%m%d)"

# Check file exists and is readable
if [[ -f "$config_file" &amp;&amp; -r "$config_file" ]]; then
    echo "Config found: $config_file"
elif [[ -e "$config_file" ]]; then
    echo "ERROR: Config exists but is not readable" >&2; exit 1
else
    echo "ERROR: Config not found at $config_file" >&2; exit 1
fi

# Create backup dir if missing
[[ -d "$backup_dir" ]] || mkdir -p "$backup_dir"

# Check required command exists
if ! command -v rsync &amp;>/dev/null; then
    echo "ERROR: rsync not installed" >&2; exit 1
fi

# Numeric comparison
disk_used=$(df / | awk 'NR==2 {print $5}' | tr -d '%')
if (( disk_used > 90 )); then
    echo "WARNING: Root partition ${disk_used}% full"
fi</code></pre>
<h2>Loops: for, while, until</h2>
<p>Bash provides three loop constructs. The <code>for</code> loop iterates over a list or glob expansion — invaluable for processing multiple files, hosts, or values. The <code>while</code> loop continues as long as a condition is true, commonly used with <code>read</code> to process file contents line by line. The <code>until</code> loop is the inverse: it continues until a condition becomes true, useful for polling until a service becomes available or a file appears.</p>
<p>Avoid the common pitfall of <code>for file in $(ls /path/)</code> — it breaks on filenames with spaces. Use glob expansion directly: <code>for file in /path/*.log</code>. When reading files line by line, use the <code>while IFS= read -r line</code> pattern — the <code>IFS=</code> prevents leading/trailing whitespace from being stripped, and <code>-r</code> prevents backslash interpretation. These small details matter when your scripts need to handle real-world filenames and content.</p>
<pre><code>#!/usr/bin/env bash
# Loop examples

# for loop — process multiple servers
servers=("web01" "web02" "db01" "db02")
for server in "${servers[@]}"; do
    echo "Checking $server..."
    if ping -c1 -W2 "$server" &amp;>/dev/null; then
        echo "  $server: ONLINE"
    else
        echo "  $server: OFFLINE" >&2
    fi
done

# Process log file line by line
while IFS= read -r line; do
    if [[ "$line" == *"ERROR"* ]]; then
        echo "Found error: $line"
    fi
done < /var/log/app/app.log

# until loop — wait for a service to respond
port=5432
until nc -z localhost "$port" 2>/dev/null; do
    echo "Waiting for port $port to open..."
    sleep 3
done
echo "Service on port $port is ready."</code></pre>
<h2>Building a Disk Monitoring Script</h2>
<p>Putting the pieces together: a complete disk monitoring script that checks disk usage across specified mount points, logs warnings when thresholds are exceeded, and sends an email alert when a critical threshold is hit. Schedule it with cron to run every 15 minutes: <code>*/15 * * * * /usr/local/bin/disk-monitor.sh</code>.</p>
<pre><code>#!/usr/bin/env bash
# disk-monitor.sh — Check disk usage and alert on threshold breach
set -euo pipefail

# Configuration
WARN_THRESHOLD=80
CRIT_THRESHOLD=90
ALERT_EMAIL="admin@example.com"
LOG_FILE="/var/log/disk-monitor.log"
MOUNT_POINTS=("/" "/var" "/home" "/data")
HOSTNAME="$(hostname -f)"

# Function: log a message with a fresh timestamp on each call
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"; }

# Function: send alert email
send_alert() {
    local mount="$1" used="$2"
    echo "CRITICAL: Disk usage on $HOSTNAME:$mount is at ${used}%" | \
        mail -s "[DISK ALERT] $HOSTNAME $mount at ${used}%" "$ALERT_EMAIL"
    log "ALERT sent for $mount at ${used}%"
}

alert_needed=false

for mp in "${MOUNT_POINTS[@]}"; do
    [[ -d "$mp" ]] || { log "SKIP: $mp not found"; continue; }
    used=$(df "$mp" | awk 'NR==2 {print $5}' | tr -d '%')

    if (( used >= CRIT_THRESHOLD )); then
        log "CRITICAL: $mp is at ${used}%"
        send_alert "$mp" "$used"
        alert_needed=true
    elif (( used >= WARN_THRESHOLD )); then
        log "WARNING: $mp is at ${used}%"
    else
        log "OK: $mp is at ${used}%"
    fi
done

$alert_needed &amp;&amp; exit 2
exit 0</code></pre>
<p>The post <a href="https://infotechninja.com/bash-scripting-linux/">Bash Scripting for Linux SysAdmins: From Beginner to Dangerous</a> appeared first on <a href="https://infotechninja.com">InfoTech Ninja</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">31</post-id>	</item>
	</channel>
</rss>
