
🐳 Module 4: Containers and Docker

Learning Objectives: You will understand the differences between containers and VMs, know the Docker architecture, be able to write Dockerfiles, use Docker Compose, and deploy containers for network automation.

⏱️ Duration: approx. 90 minutes | 🔧 Hands-on: 6 exercises

⚠️ Working Environment Change: GitLab Linux Machine

From this module on, we will be working on the GitLab Linux machine! Docker is installed there, and that is where the containers will run later in the CI/CD pipeline.

How to connect: Open mRemoteNG on your desktop and connect to the GitLab machine (SSH).

mRemoteNG: Establishing the connection to the GitLab machine

All commands and files in this and the following modules will be executed on the Linux console. We create files using vi.

Part 1: Containers vs. Virtual Machines

The Moving Boxes Analogy

Imagine you are moving. There are two strategies:

🏠

Strategy A: Take the Whole House (VM)

You do not just pack your furniture -- you take the entire house with you. Foundation, walls, roof, heating, everything. Works anywhere, but extremely heavy and slow to transport.

📦

Strategy B: Moving Boxes (Container)

You only pack what you really need -- furniture, books, clothing. The new house (operating system) is already there. Fast, light, efficient.

Technical Architecture Comparison

The fundamental difference lies in the virtualization layer:

Virtual Machine (VM):

┌────────────┬────────────┬────────────┐
│   App 1    │   App 2    │   App 3    │
├────────────┼────────────┼────────────┤
│  Guest OS  │  Guest OS  │  Guest OS  │
├────────────┴────────────┴────────────┤
│ Hypervisor (VMware, KVM, Hyper-V)    │
├──────────────────────────────────────┤
│ Host Operating System                │
├──────────────────────────────────────┤
│ Hardware                             │
└──────────────────────────────────────┘

Container:

┌────────────┬────────────┬────────────┐
│   App 1    │   App 2    │   App 3    │
├────────────┴────────────┴────────────┤
│ Container Runtime (Docker Engine)    │
├──────────────────────────────────────┤
│ Host Operating System (Linux Kernel) │
├──────────────────────────────────────┤
│ Hardware                             │
└──────────────────────────────────────┘

Comparison Table: VM vs. Container

| Property           | Virtual Machine      | Container             |
|--------------------|----------------------|-----------------------|
| Startup Time       | Minutes              | Seconds               |
| Size               | Gigabytes (10-50 GB) | Megabytes (50-500 MB) |
| Resources          | Fixed reservation    | Dynamically shared    |
| Isolation          | Complete (own OS)    | Process-level         |
| Operating System   | Any (Linux, Windows) | Shares host kernel    |
| Instances per Host | 10-20                | 100-1000+             |
💡 When to use what?
  • VMs: When you need a different OS (Windows on Linux), complete isolation is critical, or legacy software is running
  • Containers: For microservices, CI/CD pipelines, consistent development environments, and when speed matters

The Magic Behind Containers: Linux Kernel Features

Containers are not new technology -- they leverage Linux features that have existed for years:

  • Namespaces: Isolate processes, network, filesystem (each container only sees "its own" system)
  • cgroups: Limit CPU, RAM, I/O per container
  • Union Filesystems: Enable layer-based images (only store changes)
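You can inspect these kernel interfaces without Docker. A minimal sketch on a typical Linux host (the exact namespace names vary by kernel version):

```shell
# Every Linux process already runs inside a set of namespaces.
# Docker simply creates fresh ones for each container.
ls /proc/self/ns        # namespace handles of the current shell (net, pid, mnt, ...)

# Each handle links to a namespace ID. Processes in the same namespace
# see the same ID; a process inside a container sees different ones.
readlink /proc/self/ns/pid
```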

Part 2: Understanding Docker Architecture

Docker is like a ready-meal system: There are recipes (Dockerfiles), ready-made meals in the fridge (images), and the microwave to heat them up (Docker Engine).

The Docker Components

⚙️

Docker Daemon (dockerd)

The background service that creates, starts, and manages containers. Runs as a root process on the host.

💻

Docker CLI

The command line (docker command). Communicates with the daemon via a REST API.

📦

Docker Images

Read-only templates with all dependencies. Consist of layers -- each change is stacked as a new layer.

πŸƒ

Container

A running instance of an image. Has a writable layer on top (discarded when deleted).

πŸͺ

Registry

Image storage. Docker Hub (public), or private registries (GitLab Container Registry, Harbor, Nexus).

📝

Dockerfile

Text file with build instructions for an image. Each line creates a layer.

The Docker Workflow Visualized


┌─────────────┐    docker build     ┌─────────────┐
│  Dockerfile │ ──────────────────▶ │   Image     │
└─────────────┘                     └──────┬──────┘
                                           │
                                    docker push
                                           │
                                           ▼
┌─────────────┐                     ┌─────────────┐
│  Registry   │ ◀────────────────── │ Docker Hub  │
│  (GitLab)   │     docker pull     │   / Private │
└──────┬──────┘                     └─────────────┘
       │
  docker run
       │
       ▼
┌─────────────┐
│  Container  │ ◀── Running Instance
└─────────────┘

Image Layers: The Onion Principle

Images consist of layers -- like an onion. Each command in the Dockerfile creates a new layer:


┌────────────────────────────────────┐
│  Layer 5: CMD ansible-playbook     │  ← Start command
├────────────────────────────────────┤
│  Layer 4: RUN ansible-galaxy...    │  ← Collections
├────────────────────────────────────┤
│  Layer 3: RUN pip install...       │  ← Python packages
├────────────────────────────────────┤
│  Layer 2: RUN apt-get update...    │  ← System packages
├────────────────────────────────────┤
│  Layer 1: python:3.11-slim         │  ← Base image
└────────────────────────────────────┘
💡 Why Layers Matter
  • Caching: Unchanged layers are reused → faster builds
  • Sharing: Multiple images share the same base layers → less storage
  • Best Practice: Place frequently changing commands at the end of the Dockerfile
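Applied to the layers above, a cache-friendly Dockerfile puts the stable parts first. The fragment below is a sketch with hypothetical paths:

```dockerfile
# Cache-friendly ordering: rarely changing layers first
FROM python:3.11-slim

# Dependencies change rarely -> this layer stays cached across most builds
COPY requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Application code changes often -> only this layer is rebuilt
COPY app/ /app/
```

If the COPY of the application code came first, every code change would invalidate the pip layer and force a full reinstall on each build.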

Part 3: Mastering Docker Commands

Managing Images

docker images shows all locally available images with tags and sizes
# Download image from Docker Hub
docker pull python:3.11

# With specific tag
docker pull python:3.11-slim

# Show all local images
docker images

# Delete image
docker rmi python:3.11

# Clean up unused images
docker image prune -a

Starting and Managing Containers

docker ps shows all running containers with ID, image, status, and ports
# Start container interactively (with terminal)
docker run -it python:3.11 bash

# Start container in the background
docker run -d --name my-python python:3.11 sleep infinity

# Container with automatic deletion after exit
docker run --rm -it python:3.11 python -c "print('Hello!')"

# Show running containers
docker ps

# All containers (including stopped)
docker ps -a

# Stop container
docker stop my-python

# Start container (after stop)
docker start my-python

# Delete container
docker rm my-python

# Delete all stopped containers
docker container prune

Connecting to Running Containers

# Open shell in running container
docker exec -it my-python bash

# Execute a single command
docker exec my-python cat /etc/os-release

# Log in as root (if user is different)
docker exec -it --user root my-python bash

Files and Volumes

# Mount directory into container (Bind Mount)
docker run -it -v ${PWD}:/workspace python:3.11 bash

# Create and use named volume
docker volume create my-volume
docker run -it -v my-volume:/data python:3.11 bash

# Copy file into container
docker cp local-file.txt my-python:/tmp/

# Copy file from container
docker cp my-python:/tmp/result.txt ./

Networking

# Port forwarding (Host:Container)
docker run -d -p 8080:80 nginx

# Container in host network (no NAT)
docker run --network host nginx

# Create custom network
docker network create my-network

# Start container in network
docker run -d --network my-network --name web nginx

# Show networks
docker network ls
🔧 Exercise 1: Check Docker Version and Start First Containers (~10 Min)

Get familiar with Docker:

  1. Check Docker installation:
    # Show version
    docker --version
    
    # Detailed info
    docker info
    
    # Run test container
    docker run hello-world
    docker info shows details about the Docker installation and configuration
  2. Start an interactive Ubuntu container:
    # Ubuntu container with Bash
    docker run -it ubuntu:22.04 bash
    
    # Inside the container: explore the system
    cat /etc/os-release
    whoami
    pwd
    ls -la
    
    # Exit the container
    exit
  3. Understand the container lifecycle:
    # Start container in the background
    docker run -d --name test-container ubuntu:22.04 sleep 300
    
    # Check status
    docker ps
    
    # Connect to container
    docker exec -it test-container bash
    exit
    
    # Stop and delete container
    docker stop test-container
    docker rm test-container

Part 4: Writing Dockerfiles

A Dockerfile is like a cooking recipe: Step by step, it describes how the image is built.

Dockerfile Commands

| Command    | Description                                     | Example                             |
|------------|-------------------------------------------------|-------------------------------------|
| FROM       | Base image                                      | FROM python:3.11-slim               |
| RUN        | Execute command (during build)                  | RUN pip install ansible             |
| COPY       | Copy files into image                           | COPY requirements.txt /app/         |
| ADD        | Like COPY, can also handle URLs/archives        | ADD app.tar.gz /app/                |
| WORKDIR    | Set working directory                           | WORKDIR /app                        |
| ENV        | Set environment variable                        | ENV ANSIBLE_HOST_KEY_CHECKING=False |
| EXPOSE     | Document port                                   | EXPOSE 8080                         |
| CMD        | Default command (easily overridden at run time) | CMD ["python", "app.py"]            |
| ENTRYPOINT | Main command (overridden only via --entrypoint) | ENTRYPOINT ["ansible-playbook"]     |
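The interplay of CMD and ENTRYPOINT is a frequent point of confusion, so it is worth spelling out. The fragment below is a sketch; the image name my-image is hypothetical:

```dockerfile
# ENTRYPOINT is the fixed command; CMD supplies its default arguments
ENTRYPOINT ["ansible-playbook"]
CMD ["--help"]

# docker run my-image                    -> ansible-playbook --help
# docker run my-image site.yml           -> ansible-playbook site.yml  (CMD replaced)
# docker run --entrypoint bash my-image  -> bash  (ENTRYPOINT replaced, CMD dropped)
```

Arguments after the image name replace CMD, while --entrypoint replaces ENTRYPOINT and clears the image's CMD.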

Example: Python Network Tools Container

A Dockerfile in VS Code with color syntax highlighting
📄 Dockerfile
# Base image: Slim Python
FROM python:3.11-slim

# Metadata
LABEL maintainer="netzwerk-team@firma.de"
LABEL description="Python with network automation tools"

# Install system packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    openssh-client \
    iputils-ping \
    dnsutils \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install Python packages
COPY requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Working directory
WORKDIR /workspace

# Default command
CMD ["python3"]
📄 requirements.txt
netmiko==4.2.0
napalm==4.1.0
paramiko==3.4.0
netaddr==0.9.0
jinja2==3.1.2
pyyaml==6.0.1
requests==2.31.0
nornir==3.4.1
nornir-netmiko==1.0.1

Building and Tagging Images

docker build shows each layer step and the build progress
# Build image (in directory with Dockerfile)
docker build -t network-tools:latest .

# Tag with version
docker build -t network-tools:1.0.0 .

# Tag for GitLab Registry
docker build -t 198.18.133.100:5050/workshop/network-tools:1.0.0 .

# Ignore build cache (clean build)
docker build --no-cache -t network-tools:latest .
🔧 Exercise 2: Python Container with Network Libraries (~15 Min)
  1. Create project directory:
    mkdir docker-netzwerk
    cd docker-netzwerk
  2. Create requirements.txt:
    vi requirements.txt

    Insert the following content (press i to enter insert mode; when finished, press Esc and type :wq to save):

    📄 requirements.txt
    netmiko==4.2.0
    netaddr==0.9.0
    paramiko==3.4.0
    jinja2==3.1.2
    pyyaml==6.0.1
  3. Create Dockerfile:
    vi Dockerfile
    📄 Dockerfile
    FROM python:3.11-slim
    
    RUN apt-get update && apt-get install -y --no-install-recommends \
        openssh-client iputils-ping \
        && rm -rf /var/lib/apt/lists/*
    
    COPY requirements.txt /tmp/
    RUN pip install --no-cache-dir -r /tmp/requirements.txt
    
    WORKDIR /workspace
    CMD ["python3"]
  4. Build image:
    docker build -t netzwerk-python:1.0 .
  5. Test container:
    docker run -it --rm netzwerk-python:1.0
    📄 In the Python prompt
    from netaddr import IPNetwork
    for ip in IPNetwork('192.168.1.0/30'):
        print(ip)
    
    exit()
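The same subnet walk also works outside the container with nothing but the Python standard library, which is handy as a quick cross-check on the GitLab machine (assuming python3 is installed there):

```python
# Stdlib cross-check of the netaddr example: walk a /30 subnet.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/30")

# All four addresses, like iterating netaddr's IPNetwork
print([str(ip) for ip in net])
# -> ['192.168.1.0', '192.168.1.1', '192.168.1.2', '192.168.1.3']

# Only the usable host addresses (network and broadcast excluded)
print([str(ip) for ip in net.hosts()])
# -> ['192.168.1.1', '192.168.1.2']
```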
🔧 Exercise 3: Custom Dockerfile for Network Analysis (~15 Min)

Build a container with network analysis tools:

  1. Create new directory:
    mkdir docker-nettools
    cd docker-nettools
    vi Dockerfile
    📄 Dockerfile
    FROM ubuntu:22.04
    
    LABEL maintainer="your-name"
    LABEL purpose="Network Troubleshooting Tools"
    
    # Avoid interactive prompts
    ENV DEBIAN_FRONTEND=noninteractive
    
    # Install network tools
    RUN apt-get update && apt-get install -y --no-install-recommends \
        iputils-ping \
        traceroute \
        dnsutils \
        netcat-openbsd \
        curl \
        wget \
        tcpdump \
        nmap \
        iproute2 \
        net-tools \
        iperf3 \
        mtr \
        && rm -rf /var/lib/apt/lists/*
    
    WORKDIR /work
    
    CMD ["/bin/bash"]
  2. Build image:
    docker build -t nettools:1.0 .
  3. Test tools:
    # Start container
    docker run -it --rm nettools:1.0
    
    # Inside the container:
    ping -c 3 8.8.8.8
    dig google.com
    traceroute google.com
    exit

Part 5: Volumes and Persistent Data

Containers are ephemeral like soap bubbles -- when they are gone, the data is gone too. For persistent data you need volumes.

Three Types of Volumes

📁

Bind Mounts

Mount a host directory directly. Good for development.

-v /host/path:/container/path
💾

Named Volumes

Docker-managed volumes. Portable and simple.

-v volume-name:/container/path
💨

tmpfs Mounts

In RAM, not persistent. For temporary/sensitive data.

--tmpfs /container/path
# Bind Mount: Mount local directory
docker run -it -v ${PWD}:/workspace python:3.11 bash

# Create and use named volume
docker volume create ansible-data
docker run -it -v ansible-data:/data python:3.11 bash

# Inspect volume
docker volume inspect ansible-data

# Show all volumes
docker volume ls

# Delete unused volumes
docker volume prune
🔧 Exercise 4: Mount a Container with a Volume (~10 Min)
  1. Create directory:
    mkdir docker-volume-test
    cd docker-volume-test
    vi hello.py
    📄 hello.py
    #!/usr/bin/env python3
    from datetime import datetime
    from netaddr import IPNetwork

    print("=== Network Analysis Tool ===")
    print(f"Executed: {datetime.now()}")
    print()
    
    subnet = "10.100.0.0/29"
    print(f"Subnet: {subnet}")
    print("Usable IPs:")
    for ip in IPNetwork(subnet).iter_hosts():
        print(f"  - {ip}")
  2. Run with Bind Mount:
    docker run --rm -v ${PWD}:/workspace netzwerk-python:1.0 python /workspace/hello.py
  3. Develop interactively:
    # Start container, directory mounted
    docker run -it --rm -v ${PWD}:/workspace netzwerk-python:1.0 bash
    
    # Inside the container: edit and test script
    cd /workspace
    python hello.py
    
    # Changes are visible on the host!
    exit
  4. Named Volume for persistent data:
    # Create volume
    docker volume create test-data
    
    # Write data into it
    docker run --rm -v test-data:/data python:3.11 bash -c "echo 'Persistent!' > /data/test.txt"
    
    # Read data (in a new container!)
    docker run --rm -v test-data:/data python:3.11 cat /data/test.txt
    
    # Clean up volume
    docker volume rm test-data

Part 6: Ansible in a Container

Running Ansible in a container brings enormous advantages: consistent versions, no dependency conflicts, and perfect for CI/CD pipelines.

Ansible Container Dockerfile

📄 Dockerfile.ansible
# Ansible container for NDFC/network automation
FROM python:3.11-slim

# Labels for maintainability
LABEL maintainer="netzwerk-team@firma.de"
LABEL version="1.0"
LABEL description="Ansible with Cisco DCNM/NDFC collections"

# Environment variables
ENV ANSIBLE_HOST_KEY_CHECKING=False
ENV ANSIBLE_RETRY_FILES_ENABLED=False
ENV PYTHONUNBUFFERED=1

# System dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    openssh-client \
    sshpass \
    git \
    && rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip install --no-cache-dir \
    ansible-core==2.15.* \
    ansible-pylibssh \
    jmespath \
    netaddr \
    requests \
    paramiko

# Install Ansible collections
RUN ansible-galaxy collection install \
    cisco.dcnm \
    ansible.netcommon \
    ansible.utils

# Working directory
WORKDIR /ansible

# Show the Ansible version on start
CMD ["ansible", "--version"]

Build and Use the Container

# Build image
docker build -t ansible-ndfc:1.0 -f Dockerfile.ansible .

# Check Ansible version
docker run --rm ansible-ndfc:1.0

# Run playbook (with mounted files)
docker run --rm \
    -v ${PWD}:/ansible \
    -v ${HOME}/.ssh:/root/.ssh:ro \
    ansible-ndfc:1.0 \
    ansible-playbook -i inventory.yml playbook.yml

# Interactive for testing
docker run -it --rm \
    -v ${PWD}:/ansible \
    ansible-ndfc:1.0 bash
🔧 Exercise 5: Run Ansible in a Container (~15 Min)
  1. Create Ansible container directory:
    mkdir docker-ansible
    cd docker-ansible
  2. Create Dockerfile:
    vi Dockerfile
    📄 Dockerfile
    FROM python:3.11-slim
    
    ENV ANSIBLE_HOST_KEY_CHECKING=False
    ENV PYTHONUNBUFFERED=1
    
    RUN pip install --no-cache-dir \
        ansible-core==2.15.* \
        jmespath \
        netaddr
    
    RUN ansible-galaxy collection install ansible.utils
    
    WORKDIR /ansible
    CMD ["ansible", "--version"]
    docker build -t ansible-test:1.0 .
  3. Create test playbook:
    vi playbook.yml
    📄 playbook.yml
    ---
    - name: Container test playbook
      hosts: localhost
      connection: local
      gather_facts: yes

      vars:
        vlans:
          - id: 100
            name: SERVERS
          - id: 200
            name: CLIENTS
          - id: 300
            name: MANAGEMENT

      tasks:
        - name: Show system information
          ansible.builtin.debug:
            msg: |
              Ansible is running in a container!
              Hostname: {{ ansible_hostname }}
              Python: {{ ansible_python_version }}

        - name: Generate VLAN configuration
          ansible.builtin.debug:
            msg: "VLAN {{ item.id }}: {{ item.name }}"
          loop: "{{ vlans }}"

        - name: Calculate gateway addresses
          ansible.builtin.debug:
            msg: "Gateway for 10.{{ item.id }}.0.0/24: 10.{{ item.id }}.0.1"
          loop: "{{ vlans }}"
  4. Run playbook in container:
    docker run --rm -v ${PWD}:/ansible ansible-test:1.0 ansible-playbook playbook.yml
  5. Interactive Ansible shell:
    docker run -it --rm -v ${PWD}:/ansible ansible-test:1.0 bash
    
    # Inside the container:
    ansible localhost -m debug -a "msg='Hello from Container!'"
    ansible localhost -m setup | head -50
    exit

Part 7: Docker Compose

Docker Compose is like an orchestra conductor: It starts multiple containers simultaneously and coordinates them.

Why Docker Compose?

  • Multi-Container Apps: Web server + database + cache simultaneously
  • Declarative: YAML file describes the desired state
  • Networking: Containers can reach each other by name
  • Development: Start complex setups with a single command

docker-compose.yml Structure

📄 docker-compose.yml
version: '3.8'

services:
  # Service 1: Web application
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - api

  # Service 2: API Backend
  api:
    build: ./api
    environment:
      - DATABASE_URL=postgres://db:5432/app
    depends_on:
      - db

  # Service 3: Database
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Compose Commands

# Start all services
docker compose up

# Start in the background
docker compose up -d

# Show logs
docker compose logs -f

# Check status
docker compose ps

# Connect to a service
docker compose exec web sh

# Stop everything
docker compose down

# Stop and delete volumes
docker compose down -v

Network Automation with Compose

📄 docker-compose.yml
version: '3.8'

services:
  # Ansible Automation Container
  ansible:
    build: .
    volumes:
      - ./playbooks:/ansible/playbooks
      - ./inventory:/ansible/inventory
      - ./group_vars:/ansible/group_vars
      - ${HOME}/.ssh:/root/.ssh:ro
    environment:
      - ANSIBLE_HOST_KEY_CHECKING=False
      - NDFC_HOST=${NDFC_HOST}
      - NDFC_USER=${NDFC_USER}
      - NDFC_PASSWORD=${NDFC_PASSWORD}
    working_dir: /ansible
    command: tail -f /dev/null  # Keep container alive

  # Git server for local tests (optional)
  gitea:
    image: gitea/gitea:latest
    ports:
      - "3000:3000"
    volumes:
      - gitea-data:/data

volumes:
  gitea-data:
🔧 Exercise 6: Docker Compose with Troubleshooting (~20 Min)

In this exercise, you will create a multi-container setup with Docker Compose -- but there will be a deliberate error built in! Use your troubleshooting skills to find and fix the problem.

  1. Create project directory:
    mkdir compose-demo
    cd compose-demo
  2. Create docker-compose.yml:
    vi docker-compose.yml
    📄 docker-compose.yml
    version: '3.8'
    
    services:
      # Python Network Tools
      nettools:
        image: python:3.11-slim
        volumes:
          - ./scripts:/scripts
        working_dir: /scripts
        command: tail -f /dev/null
        networks:
          - automation-net
    
      # Web UI for documentation
      docs:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - ./docs:/usr/share/nginx/html:ro
        networks:
          - automation-net
    
      # Redis for caching (example)
      cache:
        image: redis:alpine
        networks:
          - automation-net
    
    networks:
      automation-net:
        driver: bridge
  3. Create files for services:
    mkdir scripts
    mkdir docs
    vi scripts/analyze.py
    📄 scripts/analyze.py
    #!/usr/bin/env python3
    import socket
    print("=== Container Network Info ===")
    print(f"Hostname: {socket.gethostname()}")
    print(f"IP: {socket.gethostbyname(socket.gethostname())}")
    print()
    # Test: reach the other containers by service name
    for host in ['docs', 'cache']:
        try:
            ip = socket.gethostbyname(host)
            print(f"{host}: {ip} ✓")
        except socket.gaierror:
            print(f"{host}: not reachable")
    vi docs/index.html
    📄 docs/index.html
    <!DOCTYPE html>
    <html>
    <head><title>Automation Docs</title></head>
    <body>
    <h1>🚀 Network Automation</h1>
    <p>This documentation is served from a container!</p>
    <ul>
      <li>nettools: Python with network libraries</li>
      <li>docs: this Nginx page</li>
      <li>cache: Redis for caching</li>
    </ul>
    </body>
    </html>
  4. Start services:
    docker compose up -d
    💥 An error occurs here!

    Port 8080 is already in use on this machine by GitLab. Docker will output an error message that the port cannot be bound.

  5. 🔍 Troubleshooting -- Find the problem:

    Use the commands from Part 8 to diagnose the error:

    # Check status β€” which service is not running?
    docker compose ps
    
    # Check logs β€” what does the error message say?
    docker compose logs docs
    
    # Which process is using port 8080?
    ss -tlnp | grep 8080
    💡 Show Solution

    Problem: Port 8080 is already occupied by GitLab. The docs service cannot start.

    Solution: Change the port mapping in docker-compose.yml to a free port:

    # Stop services
    docker compose down
    
    # Edit docker-compose.yml: change port
    vi docker-compose.yml
    # Change "8080:80" to "8888:80"
    
    # Restart
    docker compose up -d
    
    # Now it should work!
    docker compose ps
  6. Test services (after the fix):
    # Open Web UI in browser: http://198.18.133.100:8888
    
    # Run Python script in container
    docker compose exec nettools python /scripts/analyze.py
    
    # Test Redis
    docker compose exec cache redis-cli ping
  7. Clean up:
    docker compose down
✅ Success Criterion: You have independently diagnosed the port conflict, changed the port in docker-compose.yml, and successfully accessed the web UI in the browser.

Part 8: Docker Troubleshooting

When containers do not do what they should, these commands are your best friends:

Analyzing Container Logs

# Show logs of a container
docker logs container-name

# Live logs (like tail -f)
docker logs -f container-name

# Last 100 lines
docker logs --tail 100 container-name

# With timestamps
docker logs -t container-name

# Logs since a specific time
docker logs --since 10m container-name

Inspecting Containers

# All info about a container
docker inspect container-name

# Extract specific info (IP address)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-name

# Show environment variables
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' container-name

# Show mounted volumes
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' container-name

Debugging Inside a Running Container

# Open shell in container
docker exec -it container-name bash
# Or for Alpine-based images:
docker exec -it container-name sh

# Log in as root
docker exec -it --user root container-name bash

# Show processes in container
docker exec container-name ps aux

# Check network status
docker exec container-name netstat -tlnp
docker exec container-name cat /etc/resolv.conf

Monitoring Resource Usage

docker stats shows live resource consumption of all containers
# Live statistics for all containers
docker stats

# For specific containers
docker stats container-name

# One-time (not live)
docker stats --no-stream

Common Problems and Solutions

| Problem                     | Diagnosis                    | Solution                                    |
|-----------------------------|------------------------------|---------------------------------------------|
| Container does not start    | docker logs                  | Check error message, often missing env vars |
| Port already in use         | netstat -tlnp                | Choose a different port: -p 8081:80         |
| Volume permission issue     | ls -la /mount                | chmod on host or --user                     |
| Container exits immediately | docker ps -a                 | Check CMD/ENTRYPOINT, process not running   |
| No internet in container    | docker exec ... ping 8.8.8.8 | Check Docker network or firewall            |
| Image too large             | docker history image         | Multi-stage build, use slim image           |
⚠️ Container Debugging Tips
  • Container exits immediately? Start with tail -f /dev/null as CMD
  • No shell in the container? Use docker cp to extract files
  • Network issues? Test with --network host

Part 9: Best Practices for Containers in CI/CD

Dockerfile Optimization

📄 Dockerfile (optimized)
# ✅ GOOD: Use specific version
FROM python:3.11.7-slim-bookworm

# ✅ GOOD: Combined RUN commands (fewer layers)
RUN apt-get update && apt-get install -y --no-install-recommends \
    openssh-client \
    && rm -rf /var/lib/apt/lists/*

# ✅ GOOD: Requirements first (leverage cache)
COPY requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# ✅ GOOD: Code last (changes often)
COPY . /app

# ✅ GOOD: Non-root user
RUN useradd -m appuser
USER appuser

# ✅ GOOD: Define HEALTHCHECK
HEALTHCHECK --interval=30s --timeout=3s \
    CMD curl -f http://localhost:8080/health || exit 1

Reducing Image Size

📄 Multi-Stage Build
# Stage 1: Build
FROM python:3.11 AS builder

COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Stage 2: Runtime (slim)
FROM python:3.11-slim

COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*

COPY app/ /app/
CMD ["python", "/app/main.py"]

Security Best Practices

✅ Do

  • Use specific image tags
  • Use non-root user
  • Secrets as env vars (not in image)
  • Update images regularly
  • Minimal base images (slim, alpine)
  • Use .dockerignore

❌ Avoid

  • latest tag in production
  • Root in container
  • Passwords in Dockerfile
  • Outdated base images
  • Installing unnecessary packages
  • SSH in container

Why Docker in CI/CD Pipelines?

Imagine: An Ansible playbook runs perfectly on a colleague's laptop, but fails on the CI/CD server. Why? Different Python version, missing libraries, different Ansible collections. Docker in pipelines solves exactly this problem.

🎯 The Core Benefits

🔄 Defined State on Every Run

Every pipeline run starts with a fresh container from the same image. No "leftovers" from previous runs -- no "it works on my machine".

📌 Environment Versioning

The Dockerfile is the documentation of the environment. Python 3.11, Ansible 2.15, cisco.dcnm Collection 3.5 -- all recorded and reproducible.

🧪 Identical Environment Everywhere

Developer laptop, staging, production -- the same container everywhere. What works locally also works in the pipeline.

🚀 Fast, Clean Pipelines

No pip install on every run -- everything is already in the image. Pipeline starts in seconds instead of minutes.

🔒 Isolation and Security

Every job runs in isolation. A faulty playbook cannot damage the CI/CD server -- the container is simply deleted.

⏪ Easy Rollback

New Ansible update breaks something? Simply set the image tag back to the previous version. Immediately operational again.

💡 Practical Example: Without vs. With Docker
❌ Without Docker:

Monday: Pipeline runs with Ansible 2.15.3. Wednesday: Server update installs Ansible 2.16.0. Thursday: Pipeline breaks due to breaking changes in the new version. Nobody understands why -- "it was working just yesterday".

✅ With Docker:

The image ansible-ndfc:1.0 always contains exactly Ansible 2.15.3. Regardless of server updates -- the container is not affected. Only when you consciously build a new image ansible-ndfc:1.1 does the environment change.
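In practice, the pin is a single line in the pipeline definition. The sketch below reuses the workshop registry address from earlier; the job name is hypothetical:

```yaml
# Sketch: pin the job to an exact image tag.
# A rollback is a one-line change from :1.1 back to :1.0.
run-ansible:
  image: 198.18.133.100:5050/workshop/ansible-ndfc:1.0   # never :latest
  script:
    - ansible-playbook -i inventory.yml playbook.yml
```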

CI/CD Integration (GitLab CI)

📄 .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG

run-ansible:
  stage: deploy
  image: $IMAGE_TAG
  script:
    - ansible-playbook -i inventory.yml playbook.yml
  only:
    - main

Do Not Forget .dockerignore

📄 .dockerignore
# Git
.git
.gitignore

# IDE
.vscode
.idea
*.swp

# Python
__pycache__
*.pyc
.pytest_cache
.venv
venv

# Secrets (NEVER include in the image!)
*.pem
*.key
.env
secrets/

# Build artifacts
*.log
.coverage
htmlcov/

❓ Quiz: Test Your Docker Knowledge (5 Min)

Try to answer the questions without scrolling back.

1. What is the main difference between a container and a VM?

Show Answer

A VM virtualizes the complete hardware including its own operating system (hypervisor level). A container shares the kernel of the host operating system and only isolates the process. This makes containers significantly lighter (MB instead of GB) and they start in seconds instead of minutes.

2. What is the difference between a Docker image and a container?

Show Answer

An image is a read-only template (like a blueprint). A container is a running instance of an image with an additional writable layer. Any number of containers can be started from a single image.

3. Why should you use Docker in CI/CD pipelines?

Show Answer

Docker ensures a defined state on every pipeline run. Every job starts with a fresh container -- same versions, same dependencies, no leftovers. What works locally also works in the pipeline.

4. What does the command docker run -v ${PWD}:/workspace python:3.11 bash do?

Show Answer

It starts a Python 3.11 container interactively with Bash and mounts the current directory ($PWD) as a Bind Mount under /workspace in the container. Changes to files are visible both in the container and on the host.

5. What is Docker Compose and when do you use it?

Show Answer

Docker Compose is a tool for defining and starting multi-container applications with a YAML file. Instead of running multiple docker run commands, you describe all services, networks, and volumes declaratively and start everything with docker compose up.

Summary

✅ What You Have Learned

  • β˜‘οΈ Containers vs. VMs: Differences in architecture and application
  • β˜‘οΈ Docker architecture: Daemon, CLI, images, containers, registry
  • β˜‘οΈ Writing and optimizing Dockerfiles
  • β˜‘οΈ Using volumes for persistent data
  • β˜‘οΈ Docker Compose for multi-container setups
  • β˜‘οΈ Network tools (Ansible, Python, netmiko) in containers
  • β˜‘οΈ Troubleshooting: logs, exec, inspect
  • β˜‘οΈ Best practices for CI/CD integration

📋 Cheat Sheet: Docker Commands

| Command                     | Purpose             |
|-----------------------------|---------------------|
| docker pull <image>         | Download image      |
| docker run -it <image>      | Start interactively |
| docker run -d <image>       | Run in background   |
| docker run -v <src>:<dst>   | Mount volume        |
| docker run -p 8080:80       | Forward port        |
| docker ps                   | Running containers  |
| docker ps -a                | All containers      |
| docker logs <name>          | Show logs           |
| docker exec -it <name> bash | Open shell          |
| docker inspect <name>       | Show details        |
| docker stop <name>          | Stop                |
| docker rm <name>            | Delete              |
| docker images               | Local images        |
| docker build -t <name> .    | Build image         |
| docker compose up -d        | Start Compose       |
| docker compose down         | Stop Compose        |
💡 Next Steps

In the next module, you will learn how containers are used in CI/CD pipelines. There we will build automated workflows that use Ansible containers to deploy network configurations.