diff --git a/projects/lab_setup/atl_home_network_2019.jpg b/projects/homelab/atl_home_network_2019.jpg
similarity index 100%
rename from projects/lab_setup/atl_home_network_2019.jpg
rename to projects/homelab/atl_home_network_2019.jpg
diff --git a/projects/homelab/homelab.md b/projects/homelab/homelab.md
new file mode 100644
index 0000000..1213da3
--- /dev/null
+++ b/projects/homelab/homelab.md
@@ -0,0 +1,166 @@
+---
+title: Homelab Infrastructure
+description: This document outlines the internal infrastructure of my homelab.
+author: wompmacho
+date: 2026-03-27
+tags:
+- homelab
+- infrastructure
+- networking
+- virtualization
+---
+
+# Homelab Infrastructure
+
+This document outlines the internal infrastructure of the **wompmacho** homelab. The lab is built on a high-speed **10GbE backbone** and utilizes a hybrid architecture of dedicated NAS storage, Proxmox virtualization, and containerized services for media, self-hosting, and development.
+
+[TOC]
+
+## Physical and hardware registry
+
+### Compute and virtualization
+
+| Node Name | Hardware | OS | Primary Role |
+| --------------------------------- | -------------------------------------------------------- | ---------------- | --------------------------------------- |
+| **router** (`10.0.0.1`) | GMKtec M5 Plus, Ryzen 7 5825U, 32GB RAM, Dual NIC 2.5GbE | OPNsense 25.1 | Routing, Firewall, VPN |
+| **truenas** (`10.0.0.2`) | Core i7-7700K, 32GB RAM, Broadcom SAS 3008 (SAS 9300-8i) | TrueNAS-25.04.1 | Primary Storage (10GbE), Media Apps |
+| **laptop-proxmox** (`10.0.0.142`) | Ryzen 7 5800H, 64GB RAM, 1TB WD BLACK + 500GB SSD | Proxmox VE 8.4.1 | Virtualization Host (2.5GbE) |
+| **game-pc** | Core i9-13900K, 64GB DDR5 6400, RTX 4080, Z790-Creator | Windows 11 | High-end Gaming / AI Inference (Ollama) |
+
+### Networking hardware
+
+* **Switch**: NICGIGA 8-Port 10G Unmanaged Switch (10GbE Base-T).
+* **WiFi**: Linksys WiFi in bridge mode.
+* **Modem**: Comcast gateway (Bridged mode).
+* **Camera**: Amcrest IP Camera (WiFi) - IP: `10.0.0.194`.
+
+### Power and environment
+
+* **UPS**: CyberPower CP1500PFCLCD (1500VA/1000W, Sine Wave).
+* **Smart Control**: TP-Link Tapo P115 Smart Plugs (15A/1800W Max).
+
+### Detailed hardware specifications
+
+#### Storage node (`truenas`)
+* **CPU**: Intel Core i7-7700K @ 4.20GHz
+* **Memory**: 32GB RAM
+* **HBA Controller**: Broadcom SAS 3008 (SAS 9300-8i equivalent) PCIe 3.0 X8, 2x Mini SAS SFF-8643
+* **Cables**: Sonilco Mini SAS HDD SFF-8643 to 4 SFF-8482 with 15-pin Power Port Cord
+* **Drives**: 10x Seagate Enterprise Capacity 3.5 HDD (ST6000NM0034), 6TB 7.2K RPM SAS 12Gb/s 128MB Cache
+
+#### Virtualization node (`laptop-proxmox`)
+* **CPU**: AMD Ryzen 7 5800H (8 Cores, 16 Threads)
+* **Memory**: 64GB Crucial RAM Kit (2x32GB) DDR4 3200MHz CL22 (CT2K32G4SFD832A)
+* **Storage**: 1TB WD_BLACK NVMe SSD (VM disks), 500GB SSD (Boot disk)
+
+#### Router node (`router`)
+* **Model**: GMKtec M5 Plus Gaming Mini PC
+* **CPU**: AMD Ryzen 7 5825U with Radeon Graphics (8 cores, 16 threads)
+* **Memory**: 32GB RAM
+* **Storage**: 1TB SSD
+
+#### Workstation / Gaming (`game-pc`)
+* **CPU**: Intel Core i9-13900K (24 cores: 8 P-cores + 16 E-cores)
+* **Cooler**: Noctua NH-D15 chromax.Black Dual-Tower CPU Cooler
+* **Motherboard**: ASUS ProArt Z790-Creator WiFi 6E LGA 1700
+* **Memory**: 64GB G.Skill Trident Z5 RGB Series (2 x 32GB) DDR5 6400 CL32-39-39-102 1.40V (F5-6400J3239G32GX2-TZ5RK)
+* **GPU**: ZOTAC Gaming GeForce RTX 4080 16GB AMP Extreme AIRO (ZT-D40810B-10P)
+* **Storage**: 1TB WD_BLACK SN770 NVMe Gaming SSD (WDS100T3X0E)
+* **Power Supply**: Corsair RM1000x (2021) Fully Modular ATX 80 PLUS Gold
+
+#### Networking and power peripherals
+* **Switch**: NICGIGA 8-Port 10G Ethernet Switch Unmanaged (8x 10Gb Base-T Ports)
+* **UPS**: CyberPower CP1500PFCLCD PFC Sinewave UPS Battery Backup (1500VA/1000W)
+* **Smart Plugs**: TP-Link Tapo P115 Smart Plug Wi-Fi Mini (15A/1800W Max)
+
+## Networking architecture
+
+### Logical structure
+
+* **LAN Subnet**: `10.0.0.0/16`
+* **Default Gateway**: `10.0.0.1` (OPNsense)
+* **Primary DNS**: `10.0.0.11` (Pi-hole)
+
+### VPN
+
+* **Tunnel Subnet**: `10.10.10.0/24`
+* **Phone Peer**: `10.10.10.3/32`
+
+## Storage infrastructure
+
+### Pool configuration
+
+* **Topology**: 1 x RAIDZ2 | 10-wide | 6TB SAS Drives.
+* **Drives**: Seagate Enterprise Capacity ST6000NM0034 (6TB 7.2K RPM SAS 12Gb/s).
+* **HBA**: Broadcom SAS 3008 (SAS 9300-8i equivalent) with Mini SAS SFF-8643 to 4 SFF-8482 cables.
+* **Capacity**: ~37.27 TiB Usable.
+
+## Virtualization cluster
+
+The Proxmox virtualization host (`laptop-proxmox`) is an entirely separate physical node from the TrueNAS storage server. They communicate with each other primarily over the 10GbE backbone switch.
+
+### Proxmox node (`laptop-proxmox` - `10.0.0.142`)
+
+| ID | Type | Hostname | IP | Role |
+| --- | ---- | --------------- | ---------- | ------------------------------- |
+| - | LXC | **pihole** | 10.0.0.11 | DNS Sinkhole / Local DNS |
+| - | VM | **docker** | 10.0.0.190 | Main Docker Host (Ubuntu 24.04) |
+| - | VM | **pterodactyl** | 10.0.0.110 | Game Server Panel (Debian) |
+
+## Docker services
+
+These services run on the main Docker Host VM (`10.0.0.190`) and are proxied via Nginx Proxy Manager (SSL via Cloudflare).
+
+| Container Name | Mapped Ports | Access | Description / Role |
+| ----------------------- | ---------------------- | ------------ | ------------------------------------------------------------------------- |
+| **nginx-proxy-manager** | 80, 81, 443 | Internal/VPN | Reverse proxy for all internal and external domains |
+| **portainer** | 8000, 9000, 9001, 9443 | Internal/VPN | Docker container management GUI |
+| **cloudflare-ddns** | - | Internal/VPN | Automatically updates the dynamic WAN IP in Cloudflare DNS |
+| **immich_server** | 2283 | Public | Photo/Video backup and gallery (`immich.wompmacho.com`) |
+| **immich_postgres** | 5432 (Internal) | Internal/VPN | Database for Immich |
+| **immich_redis** | 6379 (Internal) | Internal/VPN | Cache for Immich |
+| **vaultwarden** | 9998, 9999 | Public | Self-hosted Bitwarden password manager (`vaultwarden.wompmacho.com`) |
+| **gitea** | 222, 3001 | Public | Internal Git repository host (`git.wompmacho.com`) |
+| **gitea-db-1** | 5432 (Internal) | Internal/VPN | PostgreSQL Database for Gitea |
+| **gitea_runner** | - | Internal/VPN | CI/CD Action Runner for Gitea pipelines |
+| **frigate** | 5000, 8554, 8555, 8971 | Public | AI NVR actively recording the Amcrest IP camera (`frigate.wompmacho.com`) |
+| **homepage** | 7676 | Internal/VPN | Dashboard for navigation (`http://homepage/`) |
+| **docs-public** | 9895 | Public | Nginx serving public Hugo documentation (`wiki.wompmacho.com`) |
+| **docs-private** | 9897 | Internal/VPN | Nginx serving private Hugo documentation (`private`) |
+| **paperless-ngx** | 3003 | Internal/VPN | Document management system (`http://paperless/`) |
+| **sure** | 3006 | Internal/VPN | Self-hosted shared finance tracking application (`http://sure/`) |
+| **audiobookshelf** | 13378 | Public | Audiobook and podcast server (`audiobookshelf.wompmacho.com`) |
+| **webtop** | 7978, 7979 | Public | Browser-based desktop environment (`webtop.wompmacho.com`) |
+| **open-webui** | 3007 | Internal/VPN | ChatGPT-like web interface connected to Ollama LLMs (`http://gemma/`) |
+| **linkstack** | 80, 8190 | Public | Personal link landing page |
+| **torrent** | 8181, 8999 | Internal/VPN | Torrent download client (`http://torrent/`) |
+| **dozzle** | 4343 | Internal/VPN | Real-time Docker log viewer (`http://dozzle/`) |
+
+## Media stack
+
+These services are hosted on the TrueNAS node (`truenas`) and proxied via the Docker VM (`10.0.0.190`).
+
+| Service | Upstream Port | Description |
+| -------------- | ------------- | ------------------------------------------ |
+| **Sonarr** | 30027 | TV Show Management |
+| **Radarr** | 30025 | Movie Management |
+| **Lidarr** | 30014 | Music Management |
+| **Readarr** | 30045 | Book Management |
+| **Prowlarr** | 30050 | Indexer Management |
+| **Bazarr** | 30046 | Subtitle Management |
+| **Jellyfin** | 30013 | Media Streaming Server |
+| **Jellyseerr** | 30042 | Media Requests (`jellyseer.wompmacho.com`) |
+
+## Self-hosted AI infrastructure
+
+The lab includes a distributed self-hosted AI architecture utilizing the high-speed local network:
+
+* **Compute Backend**: The **game-pc** (`10.0.0.109`) runs **Ollama**, utilizing the RTX 4080 GPU to serve large language models (e.g., `gemma4:26b`, `gemma4:e4b`) over port `11434`.
+* **Web Interface**: The **open-webui** container runs on the Docker VM (`10.0.0.190`), providing a ChatGPT-like RAG interface for general use, mapping `/srv/open-webui` for persistent chat and vector databases.
+* **Developer Integration**: VS Code instances (like `code-server` running directly on the Proxmox host) utilize the **Continue.dev** extension configured with MCP (Model Context Protocol) to execute autonomous terminal commands via the remote Ollama models.
+
+## Security and maintenance
+
+* **SSL/TLS**: Managed via Nginx Proxy Manager with Cloudflare DNS challenge.
+* **Firewall**: OPNsense handles all inter-VLAN and external routing.
+* **Monitoring**: Portainer for container health; UPS for power stability.
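+
+## Example: open-webui service definition
+
+As a concrete illustration of how the Docker services above are wired together, here is a minimal Compose sketch for the **open-webui** container. The image tag, the internal port `8080`, and the `OLLAMA_BASE_URL` variable name are assumptions based on upstream open-webui defaults, not copied from the live stack:
+
+```yaml
+services:
+  open-webui:
+    image: ghcr.io/open-webui/open-webui:main     # assumed upstream image
+    container_name: open-webui
+    ports:
+      - "3007:8080"                               # 3007 per the table above; 8080 is the assumed internal port
+    environment:
+      - OLLAMA_BASE_URL=http://10.0.0.109:11434   # Ollama backend on game-pc
+    volumes:
+      - /srv/open-webui:/app/backend/data         # persistent chats and vector databases
+    restart: unless-stopped
+```
+
+Nginx Proxy Manager then points the internal hostname (`http://gemma/`) at port `3007` on the Docker VM.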
\ No newline at end of file
diff --git a/projects/lab_setup/index.md b/projects/homelab/homelab_diagrams.md
similarity index 96%
rename from projects/lab_setup/index.md
rename to projects/homelab/homelab_diagrams.md
index 520b41e..14f96b2 100644
--- a/projects/lab_setup/index.md
+++ b/projects/homelab/homelab_diagrams.md
@@ -1,5 +1,5 @@
 ---
-title: Lab Setup
+title: Lab Diagrams
 description: My Home Network Overview
 showHero: false
 showEditURL: true
diff --git a/projects/lab_setup/nc_home_network_diagram_white_background_2020.jpg b/projects/homelab/nc_home_network_diagram_white_background_2020.jpg
similarity index 100%
rename from projects/lab_setup/nc_home_network_diagram_white_background_2020.jpg
rename to projects/homelab/nc_home_network_diagram_white_background_2020.jpg
diff --git a/projects/lab_setup/sc_apartment_network_diagram_2024.png b/projects/homelab/sc_apartment_network_diagram_2024.png
similarity index 100%
rename from projects/lab_setup/sc_apartment_network_diagram_2024.png
rename to projects/homelab/sc_apartment_network_diagram_2024.png
diff --git a/projects/lab_setup/sc_apartment_network_diagram_nov_2023.png b/projects/homelab/sc_apartment_network_diagram_nov_2023.png
similarity index 100%
rename from projects/lab_setup/sc_apartment_network_diagram_nov_2023.png
rename to projects/homelab/sc_apartment_network_diagram_nov_2023.png
diff --git a/projects/proxmox/index.md b/projects/proxmox/index.md
new file mode 100644
index 0000000..7c9029d
--- /dev/null
+++ b/projects/proxmox/index.md
@@ -0,0 +1,96 @@
+---
+title: Proxmox
+description: Bare-metal Hypervisor Virtualization Platform
+showHero: false
+author: wompmacho
+date: '2026-04-11'
+lastmod: '2026-04-11'
+tags: ['virtualization', 'self-hosted', 'linux', 'kvm', 'lxc']
+---
+
+## What is Proxmox VE?
+
+Proxmox Virtual Environment (VE) is a powerful, open-source bare-metal hypervisor and virtualization management platform.
It integrates two virtualization technologies—Kernel-based Virtual Machine (KVM) for virtual machines and Linux Containers (LXC) for lightweight container-based virtualization—into a single, easy-to-manage solution with a web-based interface.
+
+### Pros and Cons
+
+#### Pros
+
+- **Open-Source and Free:** Proxmox VE is completely free to download and use, making it a cost-effective solution for both home labs and enterprise environments.
+- **Integrated Solution:** It combines KVM and LXC, offering the flexibility to run full virtual machines or lightweight containers on the same host.
+- **Web-Based Management:** The intuitive web interface allows for easy management of VMs, containers, storage, and networking without needing to use the command line for most tasks.
+- **Rich Feature Set:** It includes enterprise-grade features like high availability (HA) clustering, live migration, software-defined storage (like Ceph and ZFS), and robust backup/restore capabilities out of the box.
+
+#### Cons
+
+- **Learning Curve:** For beginners, the initial setup and understanding of its advanced networking and storage options can have a steeper learning curve compared to some commercial alternatives.
+- **Community-Based Support:** While the community support is strong, professional, enterprise-level support requires a paid subscription.
+- **Hardware Compatibility:** While it supports a wide range of hardware, specific or very new components might lack immediate driver support.
+
+---
+
+## When choosing a CPU for a VM...
+
+Currently I am running `Proxmox > Ubuntu VM > code-server` for a nice CitC (client in the cloud) style interface I can use to access my projects, documentation, and code from anywhere. I ran into issues when I wanted to integrate the Gemini Code Assist extension into my code-server instance.
+
+*Turns out*: in version v2.56 the extension switched over to a more modern CPU instruction set in order to optimize for modern AI-assisted workflows.
Older default CPU architectures (*like the one used by my Ubuntu VM at the time*) are missing some newer instruction sets that are required for the extension to run.
+
+> [!NOTE] I struggled with this for a few weeks until I finally dug a bit deeper, aided by Gemini `xD`
+
+This is generally only a problem when you are creating a VM for the first time. So when making a VM, consider its uses and whether it would make sense to move away from the default options (unlike what I did). Luckily, the CPU type is interchangeable on Proxmox VMs and does not require any sort of reinstallation like it would on a normal OS.
+
+### Gemini Code Assist on a VM
+
+The "The Gemini Code Assist server crashed 5 times" error can occur when using code-server or VS Code. This crash, identified by the SIGILL (Illegal Instruction) signal, is usually due to a hardware mismatch.
+
+#### The Problem
+
+This is caused by a modern instruction set running on "generic" virtual hardware. Starting with version 2.56, the Gemini Code Assist server binary needs the `AVX` (Advanced Vector Extensions) instruction set. Proxmox often sets a VM's CPU type to `kvm64`, which hides these instructions for compatibility. When the extension tries to run an AVX instruction on a CPU that doesn't "have" it, the process crashes.
+
+#### The Solution
+
+- [ ] Step 1: Diagnosing the CPU:
+
+    A command can check for the required instruction flags. In the Ubuntu terminal, run:
+
+    ```sh
+    lscpu | grep -i 'avx\|aes\|pclmul'
+    ```
+
+    The result: if the output shows `aes` but `avx` and `pclmul` are missing, the virtual processor is too "basic".
+
+- [ ] Step 2: The Immediate Fix (Downgrade)
+
+    To fix this, roll back to a version before these requirements were enforced:
+
+    1. Go to the Extensions tab in VS Code.
+    2. Click the gear icon for Gemini Code Assist and select "Install Another Version...".
+    3. Choose v2.55.x or earlier.
+    4. Important: Uncheck "Auto Update" to prevent it from breaking again.
- [ ] Step 3: The Long-Term Fix (Proxmox CPU Passthrough)
+
+    The best fix is to expose the physical CPU's features to the VM. No reinstallation is required.
+    How to change to the "host" processor type in Proxmox:
+
+    1. Shut down the Ubuntu VM.
+    2. Log into your Proxmox Web UI.
+    3. Select the VM > Hardware > Processors.
+    4. Double-click Type and change it from Default (kvm64) to host.
+    5. Restart the VM.
+
+### Why use "Host"?
+
+Setting the type to host passes the physical CPU's features—including AVX—directly to your Ubuntu instance. This fixes the Gemini crash and can improve performance.
+
+- In host mode, the VM executes code directly on the physical hardware.
+- This results in lower CPU latency and better performance in high-demand applications like databases, compilation (GCC/Clang), and web servers.
+- Modern software often relies on specific "shortcuts" built into recent CPUs.
+  - AVX/AVX2/AVX-512: These are important for math-heavy tasks and AI.
+  - AES-NI: Speeds up encryption, which makes SSH, VPNs, and HTTPS faster.
+  - PCLMULQDQ: Speeds up data integrity checks and modern security protocols.
+- The Linux scheduler can more intelligently place tasks on the right cores.
+- The VM can sometimes access hardware-level performance counters, which is vital if you are doing any low-level debugging or performance profiling.
+- If you want to run Docker with specialized isolation or even run a VM inside a VM (Nested Virtualization), host mode is usually the most stable way to pass through the necessary "VMX" (Intel) or "SVM" (AMD) flags.
+- Modern CPUs have hardware-level protections (like Execute Disable Bit or SMEP/SMAP) that protect against memory injection attacks. Generic CPU models often disable these to ensure the VM can boot on any old server; host mode enables them fully.
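+
+The Web UI steps from Step 3 can also be performed from the Proxmox host shell with the `qm` tool. This is a sketch of the same configuration change; `101` is a placeholder VM ID, so substitute your own:
+
+```sh
+# Apply the CPU type change from the Proxmox host shell.
+qm shutdown 101        # stop the guest cleanly
+qm set 101 --cpu host  # expose the physical CPU's feature flags to the VM
+qm start 101
+```
+
+After the VM boots back up, the `lscpu` check from Step 1 should now list `avx` and `pclmul` among the flags.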