wiki/index.json
2026-03-12 05:34:29 +00:00

[{"content":"","date":"8 March 2026","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":" Liberal is bad. # Wait, wasn\u0026rsquo;t this group touting to be \u0026ldquo;liberal\u0026rdquo; once? Tell me more about why Liberal is bad. You know what… that person looks funny and bothers me and calls themself a liberal so I dont wanna be in that category anymore. No wait, actually lets just demonize a group because fox news says so. NO WAIT, let\u0026rsquo;s not use words correctly because words, logic, facts\u0026hellip; These things don\u0026rsquo;t matter.\nquit being a fucking sheep # this sums up my thoughts on the matter\nYour browser does not support the video tag. ","date":"8 March 2026","externalUrl":null,"permalink":"/posts/liberal/","section":"Posts","summary":"Liberal is bad. # Wait, wasnt this group touting to be “liberal” once? Tell me more about why Liberal is bad. You know what… that person looks funny and bothers me and calls themself a liberal so I dont wanna be in that category anymore. No wait, actually lets just demonize a group because fox news says so. NO WAIT, lets not use words correctly because words, logic, facts… These things dont matter.\n","title":"Liberal","type":"posts"},{"content":"","date":"8 March 2026","externalUrl":null,"permalink":"/authors/wompmacho/","section":"Authors","summary":"","title":"Wompmacho","type":"authors"},{"content":"","date":"31 December 2025","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":" What is vaultwarden? 
# vaultwarden is an alternative server implementation of the Bitwarden Client API, written in Rust and compatible with official Bitwarden clients, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.\nDocker Compose Example # # vaultwarden -- https://github.com/dani-garcia/vaultwarden --- services: vaultwarden: image: vaultwarden/server:latest container_name: vaultwarden restart: unless-stopped environment: DOMAIN: \u0026#34;https://vaultwarden.wompmacho.com\u0026#34; ROCKET_PORT: 80 ROCKET_ENV: production volumes: - vaultwarden-mount:/data/ ports: - \u0026#39;9998:80\u0026#39; - \u0026#39;9999:443\u0026#39; volumes: vaultwarden-mount: name: vaultwarden-mount driver_opts: type: nfs o: addr=truenas,nolock,soft,rw device: :/mnt/store/vault/app/vaultwarden ","date":"23 February 2025","externalUrl":null,"permalink":"/projects/vaultwarden/","section":"Projects","summary":"What is vaultwarden? # vaultwarden is an alternative server implementation of the Bitwarden Client API, written in Rust and compatible with official Bitwarden clients, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.\n","title":"","type":"projects"},{"content":"","date":"23 February 2025","externalUrl":null,"permalink":"/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"The Hidden Flaw of Performance Reviews.\nGoodhart\u0026rsquo;s law Goodhart\u0026rsquo;s law is an adage often stated as, \u0026ldquo;When a measure becomes a target, it ceases to be a good measure\u0026rdquo;. 
It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 article on monetary policy in the United Kingdom:\nAny observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.\nIt was used to criticize the British Thatcher government for trying to conduct monetary policy on the basis of targets for broad and narrow money, but the law reflects a much more general phenomenon.\nIn other words, when we use a measure to reward performance, we provide an incentive to manipulate the measure in order to receive the reward. This can sometimes result in actions that actually reduce the effectiveness of the measured system while paradoxically improving the measurement of system performance.\n","date":"8 February 2025","externalUrl":null,"permalink":"/posts/performance_reviews/","section":"Posts","summary":"The Hidden Flaw of Performance Reviews.\n","title":"Performance Reviews","type":"posts"},{"content":" What is Audiobookshelf? # Audiobookshelf is a self-hosted, open-source server designed to manage and stream your personal audiobook and podcast collections. It acts as a private, web-based alternative to services like Audible, giving you full control over your media files. Key features include:\nDocker Compose Example # Audiobookshelf requires a websocket connection. # audiobookshelf - https://github.com/advplyr/audiobookshelf/blob/master/docker-compose.yml --- version: \u0026#34;3.7\u0026#34; services: audiobookshelf: container_name: audiobookshelf image: ghcr.io/advplyr/audiobookshelf:latest # ABS runs on port 13378 by default. If you want to change # the port, only change the external port, not the internal port ports: - 13378:80 volumes: # These volumes are needed to keep your library persistent # and allow media to be accessed by the ABS server. 
# The path to the left of the colon is the path on your computer, # and the path to the right of the colon is where the data is # available to ABS in Docker. # You can change these media directories or add as many as you want - /mnt/store/MediaServer/Audio_Books:/audiobooks - /mnt/store/MediaServer/podcasts:/podcasts - /mnt/store/app/audiobookshelf/metadata:/metadata # The config directory needs to be on the same physical machine # you are running ABS on - /app/audiobookshelf/config:/config restart: unless-stopped # You can use the following environment variable to run the ABS # docker container as a specific user. You will need to change # the UID and GID to the correct values for your user. #environment: # - user=1000:1000 ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/audiobookshelf/","section":"Projects","summary":"What is Audiobookshelf? # Audiobookshelf is a self-hosted, open-source server designed to manage and stream your personal audiobook and podcast collections. It acts as a private, web-based alternative to services like Audible, giving you full control over your media files. Key features include:\n","title":"audiobookshelf","type":"projects"},{"content":" What is code-server? 
It effectively turns any machine with a CPU and RAM into a fully functional cloud-based development environment.\nDocker Compose Example # # code-server -- https://hub.docker.com/r/linuxserver/code-server --- services: code-server: image: lscr.io/linuxserver/code-server:latest container_name: code-server environment: - PUID=1000 - PGID=1000 - TZ=America/New_York - PASSWORD=password #optional - HASHED_PASSWORD= #optional - SUDO_PASSWORD=password #optional - SUDO_PASSWORD_HASH= #optional - PROXY_DOMAIN=code-server.domain.com #optional - DEFAULT_WORKSPACE=/apps #optional volumes: - code-server-nfs:/config - apps:/apps ports: - 8443:8443 restart: unless-stopped volumes: code-server-nfs: name: code-server-nfs driver_opts: type: nfs o: addr=truenas,nolock,soft,rw device: :/mnt/store/vault/app/code-server apps: name: apps driver_opts: type: nfs o: addr=truenas,nolock,soft,rw device: :/mnt/store/vault/app/ ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/code-server/code-server/","section":"Projects","summary":"What is code-server? # code-server is a self-hosted instance of Visual Studio Code that runs on a remote server and is accessible directly through your web browser. It effectively turns any machine with a CPU and RAM into a fully functional cloud-based development environment.\n","title":"code-server","type":"projects"},{"content":" What is Frigate? # Frigate is a complete and local NVR designed for Home Assistant with AI object detection. 
Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.\n# frigate - https://docs.frigate.video/frigate/installation/ --- version: \u0026#34;3.9\u0026#34; services: frigate: container_name: frigate privileged: true # this may not be necessary for all setups restart: unless-stopped image: ghcr.io/blakeblackshear/frigate:stable shm_size: \u0026#34;64mb\u0026#34; # update for your cameras based on calculation above volumes: - /etc/localtime:/etc/localtime:ro - /app/frigate/config:/config - /mnt/store/app/frigate:/media/frigate - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear target: /tmp/cache tmpfs: size: 1000000000 ports: - \u0026#34;5000:5000\u0026#34; - \u0026#34;8554:8554\u0026#34; # RTSP feeds - \u0026#34;8555:8555/tcp\u0026#34; # WebRTC over tcp - \u0026#34;8555:8555/udp\u0026#34; # WebRTC over udp environment: FRIGATE_RTSP_PASSWORD: \u0026#34;password\u0026#34; This requires setting a config.yml in the /config volume.\nMy current config # I am not currently running the optimized setup for this, just testing things out.\nmqtt: enabled: false cameras: front: birdseye: order: 1 ffmpeg: inputs: - path: rtsp://USERNAME:PASSWORD@IPADDR:554/path_to_stream roles: - detect - record objects: track: - person detect: width: 1920 height: 1080 record: sync_recordings: True enabled: True retain: days: 7 mode: motion events: # Optional: Number of seconds before the event to include (default: shown below) pre_capture: 5 # Optional: Number of seconds after the event to include (default: shown below) post_capture: 5 detectors: cpu1: type: cpu num_threads: 3 # Include all cameras by default in Birdseye view birdseye: enabled: True mode: continuous width: 1280 height: 720 quality: 8 inactivity_threshold: 30 Proxy fixes # For nginx proxy - add this to advanced options for proxy host\nproxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header 
X-Forwarded-Proto $scheme; proxy_redirect off; proxy_buffering off; proxy_request_buffering off; ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/frigate/","section":"Projects","summary":"What is Frigate? # Frigate is a complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.\n","title":"frigate","type":"projects"},{"content":" What is homarr? # homarr is a nice little dashboard app that can be used to organize your homelab with a simple webpage interface. Great for Quick links, updates on weather, time, seeing if a server is down, monitoring your webcams or torrents, etc.\nNote Personally I have moved on to homepage. Looks a little nicer in my opinion. Not the biggest fan of homarr interface, Though I may try again after some updates.\nDocker Compose Example # Note For docker support, extend a volume to the docker.sock\n/var/run/docker.sock:/var/run/docker.sock # homarr - docker compose --- version: \u0026#39;3\u0026#39; services: homarr: container_name: homarr image: ghcr.io/ajnart/homarr:latest restart: unless-stopped volumes: - /app/homarr/configs:/app/data/configs - /app/homarr/icons:/app/public/icons - /var/run/docker.sock:/var/run/docker.sock ports: - \u0026#39;7575:7575\u0026#39; ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/homarr/","section":"Projects","summary":"What is homarr? # homarr is a nice little dashboard app that can be used to organize your homelab with a simple webpage interface. Great for Quick links, updates on weather, time, seeing if a server is down, monitoring your webcams or torrents, etc.\n","title":"homarr","type":"projects"},{"content":" What is homepage? 
# homepage is an open-source, highly customizable, and static site-based dashboard designed to organize your self-hosted services into a beautiful, central hub.\nUnlike other dashboards that require complex databases or heavy backend services, homepage runs as a lightweight, Docker-based container that reads a single configuration file (YAML).\nDocker Compose Example # # homepage - docker compose --- services: homepage: container_name: homepage image: ghcr.io/gethomepage/homepage:latest restart: unless-stopped volumes: - /mnt/store/app/homepage/configs:/app/config # Make sure your local config directory exists - /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations ports: - 7676:3000 ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/homepage/","section":"Projects","summary":"What is homepage? # homepage is an open-source, highly customizable, and static site-based dashboard designed to organize your self-hosted services into a beautiful, central hub.\n","title":"homepage","type":"projects"},{"content":" What is jellyfin? # Jellyfin is a media server. I like it because its simple, free, doesnt require online accounts and lets you serve up your movies, tv shows and music. Is very similar to apps like Plex and Emby. You can manage your media and auto download things like episode names, artwork etc. Has plugin support and is basically trying to be a better open source version of Plex. It has apps/support for android, google tvs, firestick, iphone etc.\nDocker Compose Example # # Jellyfin - docker compose --- services: jellyfin: container_name: jellyfin image: lscr.io/linuxserver/jellyfin:latest environment: - PUID=0 - PGID=0 - TZ=America/New_York ports: - 8096:8096 - 8920:8920 #optional https - 7359:7359/udp #optional discovery - 1900:1900/udp #optional discovery volumes: - /app/jellyfin:/config\t# config for your jellyfin - /mnt/store/:/data/store # where your media lives (movies/tv etc.) 
restart: unless-stopped Note Recommend storing the metadata \u0026amp; cache on NAS and not on the OS docker host. The files start to get LARGE for jellyfin due to the mass amount of metadata stored for media. Set this under the jellyfin general settings after jellyfin is running.\nOnce your server is running, head over to your opened port (docker_container_ip:8096) to start the setup process. When adding libraries - select the content type, set the display name and then click the FOLDERS + option. This is where you will select the path to your media that you set up in the volumes.\n","date":"4 February 2025","externalUrl":null,"permalink":"/projects/jellyfin/","section":"Projects","summary":"What is jellyfin? # Jellyfin is a media server. I like it because its simple, free, doesnt require online accounts and lets you serve up your movies, tv shows and music. Is very similar to apps like Plex and Emby. You can manage your media and auto download things like episode names, artwork etc. Has plugin support and is basically trying to be a better open source version of Plex. It has apps/support for android, google tvs, firestick, iphone etc.\n","title":"jellyfin","type":"projects"},{"content":" What is jellyseer? # Jellyseerr is a free, open-source, and highly intuitive media request management tool designed for the Jellyfin (and Plex/Emby) ecosystem. It essentially acts as a \u0026ldquo;gateway\u0026rdquo; between your users and your media server.\nDocker Compose Example # # jellyseerr - docker compose --- services: jellyseerr: image: fallenbagel/jellyseerr:latest container_name: jellyseerr environment: - LOG_LEVEL=debug - TZ=America/New_York ports: - 5055:5055 volumes: - /mnt/store/app/jellyseerr/config:/app/config restart: unless-stopped ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/jellyseer/","section":"Projects","summary":"What is jellyseer? 
# Jellyseerr is a free, open-source, and highly intuitive media request management tool designed for the Jellyfin (and Plex/Emby) ecosystem. It essentially acts as a “gateway” between your users and your media server.\n","title":"jellyseer","type":"projects"},{"content":" What is linkstacks? # Linkstacks is a nice little linktr.ee clone that allows you to set up a simple link page. It can also be expanded to add multiple users and you can host multiple people\u0026rsquo;s pages with their own user accounts and everything.\nDocker Compose Example # # Linkstacks - docker compose version: \u0026#34;3.8\u0026#34; --- services: linkstack: container_name: \u0026#39;linkstack\u0026#39; hostname: \u0026#39;linkstack\u0026#39; image: \u0026#39;linkstackorg/linkstack:latest\u0026#39; user: \u0026#39;0:0\u0026#39; environment: TZ: \u0026#39;America/New_York\u0026#39; SERVER_ADMIN: \u0026#39;SERVER_ADMIN_EMAIL\u0026#39; HTTP_SERVER_NAME: \u0026#39;HTTP_DOMAIN_NAME\u0026#39; HTTPS_SERVER_NAME: \u0026#39;HTTPS_DOMAIN_NAME\u0026#39; LOG_LEVEL: \u0026#39;info\u0026#39; PHP_MEMORY_LIMIT: \u0026#39;256M\u0026#39; UPLOAD_MAX_FILESIZE: \u0026#39;8M\u0026#39; volumes: - \u0026#39;linkstack_data:/htdocs\u0026#39; #- \u0026#39;/app/linkstack/:/htdocs\u0026#39; ports: - \u0026#39;8190:443\u0026#39; restart: unless-stopped volumes: linkstack_data: ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/linkstack/","section":"Projects","summary":"What is linkstacks? # Linkstacks is a nice little linktr.ee clone that allows you to set up a simple link page. It can also be expanded to add multiple users and you can host multiple peoples pages with their own user accounts and everything.\n","title":"linkstacks","type":"projects"},{"content":" What is mkdocs? # MkDocs is a fast, simple, and extensible static site generator geared specifically toward building project documentation. 
It relies heavily on Markdown files, which makes it incredibly accessible for developers who want to write docs as easily as they write code.\nDocker Compose Example # Running mkdocs with material theme and plugins built in.\nInfo There is some setup of folders and things that are not automatic so wont work straight out of the box.\n# mkdocs -- https://squidfunk.github.io/mkdocs-material/ version: \u0026#39;3\u0026#39; services: mkdocs: container_name: \u0026#39;mkdocs\u0026#39; restart: unless-stopped image: squidfunk/mkdocs-material environment: - PUID=1000 - PGID=1000 volumes: #- /mnt/store/app/mkdocs/:/docs - docs_nfs:/docs stdin_open: true tty: true ports: - \u0026#34;9896:8000\u0026#34; volumes: docs_nfs: name: docs_nfs driver_opts: type: nfs o: addr=truenas,nolock,soft,ro device: :/mnt/store/vault/app/mkdocs ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/mkdocs/","section":"Projects","summary":"What is mkdocs? # MkDocs is a fast, simple, and extensible static site generator geared specifically toward building project documentation. It relies heavily on Markdown files, which makes it incredibly accessible for developers who want to write docs as easily as they write code.\n","title":"mkdocs","type":"projects"},{"content":" What is nginx-proxy-manager? # Nginx-proxy-manager is a simplified GUI for handling an nginx server configuration. Nginx is a reverse proxy server.\nA reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. Nginx is a very common go-to. Nginx-proxy-manager is a nice gui version that has some built in tools, like handling SSL Certificates with Let\u0026rsquo;s Encrypt. Nginx can provide load balancing, web acceleration, security and anonymity for servers.\nPersonally I use nginx to proxy all my traffic to my dedicated servers so that I do not have to expose local hosts via port forwarding. 
This also allows me to do some extra encryption along the way and add additional security via access lists where I see fit. I can also reuse ports, which saves a lot of time for configurations.\nDocker Compose Example # # nginx-proxy-manager - docker compose --- version: \u0026#34;3.8\u0026#34; services: app: container_name: nginx-proxy-manager image: \u0026#34;jc21/nginx-proxy-manager:latest\u0026#34; restart: unless-stopped ports: - \u0026#34;80:80\u0026#34; - \u0026#34;81:81\u0026#34; - \u0026#34;443:443\u0026#34; volumes: - ./data:/data - ./letsencrypt:/etc/letsencrypt Nginx gives you that great routing to your internal networked servers. Also helps you set up your DNS both inside and outside the network. Can be a little confusing at first.\nFirst you will need a route you want to point to. In this case I will use this site.\nI want to be able to type in wiki.wompmacho.com and arrive here rather than remembering my IP address and having to set up multiple complicated port forwards in my router. Instead nginx allows me to open one port - 80 and direct all traffic to nginx. Then based on some rules I have set up I can point that traffic to my internal server. For this I will need a domain name server (DNS) to point to my external IP.\nDNS # A domain name server (DNS) allows me to make a name that can be easily looked up and point traffic to an Internet Protocol (IP) address that a server can easily understand.\nI can type in a name to a browser - my browser will ask my computers dns cache where is google.com and when not found - it will ask my router where is google.com which will then ask the dns server it is pointed to (typically your ISP) who then asks the dns server the ISP is pointing to\u0026hellip; and so on until eventually one of the DNS servers contain the information about google.com. 
Then it can retrieve the IP address of that server and send that information back down the line - adding it to its own cache as it goes so that it does not have to keep looking up this information. This will allow the browser to make requests to that server directly.\nIn order to make my dns name known so that people can find it on the internet easily, we will have to purchase the name from a host of an Authoritative DNS server. An Authoritative DNS server will not cache the info, but instead act as a primary resource of the configuration for a dns name so other dns servers can ask for that resource.\nIn this example I have purchased wompmacho.com from cloudflare, who operates as a registrar and facilitates purchasing that name from a higher authoritative registry. Allowing me to point my external IP address to this address.\nOnce I have a DNS name I can use my registrar (cloudflare) to point that name to my external IP address (my router\u0026rsquo;s IP address).\nwompmacho.com \u0026lt;\u0026gt; 175.222.222.222 Port forwarding # This traffic will then be requested from my router which should be typically set up to block incoming requests. In order to allow a request to my server hosting my site I will need to open a port (80) and allow traffic through my router\u0026rsquo;s firewall to my docker container that is hosting nginx-proxy-manager. Nginx will then redirect this again to my docker container for my site.\nA records # For my scenario my dns name is wompmacho.com but if I want to have multiple sites at my IP address I will need to be able to differentiate them. To do this I will use an A record. This allows me to split up my domain with multiple sub domains.\nwiki.wompmacho.com subdomain.wompmacho.com Setting up a proxy # This will point traffic to the same domain (wompmacho.com) but based on the sub-domain nginx will be able to direct and load balance traffic to my internal server hosting the wiki - in this case also my docker container. 
The wiki is hosted on a different port. We can point this proxy to that port.\nCloudflare DNS Proxy # An example of a dns service is Cloudflare. I switched over to cloudflare when google sold their awesome DNS. I have been loving it since the switch, there is a lot of info out there on services they offer and how to set things up. The biggest reason I switched over to cloudflare is their dns proxy. This allows my home IP to be proxied behind cloudflare services - and helps hide my servers location. This also allows me to utilize their services to block things like botnet attacks.\nSSL encryption # Secure Sockets Layer (SSL) is a security protocol that provides privacy, authentication, and integrity to Internet communications. SSL eventually evolved into Transport Layer Security (TLS). Using Nginx-proxy-manager we can connect our cloudflare DNS to our nginx server using SSL encryption. This is what that lock and https indicates on your browser - you are using a secured and verified connection to the server. This helps stop man in the middle attacks preventing people from spoofing the connection and spying on you.\nWe do this by adding a cloudflare certificate to nginx proxy manager and then setting up our proxy host to use this certificate on the SSL tab.\nNote this is only for a secure connection between nginx \u0026lt;-\u0026gt; cloudflare The details page is referring to your internal setup - or where nginx should point the dns to.\norigin server \u0026lt;-\u0026gt; nginx Use https here only if you have ssl setup on your origin server and your server is set up to accept https, otherwise you may get bad gateway 502 errors.\n","date":"4 February 2025","externalUrl":null,"permalink":"/projects/nginx-proxy-manager/","section":"Projects","summary":"What is nginx-proxy-manager? # Nginx-proxy-manager is a simplified GUI for handling an nginx server configuration. 
Nginx is a reverse proxy server.\n","title":"nginx-proxy-manager","type":"projects"},{"content":" What is Portainer? # Portainer is a lightweight, powerful container management platform that provides a graphical user interface (GUI) to manage your Docker, Docker Swarm, and Kubernetes environments. It essentially sits on top of your container runtime, allowing you to control complex infrastructure without needing to master the command line.\nPortainer provides a very easy to understand user interface for deploying other docker containers. The great thing is, Portainer is a container itself, so it should run automatically following setup and allow you a nice gui interface for your docker environment via a web browser. I particularly love its dashboard because you get a great snapshot of your running containers, can easily restart and monitor your containers, but most importantly edit and deploy docker-compose files via the \u0026ldquo;stacks\u0026rdquo; page.\nPortainer CE is free version\nInstall Guide\nCreate the volume that Portainer Server will use to store its database\ndocker volume create portainer_data Download and install the Portainer Server container\ndocker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest Verify the container is running with docker ps\nroot@server:~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES de5b28eb2fa9 portainer/portainer-ce:latest \u0026#34;/portainer\u0026#34; 2 weeks ago Up 9 days 0.0.0.0:8000-\u0026gt;8000/tcp, :::8000-\u0026gt;8000/tcp, 0.0.0.0:9443-\u0026gt;9443/tcp, :::9443-\u0026gt;9443/tcp portainer Navigate to https://HOST_IP_ADDRESS:9443 and create a user so you can log in to the Portainer web interface.\n","date":"4 February 2025","externalUrl":null,"permalink":"/projects/portainer/","section":"Projects","summary":"What is Portainer? 
# Portainer is a lightweight, powerful container management platform that provides a graphical user interface (GUI) to manage your Docker, Docker Swarm, and Kubernetes environments. It essentially sits on top of your container runtime, allowing you to control complex infrastructure without needing to master the command line.\n","title":"portainer","type":"projects"},{"content":" What is qBittorrent? # qBittorrent is an awesome, simple app that allows you to use classic QBittorrent in a docker container. I use QBittorrent because I can set it up with a VPN that only connects via the container directly. Effectively separating it from the rest of my network and allowing me to continue as normal while it is downloading. It will automatically stop the network if the VPN is not functioning correctly.\nDocker Compose Example # # qbittorrentvpn - docker compose # https://hub.docker.com/r/dyonr/qbittorrentvpn --- version: \u0026#34;2\u0026#34; services : qbittorrentvpn: container_name: qbittorrentvpn privileged: true image: dyonr/qbittorrentvpn environment : - VPN_ENABLED=true - VPN_USERNAME=VPN_USERNAME - VPN_PASSWORD=VPN_PASSWORD - LAN_NETWORK=10.0.0.0/24 - WEBUI_PORT_ENV=8080 - INCOMING_PORT_ENV=8999 ports : - 8080:8080 - 8999:8999 - 8999:8999/udp\tvolumes : - /app/QBittorrent/config:/config - /mnt/store/MediaServer/torrent:/downloads\trestart: unless-stopped To set up the VPN you will need to have an existing account with a VPN service. Username \u0026amp; Password for the vpn will be provided as a key by your vpn service. 
In my case I use Surfshark and have to go log into my account, navigate to the linux setup page and grab my generated Username key and Password key there.\nA credentials file on my docker host was generated by QBittorrent when running the first time.\n# download all available server conf sudo wget https://my.surfshark.com/vpn/api/v1/server/configurations # cp the server you want to use into config folder /app/QBittorrent/config/openvpn Once you restart your qbittorrentvpn docker container you can test your vpn service with a torrent leak test. Use the + add torrent link button to download the torrent and test that your VPN service is connected and working.\ntorrent-leak-test Magnet links # Use magnet link and item hash to avoid logins\nmagnet:?xt=urn:btih:${HASH} ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/qbittorent/","section":"Projects","summary":"What is qBittorrent? # qBittorrent is an awesome, simple app that allows you to use classic QBittorrent in a docker container. I use QBittorrent because I can set it up with a VPN that only connects via the container directly. Effectively separating it from the rest of my network and allowing me to continue as normal while it is downloading. It will automatically stop the network if the VPN is not functioning correctly.\n","title":"qBittorrent","type":"projects"},{"content":" What is uptime-kuma? # uptime-kuma is a neat little web monitoring application. Lotta dope things right out of the box, very gui / user friendly. Pretty much just add the stack, update the dir for config - and it works. 
Integrates with discord webhooks, great easy status page and dashboard.\nDocker Compose Example # # uptime-kuma - docker compose --- # https://github.com/louislam/uptime-kuma/wiki/%F0%9F%94%A7-How-to-Install version: \u0026#39;3.3\u0026#39; services: uptime-kuma: container_name: uptime-kuma image: louislam/uptime-kuma:1 volumes: - /app/uptime-kuma/data:/app/data ports: - 3001:3001 # \u0026lt;Host Port\u0026gt;:\u0026lt;Container Port\u0026gt; restart: always ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/uptime-kuma/","section":"Projects","summary":"What is uptime-kuma? # uptime-kuma is a neat little web monitoring application. Lotta dope things right out of the box, very gui / user friendly. Pretty much just add the stack, update the dir for config - and it works. Integrates with discord webhooks, great easy status page and dashboard.\n","title":"uptime-kuma","type":"projects"},{"content":" What is webtop? # webtop is an awesome mini linux env I can use as a secure remote web-client for my home network.\nDocker Compose Example # # webtop - https://docs.linuxserver.io/images/docker-webtop/#lossless-mode --- services: webtop: image: lscr.io/linuxserver/webtop:latest container_name: webtop environment: - PUID=1000 - PGID=1000 - TZ=America/New_York - TITLE=Webtop #optional volumes: - /app/webtop/data:/config ports: - 7978:3000 - 7979:3001 restart: unless-stopped ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/webtop/","section":"Projects","summary":"What is webtop? # webtop is an awesome mini linux env I can use as a secure remote web-client for my home network.\n","title":"webtop","type":"projects"},{"content":" What is wikijs? # Wiki.js is a powerful, modern, and open-source wiki application built on Node.js. 
It is designed to be the central knowledge base for your home lab or professional projects, replacing traditional, clunky wiki platforms with a sleek, intuitive interface.\nI like it because of the useful Markdown editor that lets you nicely organize links, code, etc. I can also back up the database to my NAS in nice MD files, so nothing gets lost if something is corrupted.\nDocker Compose Example # # wikijs - docker compose # https://github.com/linuxserver/docker-wikijs --- version: \u0026#34;3.8\u0026#34; services: wikijs: image: lscr.io/linuxserver/wikijs:latest container_name: wikijs environment: - PUID=0 - PGID=0 - TZ=Etc/UTC - DB_TYPE=sqlite #optional - DB_HOST= #optional - DB_PORT= #optional - DB_NAME= #optional - DB_USER= #optional - DB_PASS= #optional volumes: - /app/wiki/config:/config - /app/wiki/data:/data ports: - 3000:3000 restart: unless-stopped ","date":"4 February 2025","externalUrl":null,"permalink":"/projects/wikijs/","section":"Projects","summary":"What is wikijs? # Wiki.js is a powerful, modern, and open-source wiki application built on Node.js. It is designed to be the central knowledge base for your home lab or professional projects, replacing traditional, clunky wiki platforms with a sleek, intuitive interface.\n","title":"wikijs","type":"projects"},{"content":"Yo 👋\nIf you actually come to find this then props to ya man. Thanks for dropping by. idk if this will be worth ever doing, but this was fun for me to set up\u0026hellip; and who knows maybe this is of some use to others\u0026hellip; so fuck it \u0026ndash;\u0026gt; Enjoy.\n","date":"2 February 2025","externalUrl":null,"permalink":"/posts/firstpost/","section":"Posts","summary":"Yo 👋\nIf you actually come to find this then props to ya man. Thanks for dropping by. 
idk if this will be worth ever doing, but this was fun for me to set up… and who knows maybe this is of some use to others… so fuck it \u003e Enjoy.\n","title":"First Post","type":"posts"},{"content":"this has changed a bit\u0026hellip; I will update later\u0026hellip;\nCamera Gear # Sony Alpha a6400 Sigma 16mm f/1.4 Gonine AC-PW20 NP-FW50 Dummy Battery Elgato Cam Link 4K Pixel Desk Camera Mount Stand Quick Release Plate Camera Tripod Mount Micro HDMI to HDMI Adapter Cable Lighting # GVM 1000D RGB Led Video Light Audio # Shure SM7b - Vocal Dynamic Microphone Shure SM57-LCE Cardioid Dynamic Mic Shure A2WS-BLK - pop filter PreSonus Revelator io24 Mic Arm Desk Mount Brainwavz XL Micro Suede Memory Foam Earpads IRL Setup # Alex Tech 10ft - 1/2 inch Cord Protector SIM Card Adapter Nano Micro NDI HDMI Encoder, TBS2603SE NDI USB to DC Convert Cable Cudy N300 WiFi Unlocked 4G LTE Sony FDR-X1000V/W 4K Action Cam Backpack Shoulder Chest Strap Clip Mount Bicycle \u0026amp; Motorcycle Phone Mount Portable Charger Power Bank Game PC # WD_BLACK 2TB SN850 NVMe Corsair Vengeance LPX 32GB - Mem Corsair Carbide Series Air 740 CORSAIR Hydro Series H115i ASUS ROG STRIX GeForce GTX 1080 Intel Core i7-7700K Asus Z170-A - MOBO Peripheral # Logitech G Pro Wireless Gaming Mouse Acer Predator XB272 bmiprz 27\u0026quot; CORSAIR K70 RGB MK.2 RAPIDFIRE Speakers - PreSonus Eris Single Monitor Desk Mount - Adjustable Gas Spring Stream PC # G.SKILL TridentZ Series 16GB Elgato Stream Deck AMD YD180XBCAEWOF Ryzen 7 1800X Nvidia GeForce GTX 1080 ASUS Prime X370-Pro - MOBO ","date":"12 July 2024","externalUrl":null,"permalink":"/stream/stream_gear/","section":"Stream","summary":"this has changed a bit… I will update later…\n","title":"Gear","type":"stream"},{"content":"","date":"12 July 2024","externalUrl":null,"permalink":"/stream/","section":"Stream","summary":"","title":"Stream","type":"stream"},{"content":" Experience Special Projects Lead (DT L3) - Google October 2024 - Current HwOps Special 
Projects Lead; Resolve technical incidents and escalations by performing analysis utilizing existing data models or leveraging custom-built data infrastructure to formulate and interpret data to reach specific conclusions and next steps. Develop detailed reports and intuitive dashboards, communicating key insights for data-driven analysis. File bugs against products, documentation, and procedures by documenting desired behavior or steps to reproduce, and driving bugs to resolution. Suggest code-level resolutions for complex issues by leveraging tools, tool development and effective communication with stakeholders. Identify opportunities to build or enable solutions that improve, support or empower OMs, Site Leads \u0026 DTs to solve issues by using self-service tools and documentation. Fostered team growth through mentorship, training course facilitation, collaboration with internal training teams, and technical writing development. Data Center Technician (DT L2) - Google July 2023 - October 2024 Site Operations hardware maintenance and networking, resolving critical issues and collaborating cross-functionally to address SLO deviations. Built and led an internal escalation team for weekend/holiday support, creating resources and onboarding leaders. Developed and documented new processes, championed project documentation, and contributed to technician hiring, onboarding, and training. Mentored Googlers and facilitated training programs. Transitioned into Leader Role as a Maintenance Lead / Escalation Point of Contact. Consulting / Freelance / Helping out Family Porch Light Properties LLC Jun 2020 - Jul 2023 Developed and implemented long-term systems, development, and planning strategies, including rebranding initiatives. Served as Hiring Manager, overseeing onboarding, system administration, and policy management. Managed social media, website development/design, SEO, and marketing campaigns (including Facebook Ads). 
Utilized Google Analytics and oversaw technology/security initiatives and traditional marketing. Videography, Photography, Film Media, Drone Services Jun 2020 - Jan 2023 Tabora Farm and Winery Dec 2019 - May 2020 Managed social media, website development/design, and SEO. Oversaw tax and licensing compliance for interstate wine shipments. Work Experience Continued Operations Engineer - Twitter Inc. Apr 2015 - Sep 2017 Led site operations teams and provided on-call support for multiple data centers/POPs, consistently exceeding SLO goals. Managed, hired, onboarded, and trained operations engineers and staff. Served as a Twitter liaison and brand ambassador, reporting on emerging technologies. Provided on-site support to engineering teams, proactively monitored services, and managed projects including repairs, decommissioning, upgrades, installations, networking, and maintenance. Operations Technician (OTA/OA) - Google Nov 2012 - Mar 2015 Supported multiple sites on critical infrastructure projects, including server repairs, hardware qualifications, QA, NPI, HAT, disk sanitization, project management, decommissioning, upgrades, and backup library maintenance. Collaborated effectively to maintain Google's infrastructure and ensure operational excellence. 
Education Pennridge High School 2004 - 2009 Georgia State University - Computer Science 2012 - 2017 Skills Certificates Leadership, Mentoring UNIX / Linux / OS Networking, TCP/IP, DNS, DHCP Technical Writing \u0026 Documentation SQL, HTML, CSS, JS Java, Golang, Shell Scripting Docker, VMs, Baremetal LPIC-1 - Linux Professional Institute SUSE Certified Linux Administrator (SUSE CLA) Small Unmanned Aircraft System (Part 107) PDF Download ","date":"12 July 2024","externalUrl":null,"permalink":"/resume/","section":"HOME","summary":" Experience Special Projects Lead (DT L3) - Google October 2024 - Current HwOps Special Projects Lead; Resolve technical incidents and escalations by performing analysis utilizing existing data models or leveraging custom-built data infrastructure to formulate and interpret data to reach specific conclusions and next steps. Develop detailed reports and intuitive dashboards, communicating key insights for data-driven analysis. File bugs against products, documentation, and procedures by documenting desired behavior or steps to reproduce, and driving bugs to resolution. Suggest code-level resolutions for complex issues by leveraging tools, tool development and effective communication with stakeholders. Identify opportunities to build or enable solutions that improve, support or empower OMs, Site Leads \u0026 DTs to solve issues by using self-service tools and documentation. Fostered team growth through mentorship, training course facilitation, collaboration with internal training teams, and technical writing development. ","title":"Resume","type":"page"},{"content":" What is Pterodactyl? # Pterodactyl is a free, open-source game server management panel built with PHP, React, and Go. 
Designed with security in mind, Pterodactyl runs all game servers in isolated Docker containers while exposing a beautiful and intuitive UI to end users.\npterodactyl \u0026amp; ssl # ssl with pterodactyl is really annoying if you are using it behind a reverse proxy (nginx) - might be easier to run this on its own server so you can just use the default port 80 for web. reverse proxy is designed for normal web traffic, not game servers.\nIf you are annoying like me and wanna put things on a single server and save money\u0026hellip; here is what you can do.\nCreating SSL Certificates\nYoutube Guide\nWebserver Configuration\nNGINX Specific Configuration\nhttps://pterodactyl.io/panel/0.7/configuration.html # idk... couldn\u0026#39;t get it to work # OpenSSL Self-Signed Certificate Command: openssl req -sha256 -addext \u0026#34;subjectAltName = DNS:games.local\u0026#34; -newkey rsa:4096 -nodes -keyout privkeyselfsigned.pem -x509 -days 3650 -out fullchainselfsigned.pem # nginx-proxy-manager with cloudflare ssl cert setup # proxy side should be http # do not force ssl on cert side # go to http after getting to the site # .env file /var/www/pterodactyl/.env APP_URL=\u0026#34;http://domain\u0026#34; TRUSTED_PROXIES=* # you don\u0026#39;t have to do this - i\u0026#39;d rather not PTERODACTYL_TELEMETRY_ENABLED=false RECAPTCHA_ENABLED=false # config.yml /etc/pterodactyl/config.yml # use auto config remote: http: # nginx pterodactyl.conf /etc/nginx/sites-enabled/pterodactyl.conf # add to proxy-manager special settings proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_redirect off; proxy_buffering off; proxy_request_buffering off; sudo systemctl restart nginx \u0026amp;\u0026amp; systemctl restart wings ","date":"4 May 2024","externalUrl":null,"permalink":"/projects/pterodactyl/","section":"Projects","summary":"What is Pterodactyl? 
# Pterodactyl is a free, open-source game server management panel built with PHP, React, and Go. Designed with security in mind, Pterodactyl runs all game servers in isolated Docker containers while exposing a beautiful and intuitive UI to end users.\n","title":"pterodactyl","type":"projects"},{"content":" What is a NAS? # A Network Attached Storage (NAS) device is essentially a small, self-contained computer that\u0026rsquo;s designed solely for storing and sharing files. Think of it as your own personal cloud storage, but instead of relying on a third-party service, you own and control the hardware.\nHere\u0026rsquo;s why someone might use a NAS:\nCentralized Storage: A NAS provides a single location to store all your files - documents, photos, videos, music, etc. This makes it easy to access your data from any device on your network. File Sharing: NAS devices make it simple to share files between multiple users and devices. This is great for families who want to share photos and videos, or for small businesses who need to collaborate on documents. Backup and Redundancy: Many NAS devices offer features like automatic backups and RAID configurations, which help protect your data from hard drive failures. Media Streaming: NAS devices can be used to stream media files (movies, music) to devices throughout your home, like smart TVs, game consoles, and mobile devices. Remote Access: Some NAS devices allow you to access your files remotely over the internet, so you can retrieve important documents or share photos even when you\u0026rsquo;re away from home. Essentially, a NAS is a versatile and convenient way to manage and share your digital data. It offers more control and privacy than cloud storage services, and it can be a valuable tool for both individuals and businesses.\nTrueNAS # TrueNAS is an open-source NAS operating system / infrastructure solution. 
In addition to powerful scale-out storage capabilities, TrueNAS SCALE adds Linux Containers and VMs (KVM) so your organization can run workloads closer to data.\nWhy I switched # Recently I switched over to TrueNAS from my off-the-shelf Terramaster device. I actually really liked the Terramaster; it allowed a 5 drive pool with raid 1 on a BTRFS filesystem, which meant it was easy to upgrade the drives from 2TB \u0026ndash;\u0026gt; 6TB giving me a decent ~24TB size pool (one drive as parity). I got this originally so that I could safely back up my data and store my 10TB+ of VOD recordings from the Live Stream \u0026amp; Youtube. The Terramaster had some pretty big drawbacks. It was only really good for being a simple NAS share.\nThe proprietary operating system is actual hot garbage.\nthe GUI is extremely slow and freezes up a lot the built in docker containers and other special features rarely work the Recycle bin is hot garbage and runs even when you turn it off (discovered nothing had EVER been deleted) the underlying linux OS somehow struggles to do basic things like deleting files networking sometimes just broke, ignored static IPs and would ignore DNS due to not properly turning off ipv6 there is little to no documentation or support outside of Terramaster official forums, which is also hot garbage. Couple years later my data has continued to grow, including my jellyfin media and other hoarding, so I needed some space. This gave me a nice opportunity to upgrade. I have an older, but still nice PC sitting around as a spare, so this was a good chance to upgrade my NAS with some nice compute as well.\nWhy TrueNAS # I went with TrueNAS SCALE because it used the newer ZFS2 filesystem which allows for expansion of pools. This would allow me to buy some extra drives, move over my data and then expand using the old drive pool. SCALE also moved over to docker containerization. Side benefit of allowing me to host some extra containers if I want. 
It\u0026rsquo;s also free and there is a lot of support / documentation out there. It has come a long way from the FreeBSD days.\nRefurbished Drives # I had some issues when sourcing drives. Things are still pretty expensive atm, so I went with just getting more 6TB and expanding the pool. Can upgrade size later when the prices chill out. Managed to find a good price on refurbished 6TB drives from Amazon. However, when they arrived I found that they were all heavily used with 4+ years of uptime, reused from some datacenter somewhere. Fucking scummy Amazon seller. To top it off, some were SAS drives out of NETAPP appliances.\nFuck you Netapp Netapp is an older shit brand that would lock down their drives with special formatting that forced the customer to use only drives sourced from Netapp. These old Netapp appliances are starting to flood the market as newer / cheaper to run hardware is being deployed.\nSuccess Luckily, smart people can reformat the drives from their shit Data Integrity Feature (DIF) format back to normal. This is a long and time-consuming process (Took DAYS) as the entire drive has to be reformatted with a normal 512-byte block size.\nNote Thank you smart guy from reddit that pointed me to the smart guy on the TrueNAS forum that showed me how to fix these unusable drives.\nTrueNAS has sg_format built in. With this you can reformat all the drives at the same time.\n# formatting sg_format --format --size=512 /dev/sdb # progress sudo sg_turs --progress /dev/sdb Warning This still took multiple days with a 6TB drive :(\nSafely copying files # One problem I ran into was: how do I make sure everything is copied over safely from one pool to another? I could drag and drop folders, but that would have taken months and risked missing data. Best bet was to use rsync (which is also the fastest way to transfer). rsync has the added benefit of using checksum checking to verify all data is safely transferred with no errors. 
Luckily both systems were on Linux, which made this easier.\nNote I started by doing this after logging into my old Terramaster NAS and performing the rsync operation from there. This was a bad idea and took longer, because the OS is slow and the CPU cannot handle all this plus 10GB networking at once. If you do this, do this from a system with a decent CPU.\nMount your systems together via the device with the best CPU #mount in fstab # \u0026lt;file system\u0026gt; \u0026lt;dir\u0026gt; \u0026lt;type\u0026gt; \u0026lt;options\u0026gt; \u0026lt;dump\u0026gt;\t\u0026lt;pass\u0026gt; nas:/mnt/md0/VODS /mnt/tnas/vods nfs defaults 0 0 Run rsync in the shell and move your folders using recursive options # Copying folders recursively with Progress \u0026amp; Stats sudo rsync -avh -A --no-perms --progress --stats /mnt/tnas/store/Backups/ /mnt/store/vault/Backups/ \u0026amp; Note rsync skips files that have already transferred and will run faster the next time around. Recommend running it a few times to add extra verification that all your files have transferred.\nNote You can use --progress --stats and the \u0026amp; operator to send the job to the background. This will allow you to bring the job to the foreground whenever you want to check on progress. This is super useful when transferring terabytes of data.\nNote If doing this from TrueNAS, might be better to set this up as a one-time cronjob. TrueNAS might kill this job if you lose connection to the shell while transferring.\nadd the job using the user interface (do not enable the job) run the job when you are ready to move files How to connect to a NAS # CIFS # Common Internet File System (CIFS) is a network file sharing protocol that allows applications on computers to read and write files and request other services from remote servers. Think of it as a way for your computer to talk to another computer (or storage device) to access files. 
It\u0026rsquo;s most commonly associated with Windows environments, but it\u0026rsquo;s used by other operating systems as well. It is relatively secure, requiring username / password login to remote systems.\nNote You might need this if you want to connect a Windows machine to one running Linux like a common NAS (my use case).\nOne example of a use case is a jellyfin container that needs persistent data access for media (movies / tv shows) served from your nas. The share will need to be mounted on the OS Docker is running on and passed through with the volumes option in your docker compose file.\nTo add CIFS to Linux\nFor this you will also need the keyutils \u0026amp; cifs-utils packages. The keyutils package is a library and a set of utilities for accessing the kernel keyring facility. The cifs-utils package provides a means for mounting SMB/CIFS shares on a Linux system.\napt-get install keyutils \u0026amp;\u0026amp; apt install cifs-utils -y Then we will need to mount the remote storage via fstab so that it will automatically mount to the OS every time the OS boots.\ncreate a file in your home directory \u0026ldquo;~/.smb\u0026rdquo; vim ~/.smb Info The file should contain your NAS credentials (domain optional/depends on your nas settings)\nusername=NAS_USERNAME password=NAS_PASSWORD domain=NAS_DOMAIN_GROUP Create an entry in the fstab vim /etc/fstab Add an entry to the bottom line of the file # //{Nas_IP/Hostname}/{Nas_Mount_Point} /mnt/{mount_name_on_docker_os} cifs credentials=/[path_to_credentials].smb,x-systemd.automount 0 0 # Example: //nas.home/store /mnt/store cifs credentials=/home/wompmacho/.smb,x-systemd.automount 0 0 Save your file and re-mount all mount -a make sure your mount section of your docker compose matches the mount_name_on_docker_os and reboot your system # example: volumes: - /app/jellyfin/config:/config - /mnt/store:/data/store Success You can check that they are mounted by navigating to where you mounted the files\nwompmacho@docker:~$ 
cd /mnt/store/MediaServer/ Movies/ Music/ Torrent/ Tv Shows/ NFS # NFS (Network File System) is a distributed file system protocol that allows users to access files and directories over a network as if they were located on their local computer. It\u0026rsquo;s a way for your computer to talk to another computer (or storage device) to access files, similar to CIFS, but more commonly used in Unix/Linux environments.\nWarning There is NO SECURITY on NFS. It uses existing ACL groups to manage permissions. Only use this on a local network and for trusted devices.\nSuccess On TrueNAS you can limit access to one IP address or limit within your local domain.\nInfo One thing to consider when working with TrueNAS:\nWhen creating the initial dataset in your pool, set the zfs aclmode on the dataset in question to passthrough. Special thanks to anodos, you solved an issue plaguing me \u0026ndash;\u0026gt; TrueNAS Forum SMB # Server Message Block (SMB) is a network communication protocol that allows computers to share files, printers, and other resources with each other. It\u0026rsquo;s the foundation of file sharing in Windows environments, but it\u0026rsquo;s also used by other operating systems like macOS and Linux.\nTo connect to an SMB share on Windows:\nRight Click to add a Network Location Use the IP address or hostname of the NAS and the share path provided to your folder access For Windows you will need to enter a username / password for access to the share\n","date":"4 May 2024","externalUrl":null,"permalink":"/projects/nas/","section":"Projects","summary":"NAS build and some tips and tricks to get things working with your docker containers","title":"NAS","type":"projects"},{"content":" IP Address # An Internet Protocol (IP) address is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. Think of it like a street address for your computer on the internet. 
It\u0026rsquo;s how devices find each other and exchange information.\nHere\u0026rsquo;s a breakdown:\nNumerical Identifier: An IP address is a set of numbers, typically represented in dotted decimal notation (e.g., 192.168.1.1). There are two main versions: IPv4 (the older version) and IPv6 (the newer version, which uses a different format to accommodate more addresses). Device Identification: Every device that connects to a network (computers, smartphones, tablets, servers, etc.) needs a unique IP address to be identified and communicate. Location Information: While not precise, parts of an IP address can provide some general information about the device\u0026rsquo;s location. Routing: IP addresses are used by routers to direct network traffic to the correct destination. When you send data over the internet, routers use IP addresses to figure out where to send it. In short, an IP address is a crucial element of networking. It\u0026rsquo;s the unique identifier that allows devices to communicate with each other over a network, whether it\u0026rsquo;s a local network or the vast expanse of the internet.\nIPv4 \u0026amp; IPv6 # IPv4 and IPv6 are two versions of the Internet Protocol (IP), which is the fundamental protocol that enables devices to communicate over the internet. They are essentially addressing systems that allow devices to be uniquely identified and located on a network.\nHere\u0026rsquo;s a breakdown:\nIPv4 (Internet Protocol version 4): This is the original version of IP, using 32-bit addresses represented in dotted decimal notation (e.g., 192.168.1.1). It offers roughly 4.3 billion unique addresses. Due to the explosive growth of the internet, IPv4 addresses are now largely exhausted.\nIPv6 (Internet Protocol version 6): This is the newer version of IP, designed to address the limitations of IPv4. It uses 128-bit addresses represented in hexadecimal notation (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334). 
IPv6 offers a vastly larger address space, virtually eliminating the problem of address exhaustion.\nKey Differences and Why IPv6 is Needed:\nAddress Space: IPv4 has a limited number of addresses, while IPv6 offers a practically unlimited number. Address Format: IPv4 uses dotted decimal notation, while IPv6 uses hexadecimal notation. Automatic Configuration: IPv6 supports more advanced automatic configuration features, simplifying network management. Security: IPv6 includes built-in security features, such as IPSec, which enhances network security. In short, IPv4 is the older, widely used addressing system that is now facing address exhaustion. IPv6 is the newer, more robust addressing system designed to replace IPv4 and provide the necessary address space for the continued growth of the internet. The transition to IPv6 is ongoing.\nDHCP # Dynamic Host Configuration Protocol (DHCP) is a network management protocol that automates the process of assigning IP addresses and other network configuration parameters to devices on a network. Think of it as a way to automatically give each device on your network its own \u0026ldquo;address\u0026rdquo; so it can communicate with other devices and the internet.\nHere\u0026rsquo;s a breakdown:\nAutomatic IP Assignment: DHCP eliminates the need to manually configure IP addresses for each device on a network. This is especially useful in large networks where it would be tedious to assign addresses manually. Lease-Based System: DHCP uses a \u0026ldquo;lease\u0026rdquo; system, where IP addresses are assigned to devices for a specific period of time. When the lease expires, the device must renew it or the IP address becomes available for other devices. This helps ensure that IP addresses are used efficiently. Centralized Management: DHCP allows network administrators to manage IP addresses from a central server. This simplifies network administration and makes it easier to track which devices have which IP addresses. 
Other Configuration Parameters: In addition to IP addresses, DHCP can also provide other network configuration parameters, such as subnet mask, default gateway, and DNS server addresses. Why someone might use DHCP:\nSimplified Network Administration: DHCP makes it much easier to manage IP addresses in a network, especially in large networks. Reduced Configuration Errors: Manual IP address configuration can lead to errors, such as duplicate IP addresses, which can cause network conflicts. DHCP helps prevent these errors. Efficient IP Address Usage: The lease-based system ensures that IP addresses are used efficiently and that addresses that are no longer in use are reclaimed. Plug-and-Play Networking: DHCP allows devices to connect to a network and automatically receive the necessary network configuration, making it easier to add new devices to the network. In short, DHCP is a valuable tool for network administrators that simplifies IP address management and makes networks more efficient and reliable.\nStatic IP # A static IP address is a manually assigned IP address that remains constant for a specific device on a network. Unlike a dynamic IP address (assigned by DHCP), a static IP doesn\u0026rsquo;t change. This makes it useful for devices that need a consistent and predictable address, such as servers, printers, or network devices. However, it requires manual configuration and careful management to avoid IP address conflicts.\nDNS # Domain Name System (DNS) is essentially the phone book of the internet. It translates human-readable domain names (like google.com) into the numerical IP addresses (like 172.217.160.142) that computers use to communicate with each other.\nHere\u0026rsquo;s a breakdown:\nHuman-Friendly to Machine-Friendly: We remember names like \u0026ldquo;google.com\u0026rdquo; easily, but computers communicate using IP addresses. DNS bridges this gap by converting domain names into their corresponding IP addresses. 
Distributed Database: DNS is a massive, distributed database. It\u0026rsquo;s not stored in one single location, but rather spread across a network of servers around the world. This makes it robust and efficient. Hierarchical Structure: DNS is organized in a hierarchical structure, like a tree. This structure helps to manage the vast number of domain names and IP addresses. Resolution Process: When you type a domain name into your browser, your computer initiates a DNS resolution process. It queries various DNS servers to find the IP address associated with that domain name. Why someone might use DNS:\nEasy Access to Websites: DNS allows us to access websites by using easy-to-remember domain names instead of complex IP addresses. Email Delivery: DNS is also used to route email to the correct mail servers. Internet Functionality: DNS is a fundamental component of the internet, without which we wouldn\u0026rsquo;t be able to easily browse the web or send emails. In short, DNS is a critical part of the internet infrastructure. It\u0026rsquo;s the system that allows us to use domain names to access websites and other internet resources, making the internet user-friendly and accessible.\nPROXY # A proxy acts as an intermediary between a client (like your computer) and a server (like a website). Instead of your computer directly connecting to the server, it connects to the proxy server, which then forwards the request to the server. The server\u0026rsquo;s response comes back to the proxy, which then forwards it to your computer. Think of it like a middleman.\nHere\u0026rsquo;s a breakdown:\nIntermediary: The core function of a proxy is to act as a go-between for client and server. Hiding IP Address: One common use of a proxy is to mask the client\u0026rsquo;s IP address. The server sees the proxy\u0026rsquo;s IP address, not the client\u0026rsquo;s, providing a degree of anonymity. Caching: Proxies often cache frequently accessed content. 
If a client requests something that\u0026rsquo;s already in the cache, the proxy can serve it directly, speeding up access. Filtering and Security: Proxies can be used to filter content, block access to certain websites, or scan for malware. This is common in corporate environments. Load Balancing: In some situations, proxies can distribute traffic across multiple servers, helping to balance the load and improve performance. In short, a proxy server provides a layer of separation between clients and servers, offering a variety of benefits related to privacy, security, performance, and network management.\nReverse Proxy # A reverse proxy sits in front of one or more backend servers, intercepting client requests and forwarding them to the appropriate server. It acts as a gateway, but unlike a regular proxy (which protects clients), a reverse proxy protects the servers. Clients connect to the reverse proxy, which then handles the connection to the actual servers.\nHere\u0026rsquo;s a breakdown:\nServer Protection: Reverse proxies shield backend servers from direct exposure to the internet, enhancing security by preventing direct attacks. Load Balancing: They can distribute client traffic across multiple servers, preventing any single server from becoming overloaded. Caching: Reverse proxies can cache content, reducing the load on backend servers and speeding up response times for clients. SSL Termination: They can handle SSL encryption and decryption, offloading this task from the backend servers. URL Rewriting: Reverse proxies can modify URLs, making them more user-friendly or hiding the internal structure of the backend servers. In short, a reverse proxy acts as a gatekeeper for backend servers, providing a range of benefits related to security, performance, scalability, and flexibility. 
It\u0026rsquo;s a common component in modern web architectures.\nSSL # Secure Sockets Layer (SSL) is a security protocol that creates an encrypted connection between a web server and a web browser. This ensures that any data exchanged between them remains private and secure. Think of it as a secret tunnel that prevents eavesdropping and tampering.\nHere\u0026rsquo;s a breakdown:\nEncryption: SSL encrypts the data transmitted between the browser and the server, making it unreadable to anyone who might try to intercept it. This protects sensitive information like passwords, credit card numbers, and personal details. Authentication: SSL verifies the identity of the website, assuring users that they are connecting to the legitimate website and not a fake one. This helps prevent phishing attacks. Data Integrity: SSL ensures that the data transmitted between the browser and the server is not altered or corrupted during transit. This guarantees that the information received is exactly what was sent. In short, SSL is a crucial security technology that protects online communication and helps build trust between websites and their users. It\u0026rsquo;s the foundation of secure online transactions and a vital component of a safe internet experience.\n","date":"27 April 2024","externalUrl":null,"permalink":"/projects/networking/","section":"Projects","summary":"IP Address # An Internet Protocol (IP) address is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. Think of it like a street address for your computer on the internet. It's how devices find each other and exchange information.\n","title":"networking","type":"projects"},{"content":"The Pi-hole is a DNS sinkhole that protects your devices from unwanted content, without installing any client-side software. Useful for blocking ad services at a DNS level. It uses a list of known ad services stored on GitHub; you can add your own. 
It can also operate as an internal DNS router and DHCP server.\nPihole Setup # If you have a raspberry-pi or another device, it\u0026rsquo;s super easy to get things going.\npihole setup. Any Debian-based system should be able to get things going quickly. Then all you need to do is set your devices to use your Pi-hole as the primary DNS server.\nDebian-based one-step install\ncurl -sSL https://install.pi-hole.net | bash Note I find this to be a little flaky when it comes to DNS; often the OS will need a reboot and its cache cleared in order to actively start using the Pi-hole DNS.\nBrowsers also cache DNS info, so many things can conflict before your DNS switches over. I find that using the Pi-hole as the primary DHCP server forces your devices to use the correct DNS server and fixes a lot of problems.\nAlso keep in mind that IPv6 can interfere if you are like me and have an ISP that tries to force their DNS.\nSetup on Proxmox VM # My Pi-hole is operating as a Debian GNU/Linux 12 (bookworm) virtual machine running on Proxmox. I use it as an internal DNS router \u0026amp; DHCP server, which makes DNS much easier in my case due to my internet provider trying to force me to use their DNS servers. This setup is a little weird, and in order to get everything to work a couple of extra steps are needed.\nYou will need to set up your Pi-hole as a DHCP server, disable the existing DHCP server on the router, reserve static IP addresses for Proxmox and the Pi-hole so that they can connect to the gateway, set the Pi-hole as the primary DNS server on Proxmox, set Proxmox to use DHCP rather than a static IP, and finally set the Pi-hole VM to automatically boot first so that devices connected to the gateway are issued IP addresses from the Pi-hole.\nWarning If you are using the Pi-hole for DHCP / DNS, keep in mind that if the device hosting your Pi-hole server goes down, so will your DNS / DHCP. 
This may prevent you from connecting to your network until you re-enable a DHCP server such as the one in your router.\nReserve an IP address in the router/gateway for the Proxmox server \u0026amp; Pi-hole\nSet the Pi-hole to enable DHCP\nSet Proxmox to get DHCP on boot rather than a static IP, which is the default\nroot@laptop-proxmox:~# cat /etc/network/interfaces auto lo iface lo inet loopback iface enp3s0 inet manual auto vmbr0 iface vmbr0 inet dhcp bridge-ports enp3s0 bridge-stp off bridge-fd 0 iface wlp4s0 inet manual Set the Proxmox default DNS server to the Pi-hole reserved address\nSet the Pi-hole to automatically start on boot with the highest priority boot order\nSet static IP and gateway info in the Pi-hole networking configuration\nDisable the DHCP server in the gateway / router settings\nIf the router has an option to set a default DNS, set it to the Pi-hole reserved address\n","date":"27 April 2024","externalUrl":null,"permalink":"/projects/pihole/","section":"Projects","summary":"The Pi-hole is a DNS sinkhole that protects your devices from unwanted content, without installing any client-side software. Useful for blocking ad services at a DNS level. It uses a list of known ad services stored on GitHub; you can add your own. 
It can also operate as an internal DNS router and DHCP server.\n","title":"pihole","type":"projects"},{"content":" OBS # Stream Settings # Ignore stream service settings Video Encoder x264 1920x1080 Rate control CBR Bitrate 8000 Kbps Keyframe Interval 1s CPU Usage Medium Profile None Tune None x264 Options keyint=90 Recording Settings # Recording Format .mkv Audio Track All Automatic File Splitting Split Time 240 min Video # Common FPS Values 60 Advanced # Recording Filename Formatting %MM-%DD-%CCYY_%A_%hh-%mm-%p_%FPS Automatically Remux to MP4 Camera # Sony A6400 Resolution for stream 3840x2160 Specs Max Resolution 6000 x 4000 Image Ratio 1:1, 3:2, 16:9 Sensor 25 megapixels CMOS APS-C (23.5 x 15.6 mm) ISO Auto, 100-32000 (expands to 102800) ","date":"25 January 2024","externalUrl":null,"permalink":"/stream/stream_settings/","section":"Stream","summary":"OBS # Stream Settings # Ignore stream service settings Video Encoder x264 1920x1080 Rate control CBR Bitrate 8000 Kbps Keyframe Interval 1s CPU Usage Medium Profile None Tune None x264 Options keyint=90 Recording Settings # Recording Format .mkv Audio Track All Automatic File Splitting Split Time 240 min Video # Common FPS Values 60 Advanced # Recording Filename Formatting %MM-%DD-%CCYY_%A_%hh-%mm-%p_%FPS Automatically Remux to MP4 Camera # Sony A6400 Resolution for stream 3840x2160 Specs Max Resolution 6000 x 4000 Image Ratio 1:1, 3:2, 16:9 Sensor 25 megapixels CMOS APS-C (23.5 x 15.6 mm) ISO Auto, 100-32000 (expands to 102800)","title":"OBS Settings","type":"stream"},{"content":" What is Docker? # Docker is an awesome platform that anyone hoping to get into software / development or any homelab-er should become familiar with. Docker is a platform designed to help developers build, share, and run containerized applications. The most important aspect of Docker is its ability to be implemented in version control via simple config files. 
This allows a team of people to share a code base while working in the same environments consistently.\nAlmost anything can be deployed as a service via Docker. It is a fantastic tool to learn about apps and software, test operating systems, do things like home automation, run web servers and media servers, host your own proxy/reverse proxy, email, DNS, network monitoring, websites, etc.\nDocker Environment Setup # I am doing things with Ubuntu, so for my case I will follow this docker.com - GUIDE for setting up the initial docker environment on my Ubuntu machine and then running the test hello-world docker container app to verify that my docker environment is working.\nSet up Docker\u0026rsquo;s apt repository. # Add Docker\u0026#39;s official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl gnupg sudo install -m 0755 -d /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg sudo chmod a+r /etc/apt/keyrings/docker.gpg # Add the repository to Apt sources: echo \\ \u0026#34;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\ $(. /etc/os-release \u0026amp;\u0026amp; echo \u0026#34;$VERSION_CODENAME\u0026#34;) stable\u0026#34; | \\ sudo tee /etc/apt/sources.list.d/docker.list \u0026gt; /dev/null sudo apt-get update Install the Docker packages. sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin Verify that the Docker Engine installation is successful by running the hello-world image. sudo docker run hello-world Docker Compose # .env Variables # Docker Files # Note dockerfile\nExample rebuild for mkdocs with some mods\nFROM squidfunk/mkdocs-material RUN pip install mkdocs-macros-plugin RUN pip install mkdocs-glightbox Mounting remote storage # ","date":"26 November 2023","externalUrl":null,"permalink":"/projects/docker/","section":"Projects","summary":"What is Docker? 
# Docker is an awesome platform that anyone hoping to get into software / development or any homelab-er should become familiar with. Docker is a platform designed to help developers build, share, and run containerized applications. The most important aspect of Docker is its ability to be implemented in version control via simple config files. This allows a team of people to share a code base while working in the same environments consistently.\n","title":"Docker","type":"projects"},{"content":" 2024 Home Lab # 2023 Home Lab # 2020 Home Lab # 2019 Home Lab # ","date":"4 July 2023","externalUrl":null,"permalink":"/projects/lab_setup/","section":"Projects","summary":" 2024 Home Lab # ","title":"Lab Setup","type":"projects"},{"content":" Hi there 👋 # The goal of this site is to document my home apps, services, infrastructure and other projects I am working on. Additionally I am using it as a learning tool, to document things I already know, fill in some gaps, add to that knowledge and further refine my understanding of complicated topics. I find it\u0026rsquo;s much easier to master something if you are forced to explain it to someone else. 
With that spirit in mind, lemme see what I can float your way.\nFavorite Quote Never attribute to malice what can be explained by incompetence\n","date":"14 June 2023","externalUrl":null,"permalink":"/","section":"HOME","summary":"Hi there 👋 # The goal of this site is to document my home apps, services, infrastructure and other projects I am working on. Additionally I am using it as a learning tool, to document things I already know, fill in some gaps, add to that knowledge and further refine my understanding of complicated topics. I find it's much easier to master something if you are forced to explain it to someone else. 