Initialize project with clean ignore rules

This commit is contained in:
2026-03-09 04:08:19 +00:00
commit be6bf12d35
85 changed files with 4285 additions and 0 deletions

content/_index.md Normal file

@@ -0,0 +1,16 @@
---
description: This is just the beginning...
title: HOME
date: 2023-06-14T09:37:13.298Z
lastmod: 2025-01-02
tags:
author: wompmacho
---
# Hi there 👋
The goal of this site is to document my home apps, services, infrastructure and other projects I am working on. Additionally, I am using it as a learning tool: to document things I already know, fill in some gaps, add to that knowledge and further refine my understanding of complicated topics. I find it's much easier to master something if you are forced to explain it to someone else. With that spirit in mind, lemme see what I can float your way.
> [!Success] Favorite Quote
> Never attribute to malice what can be explained by incompetence

content/posts/_index.md Normal file

@@ -0,0 +1,5 @@
---
date: '2025-12-31T18:07:42Z'
draft: true
title: 'Posts'
---


@@ -0,0 +1,11 @@
---
date: 2025-02-02
lastmod: 2026-01-06
author: "wompmacho"
authors: ["wompmacho"]
title: First Post
---
Yo 👋
If you actually come to find this then props to ya man. Thanks for dropping by. idk if this will ever be worth doing, but this was fun for me to set up... and who knows, maybe this is of some use to others... so fuck it --> Enjoy.

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,20 @@
---
date: 2026-03-08
lastmod: 2026-03-08
author: "wompmacho"
authors: ["wompmacho"]
title: Liberal
showHero: false # needed to hide "hero banner"
---
## Liberal is bad.
Wait, wasn't this group touting itself as "liberal" once? Tell me more about why Liberal is bad. You know what… that person looks funny and bothers me and calls themself a liberal so I don't wanna be in that category anymore. No wait, actually let's just demonize a group because Fox News says so. NO WAIT, let's not use words correctly because words, logic, facts... These things don't matter.
![what is a lib](Screenshot_20250205-234147.png)
## quit being a fucking sheep
This sums up my thoughts on the matter:
{{< video src="RDT_20250206_005732.mp4" >}}


@@ -0,0 +1,27 @@
---
title: Performance Reviews
description: Performance Reviews are dumb
date: 2025-02-08
lastmod: 2026-01-06
author: "wompmacho"
authors: ["wompmacho"]
---
The Hidden Flaw of Performance Reviews.
<!-- more -->
> [!quote] Goodhart's law
> [Goodhart's law](https://en.wikipedia.org/wiki/Goodhart%27s_law) is an adage often stated as, "When a measure becomes a target, it ceases to be a good measure". It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 article on monetary policy in the United Kingdom:
>
> > Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
>
> It was used to criticize the British Thatcher government for trying to conduct monetary policy on the basis of targets for broad and narrow money, but the law reflects a much more general phenomenon.
{{< rawhtml >}}
<div class="iframe-wrapper" style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<iframe width="560" height="315" src="https://www.youtube.com/embed/XQT8_SAwwUY?si=prH2i9-fEOHSyg2V" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
{{< /rawhtml >}}
In other words, when we use a measure to reward performance, we provide an incentive to manipulate the measure in order to receive the reward. This can sometimes result in actions that actually reduce the effectiveness of the measured system while paradoxically improving the measurement of system performance.


@@ -0,0 +1,91 @@
---
title: Docker
description: Quick overview of docker and setup
date: 2023-11-26T01:14:53.675Z
lastmod: 2025-02-11
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is Docker?
Docker is an awesome platform that anyone hoping to get into software
development, or any homelab-er, should become familiar with. Docker is a
platform designed to help developers build, share, and run containerized
applications. The most important aspect of Docker is its ability to be
implemented in version control via simple config files, allowing a team of
people to share a code base while working in the same environment consistently.
Almost anything can be deployed as a service via Docker. It is a fantastic tool
to learn about apps and software, test operating systems, do things like home
automation, run web servers, media servers, host your own proxy/reverse proxy,
email, DNS, network monitoring, websites, etc.
{{< rawhtml >}}
<div class="iframe-wrapper" style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<iframe width="560" height="315" src="https://www.youtube.com/embed/NPguawVjbN0?si=rCFmobnPOrK9yl--" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
{{< /rawhtml >}}
---
## Docker Environment Setup
I am doing things with Ubuntu, so in my case I will follow this [docker.com
GUIDE](https://docs.docker.com/engine/install/ubuntu/) to set up the initial
Docker environment on my Ubuntu machine, then run the test hello-world
container to verify that the environment is working.
- Set up Docker's apt repository.
``` bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```
- Install the Docker packages.
``` bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
- Verify that the Docker Engine installation is successful by running the
hello-world image.
``` bash
sudo docker run hello-world
```
---
## Docker Compose
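As a minimal sketch of what a compose file looks like *(the service name, image, and port below are placeholders, not my actual stack)*:

``` yaml
# docker-compose.yml - minimal sketch with placeholder values
services:
  web:
    image: nginx:alpine          # any image works here
    ports:
      - "8080:80"                # host:container
    volumes:
      - ./site:/usr/share/nginx/html:ro
    restart: unless-stopped
```

Bring it up with `docker compose up -d` and tear it down with `docker compose down`.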
## .env Variables
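Compose automatically reads a `.env` file sitting next to the compose file and substitutes `${VAR}` references; a sketch *(the variable names and paths are placeholders)*:

``` yaml
# .env (same directory as docker-compose.yml):
#   JELLYFIN_PORT=8096
#   MEDIA_PATH=/mnt/store/MediaServer

# docker-compose.yml referencing those variables:
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "${JELLYFIN_PORT}:8096"
    volumes:
      - "${MEDIA_PATH}:/data/store"
```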
## Docker Files
> [!NOTE]
> [dockerfile](https://docs.docker.com/reference/dockerfile/)
>
> Example rebuild for mkdocs with some mods
>
> ``` dockerfile
> FROM squidfunk/mkdocs-material
> RUN pip install mkdocs-macros-plugin mkdocs-glightbox
> ```
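To use a rebuilt image like the one above, build it with a local tag and run it in place of the stock image *(the tag name here is an assumption; the mkdocs-material image serves on port 8000 and expects the docs mounted at `/docs`)*:

``` shell
# Build the modified image from the Dockerfile in the current directory
docker build -t mkdocs-material-custom .

# Serve the docs with live reload on http://localhost:8000
docker run --rm -it -p 8000:8000 -v "$PWD":/docs mkdocs-material-custom
```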
## Mounting remote storage

Binary file not shown.


Binary file not shown.



@@ -0,0 +1,292 @@
---
title: NAS
description: NAS
date: 2024-05-04
lastmod: 2024-05-11
author: wompmacho
summary: "NAS build and some tips and tricks to get things working with your docker containers"
showTableOfContents: true
showHero: false # needed to hide "hero banner"
---
## What is a NAS?
A Network Attached Storage (NAS) device is essentially a small, self-contained
computer that's designed solely for storing and sharing files. Think of it as
your own personal cloud storage, but instead of relying on a third-party
service, you own and control the hardware.
Here's why someone might use a NAS:
- **Centralized Storage:** A NAS provides a single location to store all your
files - documents, photos, videos, music, etc. This makes it easy to access
your data from any device on your network.
- **File Sharing:** NAS devices make it simple to share files between multiple
users and devices. This is great for families who want to share photos and
videos, or for small businesses who need to collaborate on documents.
- **Backup and Redundancy:** Many NAS devices offer features like automatic
backups and RAID configurations, which help protect your data from hard drive
failures.
- **Media Streaming:** NAS devices can be used to stream media files (movies,
music) to devices throughout your home, like smart TVs, game consoles, and
mobile devices.
- **Remote Access:** Some NAS devices allow you to access your files remotely
over the internet, so you can retrieve important documents or share photos
even when you're away from home.
Essentially, a NAS is a versatile and convenient way to manage and share your
digital data. It offers more control and privacy than cloud storage services,
and it can be a valuable tool for both individuals and businesses.
---
## TrueNAS
TrueNAS is an open-source NAS operating system / infrastructure solution. In
addition to powerful scale-out storage capabilities, TrueNAS SCALE adds Linux
Containers and VMs (KVM) so your organization can run workloads closer to data.
### Why I switched
Recently I switched over to TrueNAS from my off-the-shelf
[Terramaster](https://shop.terra-master.com/products/terramaster-f5-422-10gbe-nas-5-bay-network-storage-server-intel-quad-core-cpu-with-hardware-encryption-diskless)
device. I actually really liked the Terramaster; it allowed a 5-drive pool on a
BTRFS filesystem, which meant it was easy to upgrade the drives from
2TB --> 6TB, giving me a decent ~24TB pool *(one drive as parity)*. I got
this originally so that I could safely back up my data and store my 10TB+ of VOD
recordings from the live stream & YouTube. The Terramaster had some pretty big
drawbacks; it was only really good for being a simple NAS share.
The proprietary operating system is actual hot garbage:
- the GUI is extremely slow and freezes up a lot
- the built-in docker containers and other special features rarely work
- the recycle bin is hot garbage and runs even when you turn it off (discovered
  nothing had **EVER** been deleted)
- the underlying linux OS somehow struggles to do basic things like deleting
  files
- networking sometimes just broke, ignoring static IPs and DNS settings due to
  not properly turning off IPv6
- there is little to no documentation or support outside the official
  Terramaster forums, which are also hot garbage
A couple years later my data has continued to grow, including my
[jellyfin](../homelab/containers/jellyfin.md) media and other hoarding, so I
needed some space. This gave me a nice opportunity to upgrade. I have an older,
but still nice, PC sitting around as a spare, so this was a good chance to
upgrade my NAS with some nice compute as well.
### Why TrueNAS
I went with TrueNAS SCALE because it uses the newer [OpenZFS 2.x
filesystem](https://github.com/openzfs/zfs/releases), which allows for expansion
of pools. This would let me buy some extra drives, move over my data, and then
expand using the old drive pool. SCALE also moved over to docker
containerization, with the side benefit of letting me host some extra containers
if I want. It's also free, and there is a lot of support / documentation out
there. It has come a long way from the FreeBSD days.
### Refurbished Drives
I had some issues when sourcing drives. Things are still pretty expensive atm,
so I went with just getting more 6TB drives and expanding the pool. *Can
upgrade size later when the prices chill out.* Managed to find a good price on
refurbished 6TB drives from Amazon. **However**, when they arrived I found that
they were all heavily used, with 4+ years of uptime, reused from some datacenter
somewhere. **Fucking scummy Amazon seller**. To top it off, some were SAS drives
out of NetApp appliances.
> [!ERROR] **Fuck you Netapp**
>
> Netapp is an older *shit* brand that would lock down their drives with special
> formatting that forced the customer to use only drives sourced from Netapp.
> These old Netapp appliances are starting to flood the market as newer /
> cheaper to run hardware is being deployed.
> [!SUCCESS]
> Luckily, smart people can reformat the drives from the shit Data Integrity
> Feature (DIF) format back to a normal one. This is a long and time-consuming
> process *(took DAYS)*, as the entire drive has to be reformatted with a normal
> 512-byte block size.
> [!NOTE]
> Thank you [smart guy from reddit](https://reddit.com/r/truenas/comments/12w68uc/how_to_get_rid_of_data_integrity_feature_from/) that pointed me to the [smart guy on TrueNAS forum](https://www.truenas.com/community/threads/troubleshooting-disk-format-warnings-in-truenas-scale.106051/) that showed me how to fix these un-usable drives.
>
> TrueNAS has `sg_format` built in. With this you can reformat all the drives at the **same time**.
>
> ``` shell
> # formatting
> sg_format --format --size=512 /dev/sdb
>
> # progress
> sudo sg_turs --progress /dev/sdb
> ```
> [!warning]
> This still took multiple days with a 6TB drive :(
### Safely copying files
One problem I ran into was: how do I make sure everything is copied over safely from one pool to another? I could drag and drop folders, but that would have taken months and risked missing data. The best bet was to use `rsync` *(which is also the fastest way to transfer)*. `rsync` has the added benefit of checksum verification to confirm all data is transferred with no errors. Luckily both systems were on linux, which made this easier.
> [!NOTE]
> I started by logging into my old Terramaster NAS and performing the `rsync` operation from there. This was a bad idea and took longer, because the OS is slow and the CPU cannot handle all of this plus 10Gb networking at once. If you do this, do it from a system with a decent CPU.
- Mount your systems together via the device with the best CPU
``` bash
# mount in fstab
# <file system> <dir> <type> <options> <dump> <pass>
nas:/mnt/md0/VODS /mnt/tnas/vods nfs defaults 0 0
```
- Run `rsync` in the shell and move your folders using recursive options
``` bash
# Copying folders recursively with progress & stats
sudo rsync -avh -A --no-perms --progress --stats /mnt/tnas/store/Backups/ /mnt/store/vault/Backups/ &
```
> [!NOTE]
> `rsync` keeps logs and will run faster the next time around. I recommend running it a few times to add an extra verification that all your files have transferred.
> [!NOTE]
> You can use `--progress --stats` and the `&` operator to send the job to the background. This will allow you to bring the job to the foreground whenever you want to check on progress. This is super useful when transferring terabytes of data.
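As a sketch of that background/foreground workflow *(paths are from the copy example above)*:

``` bash
# Kick off the transfer in the background with &
sudo rsync -avh --progress --stats /mnt/tnas/store/Backups/ /mnt/store/vault/Backups/ &

jobs      # list background jobs and their job numbers
fg %1     # bring job 1 to the foreground to watch progress
# Ctrl+Z suspends it again; `bg %1` resumes it in the background
```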
> [!NOTE]
> If doing this from TrueNAS, it might be better to set this up as a one-time cron job. TrueNAS might kill the job if you lose connection to the shell while transferring.
> - add the job using the user interface *(do not enable the job)*
>
> ![adding a cron job](creating_a_cronjob_with_truenas.png)
>
> - run the job when you are ready to move files
>
> ![running a cron job](running_cron_job.png)
### How to connect to a NAS
#### CIFS
Common Internet File System (CIFS) is a network file sharing protocol that
allows applications on computers to read and write files and request other
services from remote servers. Think of it as a way for your computer to talk to
another computer (or storage device) to access files. It's most commonly
associated with Windows environments, but it's used by other operating systems
as well. It is relatively secure, requiring username / password login to remote
systems.
> [!NOTE]
> You might need this if you want to connect a Windows machine to one running
> linux, like a common NAS *(my use case)*.
One example use case is a jellyfin container that needs persistent access to
media (movies / tv shows) served from your NAS. The share needs to be mounted
on the OS docker is running on and then passed through with the volumes option
in your docker compose file.
**To add CIFS to Linux**
For this you will also need the `keyutils` & `cifs-utils` packages. The `keyutils`
package is a library and a set of utilities for accessing the kernel keyring
facility. The `cifs-utils` package provides a means for mounting SMB/CIFS shares
on a Linux system.
``` bash
sudo apt-get install -y keyutils cifs-utils
```
Then we need to mount the remote storage via fstab so that it will
automatically mount every time the OS boots.
- create a file in your home directory "~/.smb"
``` bash
vim ~/.smb
```
> [!INFO]
> The file should contain your NAS credentials *(domain optional / depends
> on your NAS settings)*
>
> ``` bash
> username=NAS_USERNAME
> password=NAS_PASSWORD
> domain=NAS_DOMAIN_GROUP
> ```
- Create an entry in the fstab
``` bash
vim /etc/fstab
```
- Add an entry to the bottom line of the file
``` bash
# //{Nas_IP/Hostname}/{Nas_Mount_Point} /mnt/{mount_name_on_docker_os} cifs credentials=/[path_to_credentials].smb,x-systemd.automount 0 0
# Example:
//nas.home/store /mnt/store cifs credentials=/home/wompmacho/.smb,x-systemd.automount 0 0
```
- Save your file and re-mount all
``` bash
mount -a
```
- Make sure the volumes section of your docker compose file matches the
  `mount_name_on_docker_os`, then reboot your system
``` bash
# example:
volumes:
- /app/jellyfin/config:/config
- /mnt/store:/data/store
```
> [!SUCCESS]
>
> You can check that they are mounted by navigating to where you mounted the files
>
> ``` bash
> wompmacho@docker:~$ cd /mnt/store/MediaServer/
> Movies/ Music/ Torrent/ Tv Shows/
> ```
#### NFS
NFS (Network File System) is a distributed file system protocol that allows users to access files and directories over a network as if they were located on their local computer. It's a way for your computer to talk to another computer (or storage device) to access files, similar to CIFS, but more commonly used in Unix/Linux environments.
> [!WARNING]
> There is `NO SECURITY` on NFS. It uses existing ACL groups to manage permissions. Only use this on a local network and for trusted devices.
> [!SUCCESS]
> On TrueNAS you can limit access to an IP address or limit it to within your local domain.
>
> ![alt text](truenas_nfs_limit_to_local_network.png)
> [!Info]
> One thing to consider when working with TrueNAS:
> - When creating the initial dataset in your pool, set the zfs **aclmode** on the dataset in question to `passthrough`.
> - Special thanks to `anodos`, who solved an issue plaguing me --> [Truenas Forum](https://www.truenas.com/community/threads/cannot-chmod-nfs-operation-not-permitted.97247/)
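A client-side NFS mount sketch, mirroring the CIFS fstab approach above *(hostname and paths are placeholders; Debian/Ubuntu clients need the `nfs-common` package)*:

``` bash
# Install the NFS client tools first (Debian/Ubuntu):
#   sudo apt-get install -y nfs-common
#
# /etc/fstab entry (hostname and paths are placeholders):
# <file system>        <dir>       <type> <options>                    <dump> <pass>
nas.home:/mnt/store    /mnt/store  nfs    defaults,x-systemd.automount 0      0
#
# Then mount everything in fstab without rebooting:
#   sudo mount -a
```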
---
#### SMB
Server Message Block (SMB) is a network communication protocol that allows computers to share files, printers, and other resources with each other. It's the foundation of file sharing in **Windows** environments, but it's also used by other operating systems like macOS and Linux.
**To connect to a SMB share on Windows:**
- Right Click to add a `Network Location`
![adding a network location](adding_smb_windows.png)
- Use the IP address or hostname of the NAS and the share path provided to your folder access
![selecting the network path for smb](selecting_network_path_for_smb.png)
> [!NOTE]
> For Windows you will need to enter a username / password to access the share

Binary file not shown.


Binary file not shown.


Binary file not shown.



@@ -0,0 +1,245 @@
---
title: networking
description: DHCP, DNS, PROXY
author: wompmacho
date: 2024-04-27T23:53:26.059Z
lastmod: 2025-02-08
editor: markdown
---
## IP Address
An Internet Protocol address (IP) address is a numerical label assigned to each
device connected to a computer network that uses the Internet Protocol for
communication. Think of it like a street address for your computer on the
internet. It's how devices find each other and exchange information.
Here's a breakdown:
* **Numerical Identifier:** An IP address is a set of numbers, typically
represented in dotted decimal notation (e.g., 192.168.1.1). There are two
main versions: IPv4 (the older version) and IPv6 (the newer version, which
uses a different format to accommodate more addresses).
* **Device Identification:** Every device that connects to a network (computers,
smartphones, tablets, servers, etc.) needs a unique IP address to be
identified and communicate.
* **Location Information:** While not precise, parts of an IP address can
provide some general information about the device's location.
* **Routing:** IP addresses are used by routers to direct network traffic to the
correct destination. When you send data over the internet, routers use IP
addresses to figure out where to send it.
In short, an IP address is a crucial element of networking. It's the unique
identifier that allows devices to communicate with each other over a network,
whether it's a local network or the vast expanse of the internet.
---
## IPv4 & IPv6
IPv4 and IPv6 are two versions of the Internet Protocol (IP), which is the
fundamental protocol that enables devices to communicate over the internet.
They are essentially addressing systems that allow devices to be uniquely
identified and located on a network.
Here's a breakdown:
* **IPv4 (Internet Protocol version 4):** This is the original version of IP,
using 32-bit addresses represented in dotted decimal notation (e.g.,
192.168.1.1). It offers roughly 4.3 billion unique addresses. Due to the
explosive growth of the internet, IPv4 addresses are now largely exhausted.
* **IPv6 (Internet Protocol version 6):** This is the newer version of IP,
designed to address the limitations of IPv4. It uses 128-bit addresses
represented in hexadecimal notation (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334). IPv6 offers a vastly larger address
space, virtually eliminating the problem of address exhaustion.
Key Differences and Why IPv6 is Needed:
* **Address Space:** IPv4 has a limited number of addresses, while IPv6 offers a
practically unlimited number.
* **Address Format:** IPv4 uses dotted decimal notation, while IPv6 uses
hexadecimal notation.
* **Automatic Configuration:** IPv6 supports more advanced automatic
configuration features, simplifying network management.
* **Security:** IPv6 includes built-in security features, such as IPSec, which
enhances network security.
In short, IPv4 is the older, widely used addressing system that is now facing
address exhaustion. IPv6 is the newer, more robust addressing system designed to
replace IPv4 and provide the necessary address space for the continued growth of
the internet. The transition to IPv6 is ongoing.
---
## DHCP
Dynamic Host Configuration Protocol (DHCP) is a network management protocol that
automates the process of assigning IP addresses and other network configuration
parameters to devices on a network. Think of it as a way to automatically give
each device on your network its own "address" so it can communicate with other
devices and the internet.
Here's a breakdown:
* **Automatic IP Assignment:** DHCP eliminates the need to manually configure IP
addresses for each device on a network. This is especially useful in large
networks where it would be tedious to assign addresses manually.
* **Lease-Based System:** DHCP uses a "lease" system, where IP addresses are
assigned to devices for a specific period of time. When the lease expires, the
device must renew it or the IP address becomes available for other devices.
This helps ensure that IP addresses are used efficiently.
* **Centralized Management:** DHCP allows network administrators to manage IP
addresses from a central server. This simplifies network administration and
makes it easier to track which devices have which IP addresses.
* **Other Configuration Parameters:** In addition to IP addresses, DHCP can also
provide other network configuration parameters, such as subnet mask, default
gateway, and DNS server addresses.
Why someone might use DHCP:
* **Simplified Network Administration:** DHCP makes it much easier to manage IP
addresses in a network, especially in large networks.
* **Reduced Configuration Errors:** Manual IP address configuration can lead to
errors, such as duplicate IP addresses, which can cause network conflicts.
DHCP helps prevent these errors.
* **Efficient IP Address Usage:** The lease-based system ensures that IP
addresses are used efficiently and that addresses that are no longer in use
are reclaimed.
* **Plug-and-Play Networking:** DHCP allows devices to connect to a network and
automatically receive the necessary network configuration, making it easier to
add new devices to the network.
In short, DHCP is a valuable tool for network administrators that simplifies IP
address management and makes networks more efficient and reliable.
---
## Static IP
A static IP address is a manually assigned IP address that remains constant for
a specific device on a network. Unlike a dynamic IP address (assigned by DHCP),
a static IP doesn't change. This makes it useful for devices that need a
consistent and predictable address, such as servers, printers, or network
devices. However, it requires manual configuration and careful management to
avoid IP address conflicts.
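On a Debian-style system using classic ifupdown networking, a static address can be pinned in `/etc/network/interfaces`; a sketch *(interface name and addresses are placeholders; `dns-nameservers` needs the resolvconf package)*:

``` bash
# /etc/network/interfaces - static IP sketch (placeholder values)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.2
```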
---
## DNS
Domain Name System (DNS) is essentially the phone book of the internet. It
translates human-readable domain names (like **google.com**) into the numerical
IP addresses (like 172.217.160.142) that computers use to communicate with each
other.
Here's a breakdown:
* **Human-Friendly to Machine-Friendly:** We remember names like "google.com"
easily, but computers communicate using IP addresses. DNS bridges this gap by
converting domain names into their corresponding IP addresses.
* **Distributed Database:** DNS is a massive, distributed database. It's not
stored in one single location, but rather spread across a network of servers
around the world. This makes it robust and efficient.
* **Hierarchical Structure:** DNS is organized in a hierarchical structure, like
a tree. This structure helps to manage the vast number of domain names and IP
addresses.
* **Resolution Process:** When you type a domain name into your browser, your
computer initiates a DNS resolution process. It queries various DNS servers to
find the IP address associated with that domain name.
Why someone might use DNS:
* **Easy Access to Websites:** DNS allows us to access websites by using
easy-to-remember domain names instead of complex IP addresses.
* **Email Delivery:** DNS is also used to route email to the correct mail
servers.
* **Internet Functionality:** DNS is a fundamental component of the internet,
without which we wouldn't be able to easily browse the web or send emails.
In short, DNS is a critical part of the internet infrastructure. It's the system
that allows us to use domain names to access websites and other internet
resources, making the internet user-friendly and accessible.
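You can watch the resolution happen from the command line; `getent` uses the system resolver, while `dig` *(from the `dnsutils` package)* queries a DNS server directly:

``` bash
# Resolve through the system's configured resolver
getent hosts localhost

# Query a specific DNS server directly (requires dnsutils):
#   dig +short google.com @1.1.1.1
```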
---
## PROXY
A proxy acts as an intermediary between a client (like your computer) and a
server (like a website). Instead of your computer directly connecting to the
server, it connects to the proxy server, which then forwards the request to the
server. The server's response comes back to the proxy, which then forwards it to
your computer. Think of it like a middleman.
Here's a breakdown:
* **Intermediary:** The core function of a proxy is to act as a go-between for
client and server.
* **Hiding IP Address:** One common use of a proxy is to mask the client's IP
address. The server sees the proxy's IP address, not the client's, providing a
degree of anonymity.
* **Caching:** Proxies often cache frequently accessed content. If a client
requests something that's already in the cache, the proxy can serve it
directly, speeding up access.
* **Filtering and Security:** Proxies can be used to filter content, block
access to certain websites, or scan for malware. This is common in corporate
environments.
* **Load Balancing:** In some situations, proxies can distribute traffic across
multiple servers, helping to balance the load and improve performance.
In short, a proxy server provides a layer of separation between clients and
servers, offering a variety of benefits related to privacy, security,
performance, and network management.
---
## Reverse Proxy
A reverse proxy sits in front of one or more backend servers, intercepting
client requests and forwarding them to the appropriate server. It acts as a
gateway, but unlike a regular proxy (which protects clients), a reverse proxy
protects the servers. Clients connect to the reverse proxy, which then handles
the connection to the actual servers.
Here's a breakdown:
* **Server Protection:** Reverse proxies shield backend servers from direct
exposure to the internet, enhancing security by preventing direct attacks.
* **Load Balancing:** They can distribute client traffic across multiple
servers, preventing any single server from becoming overloaded.
* **Caching:** Reverse proxies can cache content, reducing the load on backend
servers and speeding up response times for clients.
* **SSL Termination:** They can handle SSL encryption and decryption, offloading
this task from the backend servers.
* **URL Rewriting:** Reverse proxies can modify URLs, making them more
user-friendly or hiding the internal structure of the backend servers.
In short, a reverse proxy acts as a gatekeeper for backend servers, providing a
range of benefits related to security, performance, scalability, and
flexibility. It's a common component in modern web architectures.
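As a sketch of the idea, an nginx server block that fronts a single backend service *(the server name and upstream port are placeholders, not my actual config)*:

``` nginx
# nginx reverse-proxy sketch (placeholder names/ports)
server {
    listen 80;
    server_name app.example.home;

    location / {
        proxy_pass http://127.0.0.1:8096;                             # backend service
        proxy_set_header Host $host;                                  # preserve the requested hostname
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # client IP chain
    }
}
```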
## SSL
Secure Sockets Layer (SSL) is a security protocol that creates an encrypted
connection between a web server and a web browser. This ensures that any data
exchanged between them remains private and secure. Think of it as a secret
tunnel that prevents eavesdropping and tampering.
Here's a breakdown:
* **Encryption:** SSL encrypts the data transmitted between the browser and the
server, making it unreadable to anyone who might try to intercept it. This
protects sensitive information like passwords, credit card numbers, and
personal details.
* **Authentication:** SSL verifies the identity of the website, assuring users
that they are connecting to the legitimate website and not a fake one. This
helps prevent phishing attacks.
* **Data Integrity:** SSL ensures that the data transmitted between the browser
and the server is not altered or corrupted during transit. This guarantees
that the information received is exactly what was sent.
In short, SSL is a crucial security technology that protects online
communication and helps build trust between websites and their users. It's the
foundation of secure online transactions and a vital component of a safe
internet experience.


@@ -0,0 +1,108 @@
---
title: pihole
description: pihole
author: wompmacho
date: 2024-04-27T23:53:26.059Z
lastmod: 2025-02-08
editor: markdown
showHero: false # needed to hide "hero banner"
---
Pi-hole is a DNS sinkhole that protects your devices from unwanted content,
without installing any client-side software. It is useful for blocking ad
services at the DNS level. It uses lists of known ad services stored on GitHub,
and you can add your own. It can also operate as an internal DNS resolver and
DHCP server.
## Pihole Setup
If you have a Raspberry Pi or another device, it's super easy to get things
going:
- [pihole setup](https://github.com/pi-hole/pi-hole/?tab=readme-ov-file#one-step-automated-install)
Any Debian-based system should be able to get things going quickly. Then all you
need to do is set your devices to use your pihole as the **primary** DNS server.
**Debian based one-step install**
``` bash
curl -sSL https://install.pi-hole.net | bash
```
> [!NOTE]
> I find this to be a little flaky when it comes to DNS; oftentimes the OS will need a reboot and its cache cleared in order to actually start using pihole DNS.
>
> Browsers also store DNS info, so many things can conflict before your DNS
> switches over. I find that using pihole as the primary DHCP server forces your
> devices to use the correct DNS server and fixes a lot of problems.
>
> Also keep in mind that **IPv6** can interfere if you are like me and have an
> ISP that tries to force their DNS
---
## Setup on Proxmox VM
My Pi-hole runs as a Debian GNU/Linux 12 (bookworm) virtual machine on Proxmox.
I use it as an internal DNS resolver & DHCP server, which makes DNS much easier
in my case, since my internet provider tries to force me to use their DNS
servers. This setup is a little weird, and in order to get everything to work a
couple of extra steps are needed.
You will need to set up your Pi-hole as a DHCP server, disable the existing DHCP
server on the router, reserve static IP addresses for Proxmox and Pi-hole so
they can reach the gateway, set Pi-hole as the primary DNS server on Proxmox,
switch Proxmox to DHCP rather than a static IP, and finally set the Pi-hole VM
to boot first so that devices connecting to the gateway are issued IP addresses
by the Pi-hole.
> [!WARNING]
> If you are using Pi-hole for DHCP / DNS, keep in mind that if the device
> hosting your Pi-hole server goes down, so will your DNS / DHCP. This may
> prevent you from connecting to your network until you re-enable another DHCP
> server, such as the one in your router.
- reserve an IP address in the router/gateway for the Proxmox server & Pi-hole
![reserved_ip_example.png](reserved_ip_example.png)
- set pihole to enable DHCP
![pihole_dhcp_example.png](pihole_dhcp_example.png)
- set proxmox to get DHCP on boot rather than Static IP which is default
``` bash
root@laptop-proxmox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface enp3s0 inet manual
auto vmbr0
iface vmbr0 inet dhcp
bridge-ports enp3s0
bridge-stp off
bridge-fd 0
iface wlp4s0 inet manual
```
- set proxmox default DNS server to pihole reserved address
![proxmox_dns_example.png](proxmox_dns_example.png)
- set pihole to automatically start on boot with highest priority boot order
![pihole_boot_order.png](pihole_boot_order.png)
- set static IP and gateway info in the Pi-hole networking configuration
![pihole_network_settings_proxmox_example.png](pihole_network_settings_proxmox_example.png)
- Disable DHCP server in gateway / router settings
![router_disable_dhcp_example.png](router_disable_dhcp_example.png)
- If router has option to set default DNS, set to pihole reserved address


View File

@@ -0,0 +1,72 @@
---
title: pterodactyl
description: pterodactyl
author: wompmacho
date: 2024-05-04T20:00:25.298Z
lastmod: 2025-02-08
showHero: false # needed to hide "hero banner"
---
## What is Pterodactyl?
[Pterodactyl](https://pterodactyl.io/) is a free, open-source game server management panel built with PHP,
React, and Go. Designed with security in mind, Pterodactyl runs all game servers
in isolated Docker containers while exposing a beautiful and intuitive UI to end
users.
## pterodactyl & ssl
SSL with Pterodactyl is really annoying if you are running it behind a reverse
proxy (nginx). It might be easier to run this on its own server so you can just
use the default port 80 for web; a reverse proxy is designed for normal web
traffic, not game servers.
If you are annoying like me and wanna put things on a single server and save
money... here is what you can do.
- [Creating SSL
Certificates](https://pterodactyl.io/tutorials/creating_ssl_certificates.html#method-2:-acme.sh-(using-cloudflare-api))
- [Youtube Guide](https://www.youtube.com/watch?v=cbr8tddvAWw)
- [Webserver
Configuration](https://pterodactyl.io/panel/1.0/webserver_configuration.html#nginx-with-ssl)
- [NGINX Specific
Configuration](https://pterodactyl.io/panel/1.0/additional_configuration.html#nginx-specific-configuration)
```bash
# https://pterodactyl.io/panel/0.7/configuration.html
# idk... couldn't get it to work
# OpenSSL Self-Signed Certificate Command:
openssl req -sha256 -addext "subjectAltName = DNS:games.local" -newkey rsa:4096 -nodes -keyout privkeyselfsigned.pem -x509 -days 3650 -out fullchainselfsigned.pem
# nginx-proxy-manager with cloudflare ssl cert setup
# proxy side should be http
# do not force ssl on cert side
# go to http after getting to the site
# .env file
/var/www/pterodactyl/.env
APP_URL="http://domain"
TRUSTED_PROXIES=*
# you don't have to do this - i'd rather not
PTERODACTYL_TELEMETRY_ENABLED=false
RECAPTCHA_ENABLED=false
# config.yml
/etc/pterodactyl/config.yml
# use auto config remote: http:
# nginx pterodactyl.conf
/etc/nginx/sites-enabled/pterodactyl.conf
# add to proxy-manager special settings
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_buffering off;
proxy_request_buffering off;
sudo systemctl restart nginx && systemctl restart wings
```

View File

@@ -0,0 +1,4 @@
---
title: 'Projects'
layout: "card"
---

View File

@@ -0,0 +1,52 @@
---
title: audiobookshelf
description: Quick overview of audiobookshelf and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is Audiobookshelf?
`Audiobookshelf` is a self-hosted, open-source server designed to manage and stream your personal audiobook and podcast collections. It acts as a private, web-based alternative to services like Audible, giving you full control over your media files.
## Docker Compose Example
> [!IMPORTANT] Audiobookshelf requires a websocket connection.
``` yaml
# audiobookshelf - https://github.com/advplyr/audiobookshelf/blob/master/docker-compose.yml
---
version: "3.7"
services:
audiobookshelf:
container_name: audiobookshelf
image: ghcr.io/advplyr/audiobookshelf:latest
# ABS runs on port 13378 by default. If you want to change
# the port, only change the external port, not the internal port
ports:
- 13378:80
volumes:
# These volumes are needed to keep your library persistent
# and allow media to be accessed by the ABS server.
# The path to the left of the colon is the path on your computer,
# and the path to the right of the colon is where the data is
# available to ABS in Docker.
# You can change these media directories or add as many as you want
- /mnt/store/MediaServer/Audio_Books:/audiobooks
- /mnt/store/MediaServer/podcasts:/podcasts
- /mnt/store/app/audiobookshelf/metadata:/metadata
# The config directory needs to be on the same physical machine
# you are running ABS on
- /app/audiobookshelf/config:/config
restart: unless-stopped
# You can use the following environment variable to run the ABS
# docker container as a specific user. You will need to change
# the UID and GID to the correct values for your user.
#environment:
# - user=1000:1000
```

View File

@@ -0,0 +1,53 @@
---
title: code-server
description: Quick overview of code-server and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is code-server?
`code-server` is a self-hosted instance of Visual Studio Code that runs on a remote server and is accessible directly through your web browser. It effectively turns any machine with a CPU and RAM into a fully functional cloud-based development environment.
## Docker Compose Example
``` yaml
# code-server -- https://hub.docker.com/r/linuxserver/code-server
---
services:
code-server:
image: lscr.io/linuxserver/code-server:latest
container_name: code-server
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- PASSWORD=password #optional
- HASHED_PASSWORD= #optional
- SUDO_PASSWORD=password #optional
- SUDO_PASSWORD_HASH= #optional
- PROXY_DOMAIN=code-server.domain.com #optional
- DEFAULT_WORKSPACE=/apps #optional
volumes:
- code-server-nfs:/config
- apps:/apps
ports:
- 8443:8443
restart: unless-stopped
volumes:
code-server-nfs:
name: code-server-nfs
driver_opts:
type: nfs
o: addr=truenas,nolock,soft,rw
device: :/mnt/store/vault/app/code-server
apps:
name: apps
driver_opts:
type: nfs
o: addr=truenas,nolock,soft,rw
device: :/mnt/store/vault/app/
```

View File

@@ -0,0 +1,107 @@
---
title: frigate
description: frigate dvr
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is Frigate?
`Frigate` is a complete, local NVR designed for Home Assistant with AI object detection. It uses OpenCV and TensorFlow to perform real-time object detection locally for IP cameras.
```yaml
# frigate - https://docs.frigate.video/frigate/installation/
---
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
image: ghcr.io/blakeblackshear/frigate:stable
shm_size: "64mb" # update for your cameras based on calculation above
volumes:
- /etc/localtime:/etc/localtime:ro
- /app/frigate/config:/config
- /mnt/store/app/frigate:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554" # RTSP feeds
- "8555:8555/tcp" # WebRTC over tcp
- "8555:8555/udp" # WebRTC over udp
environment:
FRIGATE_RTSP_PASSWORD: "password"
```
Frigate requires a `config.yml` in the `/config` volume.
## My current config
I'm not currently running the optimized setup for this, just testing things out.
```yaml
mqtt:
enabled: false
cameras:
front:
birdseye:
order: 1
ffmpeg:
inputs:
- path: rtsp://USERNAME:PASSWORD@IPADDR:554/path_to_stream
roles:
- detect
- record
objects:
track:
- person
detect:
width: 1920
height: 1080
record:
sync_recordings: True
enabled: True
retain:
days: 7
mode: motion
events:
# Optional: Number of seconds before the event to include (default: shown below)
pre_capture: 5
# Optional: Number of seconds after the event to include (default: shown below)
post_capture: 5
detectors:
cpu1:
type: cpu
num_threads: 3
# Include all cameras by default in Birdseye view
birdseye:
enabled: True
mode: continuous
width: 1280
height: 720
quality: 8
inactivity_threshold: 30
```
## Proxy fixes
For an nginx proxy, add this to the proxy host's advanced options:
```nginx
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_buffering off;
proxy_request_buffering off;
```

View File

@@ -0,0 +1,38 @@
---
title: homarr
description: homarr
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is homarr?
`homarr` is a nice little dashboard app that can be used to organize your homelab with a simple webpage interface. Great for quick links, weather and time updates, seeing if a server is down, monitoring your webcams or torrents, etc.
> [!NOTE]
> Personally I have moved on to [homepage](https://gethomepage.dev/). It looks a little nicer in my opinion; I'm not the biggest fan of the homarr interface, though I may try again after some updates.
## Docker Compose Example
> [!NOTE]
> For docker support, extend a volume to the docker.sock
> - `/var/run/docker.sock:/var/run/docker.sock`
```yaml
# homarr - docker compose
---
version: '3'
services:
homarr:
container_name: homarr
image: ghcr.io/ajnart/homarr:latest
restart: unless-stopped
volumes:
- /app/homarr/configs:/app/data/configs
- /app/homarr/icons:/app/public/icons
- /var/run/docker.sock:/var/run/docker.sock
ports:
- '7575:7575'
```

View File

@@ -0,0 +1,31 @@
---
title: homepage
description: homepage
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is homepage?
`homepage` is an open-source, highly customizable, and static site-based dashboard designed to organize your self-hosted services into a beautiful, central hub.
Unlike other dashboards that require complex databases or heavy backend services, homepage runs as a lightweight, Docker-based container that reads a single configuration file (YAML).
## Docker Compose Example
``` yaml
# homepage - docker compose
---
services:
homepage:
container_name: homepage
image: ghcr.io/gethomepage/homepage:latest
restart: unless-stopped
volumes:
- /mnt/store/app/homepage/configs:/app/config # Make sure your local config directory exists
- /var/run/docker.sock:/var/run/docker.sock # (optional) For docker integrations
ports:
- 7676:3000
```
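Since everything is driven by that YAML config, here is a hypothetical taste of a `services.yaml` in the mounted config directory (group names, services, and addresses below are made-up examples, not my actual setup):

```yaml
# services.yaml - lives in the mounted config directory
# (all entries below are illustrative placeholders)
- Media:
    - Jellyfin:
        href: http://10.0.0.5:8096
        description: movies & tv
    - Audiobookshelf:
        href: http://10.0.0.5:13378
        description: audiobooks
```

Each top-level key becomes a group on the dashboard, and each entry under it becomes a tile.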

View File

@@ -0,0 +1,52 @@
---
title: jellyfin
description: jellyfin
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is jellyfin?
`Jellyfin` is a media server. I like it because it's simple, free, doesn't require
online accounts, and lets you serve up your movies, TV shows, and music. It is
very similar to apps like Plex and Emby. You can manage your media and
auto-download things like episode names, artwork, etc. It has plugin support and
is basically trying to be a better open-source version of Plex. It has
apps/support for Android, Google TVs, Firestick, iPhone, etc.
## Docker Compose Example
```yaml
# Jellyfin - docker compose
---
services:
jellyfin:
container_name: jellyfin
image: lscr.io/linuxserver/jellyfin:latest
environment:
- PUID=0
- PGID=0
- TZ=America/New_York
ports:
- 8096:8096
- 8920:8920 #optional https
- 7359:7359/udp #optional discovery
- 1900:1900/udp #optional discovery
volumes:
- /app/jellyfin:/config # config for your jellyfin
- /mnt/store/:/data/store # where your media lives (movies/tv etc.)
restart: unless-stopped
```
> [!NOTE]
> Recommend storing the metadata & cache on the NAS and not on the OS docker
> host. The files start to get LARGE for Jellyfin due to the massive amount of
> metadata stored for media. Set this under the Jellyfin `general` settings
> after Jellyfin is running.
Once your server is running, head over to your opened port
(docker_container_ip:8096) to start the setup process. When adding libraries,
select the content type, set the display name, and then click the FOLDERS +
option. This is where you will select the path to your media that you set up in
the volumes.

View File

@@ -0,0 +1,31 @@
---
title: jellyseer
description: jellyseer
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is Jellyseerr?
`Jellyseerr` is a free, open-source, and highly intuitive media request management tool designed for the Jellyfin (and Plex/Emby) ecosystem. It essentially acts as a "gateway" between your users and your media server.
## Docker Compose Example
``` yaml
# jellyseerr - docker compose
---
services:
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
environment:
- LOG_LEVEL=debug
- TZ=America/New_York
ports:
- 5055:5055
volumes:
- /mnt/store/app/jellyseerr/config:/app/config
restart: unless-stopped
```


View File

@@ -0,0 +1,32 @@
---
title: Lab Setup
description: My Home Network and Projects
author: wompmacho
date: 2023-07-04T05:33:31.158Z
lastmod: 2026-03-08
showHero: false # needed to hide "hero banner"
---
---
## 2024 Home Lab
![sc_apartment_network_diagram_2024.png](sc_apartment_network_diagram_2024.png)
---
## 2023 Home Lab
![sc_apartment_network_diagram_nov_2023.png](sc_apartment_network_diagram_nov_2023.png)
---
## 2020 Home Lab
![nc_home_network_diagram_white_background_2020.jpg](nc_home_network_diagram_white_background_2020.jpg)
---
## 2019 Home Lab
![atl_home_network_2019.jpg](atl_home_network_2019.jpg)
---


View File

@@ -0,0 +1,45 @@
---
title: linkstacks
description: linkstacks
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is linkstacks?
`Linkstacks` is a nice little linktr.ee clone that allows you to set up a simple
link page. It can also be expanded to add multiple users and you can host
multiple people's pages with their own user accounts and everything.
## Docker Compose Example
``` yaml
# Linkstacks - docker compose
---
version: "3.8"
services:
linkstack:
container_name: 'linkstack'
hostname: 'linkstack'
image: 'linkstackorg/linkstack:latest'
user: '0:0'
environment:
TZ: 'America/New_York'
SERVER_ADMIN: 'SERVER_ADMIN_EMAIL'
HTTP_SERVER_NAME: 'HTTP_DOMAIN_NAME'
HTTPS_SERVER_NAME: 'HTTPS_DOMAIN_NAME'
LOG_LEVEL: 'info'
PHP_MEMORY_LIMIT: '256M'
UPLOAD_MAX_FILESIZE: '8M'
volumes:
- 'linkstack_data:/htdocs'
#- '/app/linkstack/:/htdocs'
ports:
- '8190:443'
restart: unless-stopped
volumes:
linkstack_data:
```

View File

@@ -0,0 +1,49 @@
---
title: mkdocs
description: mkdocs
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is mkdocs?
MkDocs is a fast, simple, and extensible static site generator geared specifically toward building project documentation. It relies heavily on Markdown files, which makes it incredibly accessible for developers who want to write docs as easily as they write code.
## Docker Compose Example
Running mkdocs with [material](https://squidfunk.github.io/mkdocs-material/)
theme and plugins built in.
> [!INFO]
> There is some setup of folders and things that is not automatic, so it won't
> work straight out of the box.
```yaml
# mkdocs -- https://squidfunk.github.io/mkdocs-material/
version: '3'
services:
mkdocs:
container_name: 'mkdocs'
restart: unless-stopped
image: squidfunk/mkdocs-material
environment:
- PUID=1000
- PGID=1000
volumes:
#- /mnt/store/app/mkdocs/:/docs
- docs_nfs:/docs
stdin_open: true
tty: true
ports:
- "9896:8000"
volumes:
docs_nfs:
name: docs_nfs
driver_opts:
type: nfs
o: addr=truenas,nolock,soft,ro
device: :/mnt/store/vault/app/mkdocs
```
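As a sketch of that manual setup: the mounted `/docs` volume needs at minimum a `mkdocs.yml` and a `docs/index.md`. A hypothetical minimal config (site name and nav entries are placeholders):

```yaml
# mkdocs.yml - placed at the root of the mounted /docs volume
site_name: My Docs
theme:
  name: material
nav:
  - Home: index.md
```

With that in place, the container serves the rendered site on its internal port 8000.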


View File

@@ -0,0 +1,161 @@
---
title: nginx-proxy-manager
description: nginx-proxy-manager
author: wompmacho
date: 2025-02-04
lastmod: 2025-02-04
showHero: false # needed to hide "hero banner"
---
## What is nginx-proxy-manager?
Nginx-proxy-manager is a simplified GUI for managing an nginx server
configuration. Nginx is a reverse proxy server.
A reverse proxy server is a type of proxy server that typically sits behind the
firewall in a private network and directs client requests to the appropriate
backend server. Nginx is a very common go-to. Nginx-proxy-manager is a nice GUI
version that has some built-in tools, like handling SSL certificates with Let's
Encrypt. Nginx can provide load balancing, web acceleration, security, and
anonymity for servers.
Personally I use nginx to proxy all my traffic to my dedicated servers so that I
do not have to expose local hosts via port forwarding. This also allows me to do
some extra encryption along the way and add additional security via access lists
where I see fit. I can also reuse ports, which saves a lot of time for
configurations.
## Docker Compose Example
```yaml
# nginx-proxy-manager - docker compose
---
version: "3.8"
services:
app:
container_name: nginx-proxy-manager
image: "jc21/nginx-proxy-manager:latest"
restart: unless-stopped
ports:
- "80:80"
- "81:81"
- "443:443"
volumes:
- ./data:/data
- ./letsencrypt:/etc/letsencrypt
```
Nginx gives you that great routing to your internal networked servers. Also
helps you set up your DNS both inside and outside the network. Can be a little
confusing at first.
First you will need a route you want to point to. In this case I will use this
site.
I want to be able to type in **wiki.wompmacho.com** and arrive here rather than
remembering my IP address and having to set up multiple complicated port
forwards in my router. Instead nginx allows me to open one port - 80 and direct
all traffic to nginx. Then based on some rules I have set up I can point that
traffic to my internal server. For this I will need a domain name server (DNS)
to point to my external IP.
### DNS
A domain name server (DNS) allows me to make a name that can be easily looked up
and point traffic to an Internet Protocol (IP) address that a server can easily
understand.
When I type a name into a browser, the browser asks my computer's DNS cache
_where is google.com_; when it's not found, it asks my router _where is
google.com_, which then asks the DNS server it is pointed to (typically your
ISP's), which asks the DNS server the ISP is pointing to... and so on, until
eventually one of the DNS servers has the information about _google.com_. It
can then retrieve the IP address of that server and send that information back
down the line, with each server adding it to its own cache as it goes so it
does not have to keep looking up this information. This allows the browser to
make requests to that server directly.
In order to make my DNS name known so that people can find it on the internet
easily, we will have to purchase the name from a host of an authoritative DNS
server. An authoritative DNS server will not cache the info, but instead acts
as the primary resource for a DNS name's configuration so other DNS servers can
ask for that resource.
In this example I have purchased **wompmacho.com** from cloudflare who operate
as a registrar and facilitates purchasing that name from a higher authoritative
registry. Allowing me to point my external IP address to this address.
Once I have a DNS name I can use my registrar (cloudflare) to point that name to
my external IP address (my router's IP address).
> [!INFO] wompmacho.com <> 175.222.222.222
### Port forwarding
This traffic will then arrive at my router, which _should_ typically be set up
to block incoming requests. In order to allow a request to reach the server
hosting my site, I will need to open a port (80) and allow traffic through my
router's firewall to my docker container that is hosting nginx-proxy-manager.
Nginx will then redirect this again to my docker container for my site.
### A records
For my scenario my dns name is **wompmacho.com** but if I want to have multiple
sites at my IP address I will need to be able to differentiate them. To do this
I will use an A record. This allows me to split up my domain with multiple sub
domains.
- wiki.wompmacho.com
- **subdomain**.wompmacho.com
### Setting up a proxy
This will point traffic to the same domain (wompmacho.com) but based on the
sub-domain nginx will be able to direct and load balance traffic to my internal
server hosting the wiki - in this case also my docker container. The wiki is
hosted on a different port. We can point this proxy to that port.
![nginx_proxy_host_setup.png](nginx_proxy_host_setup.png#center)
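Under the hood, a proxy host like this boils down to an nginx server block roughly like the sketch below. The internal IP and port are placeholders, and the files nginx-proxy-manager actually generates contain quite a bit more than this:

```nginx
# Rough hand-written equivalent of one proxy host entry
server {
    listen 80;
    server_name wiki.wompmacho.com;

    location / {
        # forward to the internal container hosting the wiki
        # (10.0.0.5:3000 is a placeholder address)
        proxy_pass http://10.0.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The GUI is essentially filling in `server_name` and `proxy_pass` for you and reloading nginx.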
### Cloudflare DNS Proxy
An example of a dns service is Cloudflare. I switched over to cloudflare when
google sold their awesome DNS. I have been loving it since the switch, there is
a lot of info out there on services they offer and how to set things up. The
biggest reason I switched over to cloudflare is their dns proxy. This allows my
home IP to be proxied behind cloudflare services, which helps hide my server's
location. This also allows me to utilize their services to block things like
botnet attacks.
![cloudflare_ssl_example.png](cloudflare_ssl_example.png#center)
#### SSL encryption
Secure Sockets Layer (SSL) is a security protocol that provides privacy,
authentication, and integrity to Internet communications. SSL eventually evolved
into Transport Layer Security (TLS). Using Nginx-proxy-manager we can connect
our cloudflare DNS to our nginx server using SSL encryption. This is what that
lock and **https** indicate on your browser: you are using a secured and
verified connection to the server. This helps stop man-in-the-middle attacks,
preventing people from spoofing the connection and spying on you.
![ssl_connection_lock.png](ssl_connection_lock.png#center)
We do this by adding a cloudflare certificate to nginx proxy manager and then
setting up our proxy host to use this certificate on the SSL tab.
![cloudflare_ssl_setup_example.png](cloudflare_ssl_setup_example.png#center)
![nginx_ssl_setup_example.png](nginx_ssl_setup_example.png#center)
> [!INFO] Note this is only for a secure connection between **nginx <-> cloudflare**
The details page refers to your internal setup, or where nginx should point the
DNS to.
> [!INFO] **origin server <-> nginx**
Use https here only if you have SSL set up on your origin server and your
server is set up to accept https, otherwise you may get 502 bad gateway errors.
![nginx_ssl_internal_scheme_example.png](nginx_ssl_internal_scheme_example.png#center)


View File

@@ -0,0 +1,45 @@
---
title: portainer
description: Quick overview of portainer and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is Portainer?
`Portainer` is a lightweight, powerful container management platform that provides a graphical user interface (GUI) to manage your Docker, Docker Swarm, and Kubernetes environments. It essentially sits on top of your container runtime, allowing you to control complex infrastructure without needing to master the command line.
Portainer provides a very easy-to-understand user interface for deploying other
docker containers. The great thing is, Portainer is a container itself, so it
runs automatically following setup and gives you a nice GUI for your docker
environment via a web browser. I particularly love its dashboard because you
get a great snapshot of your running containers, can easily restart and monitor
them, and most importantly can edit and deploy docker-compose files via the
"stacks" page.
- Portainer CE is the free version
- [Install
Guide](https://docs.portainer.io/start/install-ce/server/docker/linux)
- Create the volume that Portainer Server will use to store its database
``` bash
docker volume create portainer_data
```
- Download and install the Portainer Server container
``` bash
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
- Verify the container is running with docker ps
``` bash
root@server:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de5b28eb2fa9 portainer/portainer-ce:latest "/portainer" 2 weeks ago Up 9 days 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp portainer
```
- Navigate to https://HOST_IP_ADDRESS:9443 and create a user so you can log in
to the Portainer web interface.

View File

@@ -0,0 +1,73 @@
---
title: qBittorrent
description: Quick overview of qBittorrent and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is qBittorrent?
`qBittorrent` is an awesome, simple app; this image lets you run the classic
qBittorrent client in a docker container. I use qBittorrent because I can set
it up with a VPN that only connects via the container directly, effectively
separating it from the rest of my network and allowing me to continue as normal
while it is downloading. It will automatically stop the network if the VPN is
not functioning correctly.
## Docker Compose Example
``` yaml
# qbittorrentvpn - docker compose
# https://hub.docker.com/r/dyonr/qbittorrentvpn
---
version: "2"
services:
  qbittorrentvpn:
    container_name: qbittorrentvpn
    privileged: true
    image: dyonr/qbittorrentvpn
    environment:
      - VPN_ENABLED=true
      - VPN_USERNAME=VPN_USERNAME
      - VPN_PASSWORD=VPN_PASSWORD
      - LAN_NETWORK=10.0.0.0/24
      - WEBUI_PORT_ENV=8080
      - INCOMING_PORT_ENV=8999
    ports:
      - 8080:8080
      - 8999:8999
      - 8999:8999/udp
    volumes:
      - /app/QBittorrent/config:/config
      - /mnt/store/MediaServer/torrent:/downloads
    restart: unless-stopped
```
To set up the VPN you will need to have an existing account with a VPN service.
Username & Password for the vpn will be provided as a key by your vpn service.
In my case I use Surfshark and have to go log into my account, navigate to the
linux setup page and grab my generated Username key and Password key there.
A credentials file on my docker host was generated by QBittorrent when running
the first time.
```bash
# download all available server configurations
sudo wget https://my.surfshark.com/vpn/api/v1/server/configurations

# copy the server conf you want to use into the config folder:
#   /app/QBittorrent/config/openvpn
```
Once you restart your qbittorrentvpn docker container you can test your VPN
service with a torrent leak test. Use the + add torrent link button to download
the torrent and test that your VPN service is connected and working.
- [torrent-leak-test](https://bash.ws/torrent-leak-test)
### Magnet links
Use magnet link and item hash to avoid logins
```
magnet:?xt=urn:btih:${HASH}
```
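As a quick sketch of that substitution (the hash below is a made-up placeholder, not a real torrent):

```bash
# Build a magnet URI from a bare info hash (placeholder value)
HASH="0123456789abcdef0123456789abcdef01234567"
printf 'magnet:?xt=urn:btih:%s\n' "$HASH"
```

Paste the resulting URI into the + add torrent link dialog.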

View File

@@ -0,0 +1,33 @@
---
title: uptime-kuma
description: Quick overview of uptime-kuma and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is uptime-kuma?
`uptime-kuma` is a neat little web monitoring application. Lotta dope things right
out of the box, very gui / user friendly. Pretty much just add the stack, update
the dir for config - and it works. Integrates with discord webhooks, great easy
status page and dashboard.
## Docker Compose Example
```yaml
# uptime-kuma - docker compose
---
# https://github.com/louislam/uptime-kuma/wiki/%F0%9F%94%A7-How-to-Install
version: '3.3'
services:
uptime-kuma:
container_name: uptime-kuma
image: louislam/uptime-kuma:1
volumes:
- /app/uptime-kuma/data:/app/data
ports:
- 3001:3001 # <Host Port>:<Container Port>
restart: always
```

View File

@@ -0,0 +1,42 @@
---
title: vaultwarden
description: vaultwarden
author: wompmacho
date: 2025-02-23
lastmod: 2025-02-23
showHero: false # needed to hide "hero banner"
---
## What is vaultwarden?
`vaultwarden` is an alternative server implementation of the Bitwarden Client
API, written in Rust and compatible with official Bitwarden clients, perfect
for self-hosted deployment where running the official resource-heavy service
might not be ideal.
## Docker Compose Example
```yaml
# vaultwarden -- https://github.com/dani-garcia/vaultwarden
---
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
environment:
DOMAIN: "https://vaultwarden.wompmacho.com"
ROCKET_PORT: 80
ROCKET_ENV: production
volumes:
- vaultwarden-mount:/data/
ports:
- '9998:80'
- '9999:443'
volumes:
vaultwarden-mount:
name: vaultwarden-mount
driver_opts:
type: nfs
o: addr=truenas,nolock,soft,rw
device: :/mnt/store/vault/app/vaultwarden
```

View File

@@ -0,0 +1,35 @@
---
title: webtop
description: Quick overview of webtop and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is webtop?
`webtop` is an awesome mini Linux environment I can use as a secure remote
web client for my home network.
## Docker Compose Example
``` yaml
# webtop - https://docs.linuxserver.io/images/docker-webtop/#lossless-mode
---
services:
webtop:
image: lscr.io/linuxserver/webtop:latest
container_name: webtop
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- TITLE=Webtop #optional
volumes:
- /app/webtop/data:/config
ports:
- 7978:3000
- 7979:3001
restart: unless-stopped
```

---
title: wikijs
description: Quick overview of wikijs and setup
date: 2025-02-04
lastmod: 2025-02-04
author: wompmacho
showHero: false # needed to hide "hero banner"
---
## What is wikijs?
Wiki.js is a powerful, modern, and open-source wiki application built on Node.js. It is designed to be the central knowledge base for your home lab or professional projects, replacing traditional, clunky wiki platforms with a sleek, intuitive interface.
I like it because of the useful Markdown editor that lets you neatly organize
links, code, etc. I can also back up the database to my NAS as tidy MD files, so
nothing gets lost if something is corrupted.
## Docker Compose Example
```yaml
# wikijs - docker compose
# https://github.com/linuxserver/docker-wikijs
---
version: "3.8" # note: the top-level 'version' key is obsolete in Compose v2 and can be removed
services:
wikijs:
image: lscr.io/linuxserver/wikijs:latest
container_name: wikijs
environment:
- PUID=0
- PGID=0
- TZ=Etc/UTC
- DB_TYPE=sqlite #optional
- DB_HOST= #optional
- DB_PORT= #optional
- DB_NAME= #optional
- DB_USER= #optional
- DB_PASS= #optional
volumes:
- /app/wiki/config:/config
- /app/wiki/data:/data
ports:
- 3000:3000
restart: unless-stopped
```
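The off-box backup mentioned above can be as simple as tarring the two bind mounts and copying the archive to the NAS. A hypothetical sketch (the `/tmp/wiki-demo` path is a stand-in for `/app/wiki` on the host, and the demo directories are created only so the sketch runs as-is):

```shell
# Hypothetical backup sketch: archive the wikijs bind mounts so config
# and data can be copied off to a NAS. Paths here are examples only.
SRC="${WIKI_SRC:-/tmp/wiki-demo}"              # would be /app/wiki on the real host
DEST="/tmp/wikijs-backup-$(date +%F).tar.gz"
mkdir -p "$SRC/config" "$SRC/data"             # ensure the demo paths exist
tar -czf "$DEST" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls -lh "$DEST"
```

For consistency you would ideally stop the container (or at least the database writes) before taking the archive.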

---
title: Resume
description: job history & resume
author: wompmacho
date: 2024-07-12T02:40:50.769Z
lastmod: 2025-02-08
tags:
---
{{< rawhtml >}}
<table id="resume-table">
<!----------------------------------Experience--------------------------------->
<tr>
<td colspan="3" class="header header-google-red">Experience</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Special Projects Lead (DT L3) - <span class="google-blue">G</span><span class="google-red">o</span><span class="google-yellow">o</span><span class="google-blue">g</span><span class="google-green">l</span><span class="google-red">e</span></td>
<td class="dates">October 2024 - <span class="google-green">Current</span></td>
</tr>
<tr>
<td colspan="3">
<p>
HwOps Special Projects Lead; Resolve technical incidents and escalations by performing analysis utilizing existing data models or leveraging custom-built data infrastructure to formulate and interpret data to reach specific conclusions and next steps. Develop detailed reports and intuitive dashboards, communicating key insights for data-driven analysis. File bugs against products, documentation, and procedures by documenting desired behavior or steps to reproduce, and driving bugs to resolution. Suggest code-level resolutions for complex issues by leveraging tools, tool development, and effective communication with stakeholders. Identify opportunities to build or enable solutions that improve, support, or empower OMs, Site Leads & DTs to solve issues by using self-service tools and documentation. Foster team growth through mentorship, training course facilitation, collaboration with internal training teams, and technical writing development.
</p>
</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Data Center Technician (DT L2) - <span class="google-blue">G</span><span class="google-red">o</span><span class="google-yellow">o</span><span class="google-blue">g</span><span class="google-green">l</span><span class="google-red">e</span></td>
<td class="dates">July 2023 - October 2024</td>
</tr>
<tr>
<td colspan="3">
<p>
Site Operations hardware maintenance and networking, resolving critical issues and collaborating cross-functionally to address SLO deviations. Built and led an internal escalation team for weekend/holiday support, creating resources and onboarding leaders. Developed and documented new processes, championed project documentation, and contributed to technician hiring, onboarding, and training. Mentored Googlers and facilitated training programs. Transitioned into Leader Role as a Maintenance Lead / Escalation Point of Contact.
</p>
</td>
</tr>
<!---------------------------------Consulting---------------------------------->
<tr>
<td colspan="3" class="header header-google-yellow">Consulting / Freelance / Helping out Family</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Porch Light Properties LLC</td>
<td class="dates">Jun 2020 - Jul 2023</td>
</tr>
<tr>
<td colspan="3">
<p>
Developed and implemented long-term systems, development, and planning strategies, including rebranding initiatives. Served as Hiring Manager, overseeing onboarding, system administration, and policy management. Managed social media, website development/design, SEO, and marketing campaigns (including Facebook Ads). Utilized Google Analytics and oversaw technology/security initiatives and traditional marketing.
</p>
</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Videography, Photography, Film Media, Drone Services</td>
<td class="dates">Jun 2020 - Jan 2023</td>
</tr>
<tr>
<td colspan="3">
<div class="iframe-wrapper" style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<iframe width="560" height="315" src="https://www.youtube.com/embed/-DN8mhOxeKQ?si=jdVsuQoCPdWjUjBp" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Tabora Farm and Winery</td>
<td class="dates">Dec 2019 - May 2020</td>
</tr>
<tr>
<td colspan="3">
<p>
Managed social media, website development/design, and SEO. Oversaw tax and licensing compliance for interstate wine shipments.
</p>
</td>
</tr>
<!---------------------------------Freelance----------------------------------->
<!-- <tr>
<td colspan="3" class="header header-google-green">Freelance</td>
</tr> -->
<!---------------------------------JOBs---------------------------------------->
<tr>
<td colspan="3" class="header header-google-blue">Work Experience Continued</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Operations Engineer - <span class="twitter-blue">Twitter</span> Inc.</td>
<td class="dates">Apr 2015 - Sep 2017</td>
</tr>
<tr>
<td colspan="3">
<p>
Led site operations teams and provided on-call support for multiple data centers/POPs, consistently exceeding SLO goals. Managed, hired, onboarded, and trained operations engineers and staff. Served as a Twitter liaison and brand ambassador, reporting on emerging technologies. Provided on-site support to engineering teams, proactively monitored services, and managed projects including repairs, decommissioning, upgrades, installations, networking, and maintenance.
</p>
</td>
</tr>
<tr class="section-header">
<td colspan="2" class="role">Operations Technician (OTA/OA) - <span class="google-blue">G</span><span class="google-red">o</span><span class="google-yellow">o</span><span class="google-blue">g</span><span class="google-green">l</span><span class="google-red">e</span></td>
<td class="dates">Nov 2012 - Mar 2015</td>
</tr>
<tr>
<td colspan="3">
<p>
Supported multiple sites on critical infrastructure projects, including server repairs, hardware qualifications, QA, NPI, HAT, disk sanitization, project management, decommissioning, upgrades, and backup library maintenance. Collaborated effectively to maintain Google's infrastructure and ensure operational excellence.
</p>
</td>
</tr>
<!---------------------------------Education----------------------------------->
<tr>
<td colspan="3" class="header header-google-red">Education</td>
</tr>
<tr class="section-header">
<td colspan="2" class="">Pennridge High School</td>
<td class="dates">2004 - 2009</td>
</tr>
<tr class="section-header">
<td colspan="2" class="">Georgia State University - Computer Science</td>
<td class="dates">2012 - 2017</td>
</tr>
<!---------------------------------Skills & CERTS------------------------------>
<tr>
<td colspan="2" class="header header-google-yellow">Skills</td>
<td colspan="1" class="header header-google-green">Certificates</td>
</tr>
<tr>
<td>
<ul>
<li>Leadership, Mentoring</li>
<li>UNIX / Linux / OS</li>
<li>Networking, TCP/IP, DNS, DHCP</li>
<li>Technical Writing & Documentation</li>
<li>SQL, HTML, CSS, JS</li>
<li>Java, Golang, Shell Scripting</li>
<li>Docker, VMs, Baremetal</li>
</ul>
</td>
<td>
</td>
<td>
<ul>
<li>LPIC-1 - Linux Professional Institute</li>
<li>SUSE Certified Linux Administrator (SUSE CLA)</li>
<li>Small Unmanned Aircraft System (Part 107)</li>
</ul>
</td>
</tr>
</table>
<div class="center">
{{< button href="me/2025_Resume_Michael_Braicu.pdf" target="_self" >}}
PDF Download
{{< /button >}}
</div>
{{< /rawhtml >}}

---
title: "Search"
layout: "search"
summary: "search"
placeholder: "Search the wiki..."
---

---
draft: true
title: 'Stream'
---

---
description: stream stuffs
date: 2024-07-12T03:11:33.093Z
lastmod: 2025-02-02
#tags: kick, livestream, twitch, youtube
author: wompmacho
title: Gear
---
This has changed a bit... I will update later...
## Camera Gear
- [Sony Alpha a6400](https://amzn.to/38FDUjk)
- [Sigma 16mm f/1.4](https://amzn.to/3LzPKdj)
- [Gonine AC-PW20 NP-FW50 Dummy Battery](https://amzn.to/3vAaz30)
- [Elgato Cam Link 4K](https://amzn.to/3FdqOWU)
- [Pixel Desk Camera Mount Stand](https://amzn.to/3F5XG3G)
- [Quick Release Plate Camera Tripod Mount](https://amzn.to/3KGBSwU)
- [Micro HDMI to HDMI Adapter Cable](https://amzn.to/38DV2pG)
## Lighting
- [GVM 1000D RGB Led Video Light](https://amzn.to/3vHKCP7)
## Audio
- [Shure SM7b - Vocal Dynamic Microphone](https://amzn.to/3CCxdd0)
- [Shure SM57-LCE Cardioid Dynamic Mic](https://amzn.to/3F60duI)
- [Shure A2WS-BLK - pop filter](https://amzn.to/3krKGMm)
- [PreSonus Revelator io24](https://amzn.to/3P17eBI)
- [Mic Arm Desk Mount](https://amzn.to/3wTBSFN)
- [Brainwavz XL Micro Suede Memory Foam Earpads](https://amzn.to/3kvMDqY)
## IRL Setup
- [Alex Tech 10ft - 1/2 inch Cord Protector](https://amzn.to/3LFExYP)
- [SIM Card Adapter Nano Micro](https://amzn.to/3F3i2um)
- [NDI HDMI Encoder, TBS2603SE NDI](https://amzn.to/3KsQlwc)
- [USB to DC Convert Cable](https://amzn.to/373VBsm)
- [Cudy N300 WiFi Unlocked 4G LTE](https://amzn.to/38DHUAT)
- [Sony FDR-X1000V/W 4K Action Cam](https://amzn.to/3KAVDpg)
- [Backpack Shoulder Chest Strap Clip Mount](https://amzn.to/3F44HBY)
- [Bicycle & Motorcycle Phone Mount](https://amzn.to/38G679C)
- [Portable Charger Power Bank](https://amzn.to/3Kx4M2f)
## Game PC
- [WD_BLACK 2TB SN850 NVMe](https://amzn.to/3vwPvKu)
- [Corsair Vengeance LPX 32GB - Mem](https://amzn.to/3LCr6ZA)
- [Corsair Carbide Series Air 740](https://amzn.to/3F4dzHT)
- [CORSAIR Hydro Series H115i](https://amzn.to/3kveuaR)
- [ASUS ROG STRIX GeForce GTX 1080](https://amzn.to/3LCsFGW)
- [Intel Core i7-7700K](https://amzn.to/3N0NY5B)
- [Asus Z170-A - MOBO](https://amzn.to/3LHCVhq)
## Peripheral
- [Logitech G Pro Wireless Gaming Mouse](https://amzn.to/3vuC3H3)
- [Acer Predator XB272 bmiprz 27"](https://amzn.to/3LD5uMJ)
- [CORSAIR K70 RGB MK.2 RAPIDFIRE](https://amzn.to/38J3Zhi)
- [Speakers - PreSonus Eris](https://amzn.to/3KAwAD1)
- [Single Monitor Desk Mount - Adjustable Gas Spring](https://amzn.to/3LBhluO)
## Stream PC
- [G.SKILL TridentZ Series 16GB](https://amzn.to/3kvPMHp)
- [Elgato Stream Deck](https://amzn.to/3Fd4WuD)
- [AMD YD180XBCAEWOF Ryzen 7 1800X](https://amzn.to/3F468Am)
- [Nvidia GeForce GTX 1080](https://amzn.to/3F468Am)
- [ASUS Prime X370-Pro - MOBO](https://amzn.to/3s65gG7)

---
title: OBS Settings
description: OBS & Live Stream Settings
author: wompmacho
date: 2024-01-25T22:28:32.943Z
lastmod: 2025-02-04
#tags: twitch
---
## OBS
### Stream Settings
```
Ignore stream service settings
Video Encoder:       x264
Output Resolution:   1920x1080
Rate Control:        CBR
Bitrate:             8000 Kbps
Keyframe Interval:   1s
CPU Usage Preset:    Medium
Profile:             None
Tune:                None
x264 Options:        keyint=90
```
### Recording Settings
```
Recording Format:           .mkv
Audio Track:                All
Automatic File Splitting:   split by time, every 240 min
```
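Those two numbers imply a predictable file size per split. A quick back-of-the-envelope sketch (video track only, assuming the 8000 Kbps CBR stream bitrate and ignoring audio and container overhead):

```shell
# Approximate video-only size of one 240-minute recording split
# at a constant 8000 Kbps: bits/sec -> bytes/sec -> bytes/split.
BITRATE_KBPS=8000
SPLIT_MINUTES=240
BYTES=$(( BITRATE_KBPS * 1000 / 8 * SPLIT_MINUTES * 60 ))
echo "~$(( BYTES / 1000000000 )) GB per split"   # prints "~14 GB per split"
```

So each 4-hour `.mkv` lands around 14-15 GB before remuxing, which is worth knowing when sizing the recording drive.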
### Video
```
Common FPS Values
60
```
### Advanced
```
Recording Filename Formatting:   %MM-%DD-%CCYY_%A_%hh-%mm-%p_%FPS
Automatically Remux to MP4
```
## Camera
```
Sony A6400
Stream Resolution:   3840x2160

Specs:
  Max Resolution:    6000 x 4000
  Image Ratio:       1:1, 3:2, 16:9
  Sensor:            25 megapixel CMOS, APS-C (23.5 x 15.6 mm)
  ISO:               Auto, 100-32000 (expands to 102800)
```