Home Network Video Recorder System

A self-hosted NVR project built around Frigate, PoE cameras, reverse proxy access, and an isolated camera network using a second NIC on my homelab server.

Project Overview

This project was built to create a local-first home camera system that does not rely on vendor cloud services for day-to-day use. I wanted a setup that would let me monitor my driveway and entry area, record footage locally, and access the system securely through my own network while away from home.

The final system uses Reolink PoE cameras, Frigate running in an Ubuntu virtual machine, OPNsense for routing and VPN access, and NGINX with Unbound DNS for clean internal URLs. Instead of placing the cameras directly on my normal LAN, I used a second network interface card on my homelab server to create a separate camera-only network.

Goals

The main goals of this build were simple: keep the cameras off the main network, store video locally, avoid exposing the cameras directly to the internet, and create a clean way to access Frigate from both inside the house and remotely through VPN.

  • Run a local NVR with Frigate
  • Use PoE cameras for simple cabling and reliable power
  • Keep cameras isolated from the main LAN
  • Store footage on dedicated VM-attached storage
  • Access Frigate through a clean internal domain name
  • Use VPN through OPNsense for remote viewing instead of public exposure

Hardware & Core Components

The NVR runs on my homelab server in Hyper-V. Frigate is hosted in an Ubuntu Server VM, while the cameras connect to a small TP-Link PoE switch. Rather than routing camera traffic through the bridged router and main LAN, I used a second physical NIC on the homelab server to create a separate network dedicated to the cameras.

  • Homelab host: Windows system running Hyper-V
  • Frigate host: Ubuntu Server virtual machine
  • Cameras: Reolink RLC-520A PoE turret cameras
  • PoE switch: TP-Link TL-SF1005P
  • Firewall / router: OPNsense
  • Internal DNS: Unbound DNS on OPNsense
  • Reverse proxy: NGINX on OPNsense

Network Design

One of the most important parts of this project was keeping the cameras separated from the normal home network. My OPNsense box only has two physical ports in use for WAN and LAN, so instead of adding a managed switch and building out VLANs right away, I used the second NIC on my homelab server to physically isolate the cameras.

The Frigate VM uses two network paths:

  • eth0: Main LAN access for the web interface and normal management
  • eth1: Camera-only network connected to the PoE switch
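On the Ubuntu side, this split can be expressed in a single Netplan file. A minimal sketch, assuming the interfaces enumerate as eth0 and eth1, the LAN side uses DHCP, and the filename is a conventional choice:

```yaml
# /etc/netplan/01-frigate.yaml (sketch; interface names, DHCP, and
# the filename are assumptions that may differ on another host)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true          # main LAN address for the web UI and management
    eth1:
      dhcp4: false
      addresses:
        - 192.168.30.1/24  # static address on the camera-only network
```

Apply with `sudo netplan apply` after editing.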

The cameras were moved onto a dedicated 192.168.30.0/24 network. The Frigate VM was configured with 192.168.30.1 on the second interface, and each camera was manually moved over to that subnet with static addresses.

  • Front door camera: 192.168.30.101
  • Driveway camera: 192.168.30.102

To prevent the camera network from routing anywhere else, IP forwarding was disabled on the Ubuntu VM. This means the cameras can talk to Frigate, but they do not have a path out to the internet or back into the normal LAN through the VM.
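Locking this down comes down to one kernel setting. A sketch of the persistent version, with an assumed filename:

```shell
# /etc/sysctl.d/99-no-forwarding.conf (filename is a conventional choice)
# Prevent the VM from routing packets between eth0 and eth1
net.ipv4.ip_forward = 0

# Apply without a reboot:
#   sudo sysctl --system
```

With forwarding off, the VM can still accept RTSP connections from the cameras on eth1, but it will never act as a gateway for them.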

Example network diagram showing OPNsense, the homelab server with its second NIC, the PoE switch, and the cameras on the isolated camera-only network.

Frigate VM Build

Frigate was installed inside its own Ubuntu VM so the NVR system would remain separated from my other services. I created the VM in Hyper-V, installed Docker and Docker Compose, and then deployed Frigate from a compose file. During setup I switched the container to host networking so Frigate could talk cleanly to both NICs from inside the VM.

I started by testing Frigate with a minimal config, then added the first camera once RTSP access was confirmed. After that, I expanded the config to support both the low-resolution detection stream and the full-resolution recording stream.

  • Ubuntu Server VM dedicated to Frigate
  • Docker-based deployment
  • Host networking for clean access to both LAN and camera subnet
  • go2rtc restreaming used inside Frigate for stable RTSP handling
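A minimal sketch of the compose file for this layout, assuming the stock Frigate image, a config directory next to the compose file, and a default shared-memory size:

```yaml
# docker-compose.yml sketch; image tag, shm_size, and the ./config
# path are assumptions, not the exact values from this build
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    network_mode: host               # reach both the LAN and the camera subnet
    shm_size: "128mb"                # shared memory for decoded frame buffers
    volumes:
      - ./config:/config             # Frigate config.yml lives here
      - /mnt/frigate:/media/frigate  # recordings, clips, exports
      - /etc/localtime:/etc/localtime:ro
```

Host networking avoids having to publish individual ports and lets the container bind on both interfaces directly.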

Camera Setup & RTSP Configuration

Each Reolink camera was initially configured on the normal network so it could be accessed through the browser and app. From there, I created user credentials, confirmed RTSP functionality, set stream settings, and then moved the cameras over to the isolated 192.168.30.x network.

Frigate was configured to use the substream for detection and the main stream for recording. This keeps detection efficient while still saving better quality video.

  • Main stream: full-resolution recording stream
  • Substream: lower-resolution stream for detection
  • Protocol: RTSP via go2rtc
  • Object tracking: currently focused on people and vehicles
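Wired together in Frigate's config, the go2rtc restream and the two stream roles might look like the sketch below. The credentials are placeholders, and the Reolink URL paths shown are common firmware defaults that can vary by model and version:

```yaml
# Fragment of Frigate's config.yml; credentials and Reolink stream
# paths are assumptions to illustrate the main/sub split
go2rtc:
  streams:
    driveway_main:
      - rtsp://user:password@192.168.30.102:554/h264Preview_01_main
    driveway_sub:
      - rtsp://user:password@192.168.30.102:554/h264Preview_01_sub

cameras:
  driveway:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/driveway_main
          roles:
            - record          # full-resolution stream for recording
        - path: rtsp://127.0.0.1:8554/driveway_sub
          roles:
            - detect          # low-resolution stream for detection
    objects:
      track:
        - person
        - car
```

Pointing Frigate at the local go2rtc restream (port 8554) means each camera only has to serve one connection per stream, which keeps the Reolink side stable.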

Storage & Recording

The Frigate VM originally had only a small OS disk, so I created and attached a separate virtual hard disk for video storage. Inside Ubuntu, the new disk was formatted, mounted at /mnt/frigate, and mapped into the Frigate container so recordings and clips would not fill the operating system drive.

Recording retention was then configured inside Frigate so the system saves useful footage while remaining manageable on the current storage allocation. This makes it easy to expand later when I dedicate more disk space to the NVR.
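In Frigate this lives under the record section of the config. A fragment, where the retention window is a placeholder value rather than the exact setting from this build:

```yaml
# Fragment of Frigate's config.yml; day count and mode are placeholders
record:
  enabled: true
  retain:
    days: 7         # keep roughly a week of recordings
    mode: motion    # only retain segments that contain motion
```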

  • OS disk: kept separate from recordings
  • Recording mount point: /mnt/frigate
  • Container mapping: /media/frigate
  • Frigate folders created: recordings, clips, exports
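The disk preparation described above might look like the following, assuming the new virtual disk shows up as /dev/sdb:

```shell
# Format the new data disk and mount it (device name is an assumption)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/frigate
sudo mount /dev/sdb /mnt/frigate

# Persist the mount across reboots via /etc/fstab
UUID=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$UUID /mnt/frigate ext4 defaults 0 2" | sudo tee -a /etc/fstab
```

Mounting by UUID rather than device name keeps the fstab entry stable even if Hyper-V reorders the disks later.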

Internal DNS & Reverse Proxy

Once Frigate was working by IP and port, I cleaned up access by adding internal DNS entries in Unbound on OPNsense and then placing NGINX in front of the service. This allows me to access Frigate using a cleaner internal address instead of remembering the raw port number.

I configured the reverse proxy so Frigate is available at:

frigate.home.arpa

I also created a similar internal route for LubeLogger. This made the services easier to use from desktops, laptops, and my phone while connected through VPN.
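On the DNS side this is a host override in Unbound that maps frigate.home.arpa to the proxy's address. The NGINX side might look like the sketch below, where the upstream IP and Frigate's web port are placeholders for this build's actual values:

```nginx
# NGINX server block sketch; upstream address and port are assumptions
server {
    listen 80;
    server_name frigate.home.arpa;

    location / {
        proxy_pass http://192.168.1.50:5000;   # Frigate VM on the main LAN (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket upgrade headers so live views keep working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The WebSocket headers matter here: without them, Frigate's live camera views can fail behind the proxy even though the rest of the UI loads.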

Remote Access

Remote access is handled through a VPN connection to OPNsense rather than exposing the cameras or Frigate directly to the internet. This keeps the camera side of the project private while still allowing me to view live feeds and recorded clips from my phone when away from home.

This design keeps the attack surface small and lets internal DNS and reverse proxy rules continue working the same way whether I am on-site or connected through VPN.

Why I Built It This Way

A lot of consumer camera systems push cloud-based access and broad network exposure by default. I wanted something more controlled. By using a second NIC on the homelab server, isolating the cameras on their own subnet, keeping recordings local, and accessing everything through OPNsense and VPN, I ended up with a design that is far cleaner and more secure than simply dropping cameras onto the normal home network.

This project also gives me a solid foundation to build on later, whether that means adding more cameras, refining Frigate object detection, expanding storage, or integrating the system into a larger home lab automation stack.

Future Improvements

The current build already works well, but it also leaves room for later upgrades. Possible next steps include adding more storage, refining object detection and zones, adding another camera, and integrating Frigate with other self-hosted tools for more advanced automation.