Given this thread will probably attract other Unifi users... has anyone had success migrating from MongoDB to something like FerretDB?
I played around with getting this to work a few weeks ago and found that day-to-day it works without issue, but restoring a backup will error since it relies on some unsupported Mongo semantics (renaming collections iirc).
So you'd have one service that can provision Ubiquiti, MikroTik, TP-Link and other APs and manage the clients.
This seems like an odd misunderstanding, especially because the correct inversion “UBNT” is the default login name for most UniFi web UIs.
You might have a bit of dyslexia, OP!
I found https://community.home-assistant.io/t/unifi-cameras-without-... in which someone sshed in, edited some config files by hand, and got streaming to work for the current boot. One could probably take that a bit further and, you know, save the config to flash. But it'd be nice to just do it the way their controller does and know it's going to work for future firmware updates and such.
They also stream by connecting to your NVR with a modified version of FLV, rather than you connecting to them with RTSP, which is annoying but can be worked around.
We get this a lot at my job, where many customers' admins block s3 buckets by default. We give our customers a list of hostnames to allowlist and if they can't figure it out, that's on them.
> "TNBU" is "UNBT" backwards
TNBU is clearly NOT uNbt backwards.
However, there are other approaches. A public IP per client isn't going to be nearly as expensive as a VM per client, and lets you route your clients by target. Or you could route by source IP: either by having the client register their IPs, or with some combination with seeing where folk log in from.
Neither is necessary, though, given inspection does appear to work.
I wonder if there's a way to control routing client side and remove the list of mac addresses. Eg manage DNS for customers (upsell ad blocking!) and CNAME the unifi entry to a customer specific vhost.
Newly-registered domains are not generally an issue with enterprise users. However, they are overrepresented in malicious traffic due to domain-generation algorithms (DGAs).
it would be great if that could be in the article in the first place. (I'm assuming you are the author)
Having the client register their IPs isn’t tenable for most folks. What’s my IP at the shop? (No idea.) Will it change? (Yes.) Now it’s broken.
Seeing where folks log in from isn’t nearly the same as where their UniFi networks are located. (Store vs home.) Broken.
So neither of those is a robust approach, whereas the author’s solution is bulletproof and simply works in all cases.
No offense, but why suggest “other approaches” that have such major holes? Why not just cheer on the solution that works all the time?
Last time I tried, it wasn't supported by any open source solution.
Setting where it sends the video stream.
Configuring video settings, zone detections, etc. I found a video going through them here: <https://youtu.be/URam5XSFzuM?si=8WK4Yghh9kidZe6c&t=279> Just about any other camera lets you change this stuff through the camera's built-in web interface and/or ONVIF. Ubiquitis apparently don't.
> Otherwise it's just a device on your network that you can configure Frigate etc. to connect to and pull streams.
No, it connects to you!
I think it should even be possible to get seamless roaming between Unifi and OpenWrt with correct configuration of hostapd.
I think newer models like the G4 Flex don't support this, though.
I did that for 5 different cameras yesterday. You're saying UniFi's cameras don't allow user management? That sucks!
> No, it connects to you!
I thought frigate connects to the camera's RTSP stream (maybe with ONVIF in the mix)?
For the adoption stage, UniFi cameras broadcast on UDP port 10001 using a proprietary TLV (Type-Length-Value) protocol. The Protect console listens on this port and picks up new cameras immediately. The probe is 4 bytes, `\x01\x00\x00\x00`, sent as a UDP broadcast to `255.255.255.255:10001`.
The response then contains these fields:
| Hex Code | Field | Data |
|----------|-------|------|
| `0x01` | MAC Address | 6-byte hardware address |
| `0x02` | MAC + IP | Combined MAC and IPv4 address |
| `0x03` | Firmware Version | String |
| `0x0B` | Hostname | String |
| `0x0C` | Platform (Short Model) | String |
| `0x0A` | Uptime | 64-bit integer |
| `0x13` | Serial | String |
| `0x14` | Model (Full) | String |
| `0x17` | Is Default | Boolean (adopted vs unmanaged) |
After discovery, the Protect console:
1. Connects to the camera via SSH (default credentials)
2. Configures the Inform URL (TCP 8080)
3. Camera registers with the controller

So conceivably at step 2 you could use your own modified URL to point to your own NVR and then grab the FLV streams from there.
Right, that's the expectation of Frigate, my own Moonfire NVR, and basically every other NVR out there. Ubiquiti decided to think different.
> 1. Connects to the camera via SSH (default credentials) 2. Configures the Inform URL (TCP 8080)
Not what I expected but okay. Looks like there's a `set-inform` command. It looks like it opens a TLS connection, doesn't check the certificate, and tries to open a websocket:
GET /camera/1.0/ws HTTP/1.1
Pragma: no-cache
Cache-Control: no-cache
Host: ...
Origin: http://ws_camera_proto_secure_transfer
Upgrade: websocket
Connection: close, Upgrade
Sec-WebSocket-Key: ...
Sec-WebSocket-Protocol: secure_transfer
Sec-WebSocket-Version: 13
Camera-MAC: ...
Camera-IP: ...
Camera-Model: 0xa601
Camera-Firmware: 5.0.83
Device-ID: ...
Adopted: false
x-guid: be9d8e45-62a8-ae84-8b23-71723c7decaf
I might try accepting the websocket but I have a feeling I'll get stuck about there without knowing what the server is supposed to send over it. I'm debating if I'm willing to buy a Unifi Protect device or not....then again I did a search for a couple strings and ran across https://github.com/keshavdv/unifi-cam-proxy . It's the opposite direction of what I want (makes a standard camera work with Unifi Protect) but maybe contains the protocol details I'm looking for...
Actually, yes. I got lazy and just asked Claude Code to write a server, using that as a reference...and it worked. It was able to change the password and have it start streaming flv video. Not exactly a production-quality implementation but as a proof-of-concept it's quite successful.
Honestly it might be less work than some other cameras that (allegedly) speak RTSP. You'd be shocked how low-quality these implementations are. Never advancing timestamps, setting the RTP MARK bit arbitrarily, writing uninitialized memory framed as audio packets (on cameras that don't have microphones), closing file descriptors then writing data to them anyway (and so having it show up on the next accepted connection to be assigned that fd even pre-auth), etc.
A few years ago I ran a small UniFi hosting service. Managed cloud controllers for MSPs and IT shops who didn't want to run their own. Every customer got their own VPS running a dedicated controller.
The product worked. People wanted hosted controllers, mostly so they didn't have to deal with hardware, port forwarding, backups. The problem was the economics.
Each customer needed their own VPS. DigitalOcean droplets ran $4-6/month. I was charging $7-8. That's $1-2 of margin per customer, and any support request at all wiped it out. I was essentially volunteering.
The obvious fix is multi-tenancy: put multiple controllers on shared infrastructure instead of giving every customer their own VM. But UniFi controllers aren't multi-tenant. Each one is its own isolated instance with its own database and port bindings. You need a routing layer, something in front that can look at incoming traffic and figure out which customer it belongs to.
For the web UI on port 8443, that's easy. Subdomain per customer behind a reverse proxy, nothing special. But the inform protocol on port 8080 is where things get interesting.
Every UniFi device (access points, switches, gateways) phones home to its controller. An HTTP POST to port 8080 every 10 seconds. This is how the controller keeps track of everything: device stats, config sync, firmware versions, client counts.
The payload is AES-128-CBC encrypted. So I assumed you'd need per-device encryption keys to do anything useful with the traffic, which would mean you'd need the controller's database, which would mean you're back to one instance per customer.
Then I looked at the raw bytes.
The first 40 bytes of every inform packet are unencrypted:
Offset  Size  Field
──────  ────  ──────────────────────────
0       4B    Magic: "TNBU" (0x544E4255)
4       4B    Packet version (currently 0)
8       6B    Device MAC address
14      2B    Flags (encrypted, compressed, etc.)
16      16B   AES IV
32      4B    Data version
36      4B    Payload length
40+     var   Encrypted payload (AES-128-CBC)
Byte offset 8 is the device's MAC address, completely unencrypted.
On the wire it looks like this:
54 4E 42 55 # Magic: "TNBU"
00 00 00 00 # Version: 0
FC EC DA A1 # MAC: fc:ec:da:a1:b2:c3
B2 C3
01 00 # Flags
...
("TNBU" is "UBNT" backwards, Ubiquiti's ticker symbol and default login name.)
The MAC is in the header because the controller needs to identify the device before decrypting. Encryption keys are per-device, assigned during adoption, so the controller has to know which device is talking before it can look up the right key. Not a security oversight, just a practical requirement. But it means you can route inform traffic without touching the encryption at all.
Extracting it is almost nothing:
// Read the fixed-size unencrypted header off the TCP connection.
header := make([]byte, 40)
if _, err := io.ReadFull(conn, header); err != nil {
    return err
}
// Bytes 0-3 are the magic.
if string(header[0:4]) != "TNBU" {
    return fmt.Errorf("not an inform packet")
}
// Bytes 8-13 are the device MAC, in the clear.
mac := fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
    header[8], header[9], header[10],
    header[11], header[12], header[13])
Read 14 bytes and you know which device is talking. No decryption needed.
With the MAC in hand, routing is simple. Keep a table of which MAC belongs to which tenant, forward the whole packet (header and encrypted payload, untouched) to the right backend.
Device (MAC: aa:bb:cc:dd:ee:ff)
|
v
+-----------------------------------+
| |
| Inform Proxy |
| |
| Read MAC from bytes 8-13 |
| |
| Lookup: |
| aa:bb:cc:... -> tenant-7 |
| 11:22:33:... -> tenant-3 |
| fe:dc:ba:... -> tenant-12 |
| |
| Forward to correct backend |
| |
+-----------------------------------+
| | |
v v v
Tenant 7 Tenant 3 Tenant 12
The whole proxy is maybe 200 lines of Go with an in-memory MAC-to-tenant lookup table.
In practice, the proxy is mostly a fallback. Once a device is adopted, you point it at its tenant's subdomain (set-inform http://acme.tamarack.cloud:8080/inform) and after that, standard Host header routing handles it through normal ingress. The MAC-based routing catches edge cases like devices that haven't been reconfigured yet, or factory-reset devices re-adopting.
Inform is the hard one. The rest of the controller's ports are more straightforward:
| Port | Protocol | Purpose |
|---|---|---|
| 8080 | TCP/HTTP | Inform (device phone-home) |
| 8443 | TCP/HTTPS | Web UI and API |
| 3478 | UDP | STUN |
| 6789 | TCP | Speed test (internal) |
| 27117 | TCP | MongoDB (internal) |
| 10001 | UDP | L2 discovery (local only) |
Once I figured out inform, the rest was almost anticlimactic. 8443 is the web UI, so that's just subdomain-per-tenant with standard HTTPS ingress. 3478 (STUN) is stateless so a single shared coturn instance covers every tenant. The rest are either internal to the container or L2-only, so they never leave the host.
For the curious: the payload after byte 40 is AES-128-CBC. Freshly adopted devices use a default key (ba86f2bbe107c7c57eb5f2690775c712), which is publicly documented by Ubiquiti and ships in the controller source code. After adoption, the controller assigns a unique per-device key.
The decrypted payload contains device stats and configuration data. Interesting if you're building controller software, but irrelevant for routing.
Every tenant still gets their own dedicated controller, but you're not paying for a whole VM per customer anymore. What was a volunteering operation at $1-2 margin becomes something you can actually make money on.
None of it works if the MAC is inside the encrypted payload. You'd need per-device keys at the proxy layer, which means you'd need access to every controller's database, which puts you right back at one instance per customer. Six plaintext bytes in a packet header make the whole thing possible.
I don't think Ubiquiti designed it this way for third parties to build on. The MAC is there because the controller genuinely needs it before decryption. But the happy side effect is that the inform protocol is routable by anyone who can read 14 bytes off a TCP connection.
If you've poked at the inform protocol yourself, I'd like to hear about it. [email protected]