Can you not use the same keys for multiple devices like you’d normally be able to?
Truly one of the most embarrassing things I have ever seen someone share publicly.
Over polite comments responding to an opinion about a video game.
For me it has always just defaulted to the left-most monitor. I had a script that would disable that monitor with xrandr when sddm loaded and then re-enable it on logon, but I couldn’t get something similar working in Wayland.
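The gist of it was just two xrandr calls, roughly like this (HDMI-1 is a stand-in for whatever xrandr calls the monitor, and the exact hook points depend on your setup):

xrandr --output HDMI-1 --off   # run before the greeter appears, e.g. from sddm's Xsetup script
xrandr --output HDMI-1 --auto  # run again after login, e.g. from an autostart entry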
Avoid local retail in favor of what, a website? If you’re concerned about the data mining potential of this robot rolling around a strip mall then you should avoid the internet at all costs.
What you’re describing completely defeats the purpose of the inspections (trying to catch someone in the act of hacking them, somehow) and they were scheduled. Also, you have only replied to me on this post.
You seem very confused.
When did I say anything about anyone having sex? What?
They’re doing visual inspections of rooms because they don’t trust the scary hacker people in them. What do you think telling them you’re in the room is going to accomplish?
Did you even read the article?
I mean sure if you wedge the door or something, but then you’re just going to get kicked out.
Every hotel with those has a tool they can use to easily unlatch that lock.
What is sketchy about downloading a torrent that it could save you from? Wouldn’t it be executing whatever you downloaded on another machine that would be the risky part?
How would a thinking emoji make it clear your question isn’t serious? Also, things have been available for a limited time long before phishing attempts were a thing, and will continue to exist for legitimate purposes long after. You can’t expect the entire rest of the world to stop doing something innocuous just because it’s also used as a tactic to fool a small subset of inattentive people.
I was also using it then and most people hated it at the time. People generally saw it as useless DRM they had to install to play the games they had already paid for. The biggest complaint was needing to install it and register Counter-Strike to get the 1.6 update.
This gif comes to mind:
If you’re going to be looking at network requests at this level of granularity, you should use something like OpenSnitch so you can be sure what is actually generating them.
lol I would open every port on my router and route them all to wireguard before I would ever consider doing this
If you’re not completely giving up on privacy I would avoid cloudflare. I just run an always-on wireguard tunnel that routes back to my home network from my wife’s and my phones, and that kills like 3 birds with one stone (phone traffic is encrypted and hidden from my carrier, home server is accessible, and ads are blocked via DNS).
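The phone side is just a standard always-on wg-quick-style config, something along these lines (addresses, names, and the DNS filter are placeholders for whatever you run at home):

[Interface]
PrivateKey = <phone private key>
Address = 10.0.0.2/32
DNS = 10.0.0.1   # home DNS filter (Pi-hole, AdGuard Home, etc.) handles the ad blocking

[Peer]
PublicKey = <server public key>
Endpoint = home.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # send everything through the tunnel so the carrier only sees wireguard
PersistentKeepalive = 25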
I use notifications in Thunder and I’ve had no issues. I haven’t compared the difference or anything, but when I’ve happened to check battery usage it’s always been a reasonable amount for how much I’ve used it that day. It does generate a decent amount of network traffic since it’s regularly checking with your instance for new notifications, and that traffic is generated for each account you have, reaching out to each instance. That should be how any FOSS app works though; the alternative would be something like Sync, where you pay to have actual pushes sent from their server.
My theory is that the RTSP port (554) is for streaming, and that when I go to the local address (which is on port 80), the site ITSELF initiates a connection to port 554 in the background. However, this apparently does not happen when I connect remotely.
I think you’re on the right track here. The DVR is probably telling your browser to connect to http://192.168.1.222:554 for the stream, which on LAN is fine because you have a route to 192.168.1.222, but when connecting externally you won’t be able to get to 192.168.1.222.
You can probably check the network connections in dev tools in the browser to confirm that.
Edit: I also want to stress the importance of the advice given by @SteveTech@programming.dev. My home cameras are also only accessible from outside my network via wireguard.
I use Nextcloud with Nginx Proxy Manager and just use NPM to handle the reverse proxy, nothing in Nextcloud other than adding the domain to the config so it’s trusted.
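The trusted domain part is just the normal config.php entry, something like this (the domain is an example):

'trusted_domains' => [
  0 => 'localhost',
  1 => 'cloud.example.com',
],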
I use Plex instead of Jellyfin, but I stream it through NPM with no issues. I can’t speak to the tunnel though, I prefer a simple wireguard tunnel for anything external so I’ve never tried it.
Edit: unless that’s what you mean by tunnel. I was assuming you meant Traefik or Tailscale or one of the other solutions I see posted more often, but Tailscale at least uses wireguard under the hood.
The issue is that the docker container will still be running as the LXC’s root user even if you specify another user to run as in the docker compose file or run command, and if root doesn’t have access to the dir the container will always fail.
The solution to this is to remap the unprivileged LXC’s root user to a user on the Proxmox host that has access to the dir (via the LXC’s config file), mount the container’s filesystem with pct mount, and then chown everything in the container that’s owned by the default mapped root user (100000).
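The remap itself ends up looking something like this in the LXC’s config (/etc/pve/lxc/xxx.conf), here mapping the container’s root (0) to host UID/GID 1000 as an example and leaving the rest of the range alone; the host’s /etc/subuid and /etc/subgid also each need a root:1000:1 line to allow that mapping:

lxc.idmap: u 0 1000 1
lxc.idmap: g 0 1000 1
lxc.idmap: u 1 100001 65535
lxc.idmap: g 1 100001 65535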
These are the chown commands I use for that last step:
find /var/lib/lxc/xxx/rootfs -user 100000 -type f -exec chown username {} +;
find /var/lib/lxc/xxx/rootfs -user 100000 -type d -exec chown username {} +;
find /var/lib/lxc/xxx/rootfs -user 100000 -type l -exec chown -h username {} +;
find /var/lib/lxc/xxx/rootfs -group 100000 -type f -exec chown :username {} +;
find /var/lib/lxc/xxx/rootfs -group 100000 -type d -exec chown :username {} +;
find /var/lib/lxc/xxx/rootfs -group 100000 -type l -exec chown -h :username {} +
(Replace xxx with the LXC number and username with the host user/UID)
If group permissions are involved you’ll also have to map those groups in the LXC config, create them in the LXC with the corresponding GIDs, add them as supplementary groups to the root user in the LXC, and then add them to the docker compose yaml using group_add.
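The compose side of that is just group_add, roughly like this (the service name, image, and GID are placeholders):

services:
  myservice:
    image: someimage:latest
    group_add:
      - "1001"   # supplementary GID that was mapped into the LXC and added to root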
It’s super confusing and annoying, but this is the workflow I’m using now to avoid tying up resources in VMs unnecessarily.
Well sure, but you effectively still have the same 5-connection limit as long as you manage your keys correctly.