Yeah quicksync won’t help you there.
I thought Nvidia’s limit was enforced by their drivers, but that’s probably changed; it’s been a while since I looked at NVENC as a solution (quicksync, then an Arc card over here).
dd then resize the fs?
Edit: one caveat here I forgot: if your fstab is using UUIDs, you’re going to have to update that, since the new drive won’t have the same UUID because, well, it’s not the same drive.
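Something like this, assuming the new filesystem landed on /dev/sdY1 (device name is hypothetical):

blkid /dev/sdY1      # grab the new UUID
sudoedit /etc/fstab  # swap the old UUID for the new one
sudo mount -a        # sanity check that everything still mounts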
If you have an Intel CPU with quicksync and it’s Coffee Lake or newer (8th gen), it will likely beat the 1060 in terms of visual quality.
If not, well, the 1060 will be fine up to whatever the stream limit is (4?).
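If you want a quick feel for what quicksync output looks like on your chip, a minimal ffmpeg sketch (filenames and the quality value are just placeholders):

ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv -c:v h264_qsv -global_quality 23 output.mkv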
Fair, but he said he wants to move from Windows to Linux, so I just assumed there weren’t going to be any of those since, well, they’re not going to run in Linux anyway.
Not in a way you’re probably going to like.
You could set up a bare metal hypervisor on the system and run a VM each for your NAS, Windows, and Linux, swapping between them as needed, but uh, that’s not really a pleasant desktop use case, for a number of reasons; the big one is that you won’t get the normal ‘sit down and use the computer’ desktop experience.
Alternate option: run the NAS and either the Linux or the Windows install in VMs, and keep the machine booted into, say, the desktop Linux environment, with everything else virtualized.
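If you go the KVM route, standing up the NAS guest is basically a one-liner with virt-install (name, sizes, and ISO path here are all made up):

sudo virt-install --name nas --memory 4096 --vcpus 2 \
  --disk size=64 --cdrom /path/to/nas-installer.iso --os-variant generic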
Since Android apps are required, I’d maybe go about this another way: find the app you like the most, then stand up whatever backend it uses for sync.
I was already in the FreshRSS ecosystem, and man, I don’t really like any of the Android apps on offer, but swapping at this point would be annoying (bookmarks, saved stories, etc.)
good ideia to run restic as root
As a general rule, run absolutely nothing as root unless there’s absolutely no other way to do what you’re trying to do. And, frankly, there’s maybe a dozen things that must be root, at most.
One of the biggest hardening things you can do for yourself is to always, always run everything as the lowest privilege level you can to accomplish what you need.
If all your data is owned by a user, run the backup tool as that user.
If it’s owned by several non-privileged users, then you want to make sure that the group permissions let the backup user access it.
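For the restic case specifically, that can be as simple as running it from the data-owning user’s crontab; the repo path and password file below are placeholders:

restic init -r /mnt/backup/repo --password-file ~/.restic-pass
restic -r /mnt/backup/repo --password-file ~/.restic-pass backup /home/media/data

No root anywhere, and the repo only ever sees files that user can already read.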
As a related note, this also applies to containers and software you’re running: you shouldn’t run docker containers as root unless they specifically MUST have a permission that only root has, and I personally don’t run internet facing ones as the same user as all the others. If something gets popped, the attacker not only doesn’t have root permissions, they’re also siloed into their own data in the event of a container escape.
My expectation is that, at some point, I’ll miss a CVE and get pwnt. So the goal is to reduce how much damage someone can do when that happens, rather than assume I’m going to be able to keep it from happening at all: everything is focused on ‘once this is compromised, how can I make the compromise useless to the attacker’.
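A sketch of what that separation looks like in practice (the service user and image are hypothetical, and the app has to be able to run unprivileged):

sudo useradd --system --no-create-home svc-web
docker run -d --user "$(id -u svc-web):$(id -g svc-web)" \
  -v /srv/svc-web/data:/data your-app-image

Each internet-facing service gets its own user and its own data directory, so popping one doesn’t let you read the others’ files.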
Unifi Gateway Ultra
How have you liked the gateway? Any stupid decisions that have annoyed you?
My USG has decided that, after a decade, it’s going to be flaky and crash whenever it feels like it (even after replacing its 4th dead PSU and 2nd USB stick), and I’m thinking it’s probably time to upgrade.
I’ll admit to both liking the Unifi ecosystem and firmly not trusting the Unifi ecosystem one damn bit, which is a bit of a weird situation where I’ve been really, really unwilling to upgrade anything, because that hasn’t always gone, uh, smoothly.
Also, if you’ve never seen it, lazydocker might be right up your alley.
It’s a TUI, but it provides easy access to your docker containers and their logs, plus updating/restarting/stopping them and so on.
Also, if you like htop, you’re going to love btop.
take a few extra taps and swipes than they would on Android
I’ve swapped from iOS to Android and I very much have the opposite experience.
Everything in Android feels just a little bit like someone somewhere went ‘well we have to put this option SOMEWHERE’ and just shoved it in, which leads to me fiddling in apps and system settings a lot more than I was on iOS.
I’m happy to chalk it up to having much more experience with iOS than modern Android, but it’s been kind of a pervasive experience.
And, also related and annoying: googling ‘how do I change a thing’ routinely drives me nuts, because how you do something seems to vary from manufacturer to manufacturer and even, like, model to model.
I guess it’s just… maybe iOS needs more button presses, but Android is utterly inconsistent about where something might be. That means you spend a little more time digging for a specific thing than you might on iOS, which creates the impression that you’re hitting a lot more buttons to do something, even if the actual number of presses would be lower if you knew exactly how to do it.
Give me a new version of the 5c, but use the G3 iMac colors as your color options, including a transparent look into the guts of the phone.
Would buy at least one.
You could also use nginx if you wanted; it’ll proxy arbitrary TCP with the stream module.
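A minimal sketch of that, assuming you’re forwarding something like Postgres on 5432 to a backend at 10.0.0.5 (both made up):

stream {
    server {
        listen 5432;
        proxy_pass 10.0.0.5:5432;
    }
}

Note that stream blocks live at the top level of nginx.conf, alongside http, not inside it.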
Yeah, DNS is, in general, just goofy and weird, and a lot of its interactions are things I wouldn’t expect even someone who’s done it for years to necessarily know.
And besides, the round-robin thing is my favorite weird DNS fact so any excuse to share it is great.
I mean, recovery from parity data is how all of this works; this just doesn’t require you to have a controller, use a specific filesystem, have matching drive sizes, or anything else. Recovery is mostly like any other RAID option I’ve ever used.
The only drawback is that the parity file has to be about as large as the biggest data drive you’re protecting, and you need to keep a couple copies of the indexes, since if you lose the index or the parity data, no recovery for you.
In my case, I didn’t care: I’m using the oldest drives I’ve got as the parity drives, and the newer, larger drives for the data.
If I were doing the build now and not 5 years ago, I might pick a different solution, but there’s something to be said for an option that’s dead simple (looking at you, zfs) and likely to be reliable because it’s not doing anything fancy (looking at you, btrfs).
From a usage (not technical) standpoint, the most equivalent commercial/prefabbed solution would probably be something like unraid.
A tool I’ve actually found way more useful than actual raid is snapraid.
It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit is a big deal, because I can scrub all the data in the array and it’ll happily tell me if something funky has happened.
It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.
There’s a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and Linux ISO collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it’s a fairly viable option.
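For a flavor of how simple it is, a minimal snapraid.conf looks something like this (mount points are hypothetical):

parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/data1/snapraid.content
data d1 /mnt/data1/
data d2 /mnt/data2/

Then it’s snapraid sync after big changes, snapraid scrub on a schedule, and snapraid fix if a drive dies or a file goes funky.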
Uh, don’t do that if you expect your mail to be delivered.
Multiple PTRs, depending on how the DNS service is set up, may be returned in round-robin fashion, and if you return a PTR that doesn’t match what your HELO claims you are, then congrats on your mail likely being tossed in the trash.
Pick the most accurate name (that is, match your HELO domain), and only set one PTR.
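You can sanity-check what you’re actually serving with a reverse lookup (address is a placeholder):

dig +short -x 203.0.113.25

If that returns more than one name, or a name that doesn’t match your HELO, fix it before the big mail providers notice.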
(Useless fact of the day: multiple A records behave the same way and you can use that as a poverty-spec version of a load balancer.)
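The zone-file version of that poverty-spec load balancer is just the same name with several A records (name and addresses here are made up):

www.example.com.  300  IN  A  203.0.113.10
www.example.com.  300  IN  A  203.0.113.20
www.example.com.  300  IN  A  203.0.113.30

Query it a few times with dig www.example.com and most servers will rotate the answer order, spreading clients across the three hosts.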
sudo smartctl -a /dev/yourssd
You’re looking for the Media_Wearout_Indicator, which is a percentage starting at 100% and counting down to 0%, with 0% meaning no more spare sectors available and thus “failed”. A very important note here, though: a 0% drive isn’t always going to result in data loss.
Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only and all your data will be there, you just won’t be able to write more data to the drive.
Also, you’ll probably be interested in the Total_LBAs_Written attribute, which can (usually) be converted to gigabytes and will tell you how much data has been written to the drive.
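A quick way to do that conversion, assuming the drive counts 512-byte LBAs (not all of them do, so treat this as a sketch):

sudo smartctl -A /dev/yourssd | awk '/Total_LBAs_Written/ {printf "%.2f TB written\n", $10 * 512 / 1024^4}'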
As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.
Even really cheap SSDs will do hundreds and hundreds of TB written these days, and on a normal consumer workload we’re talking years and years and years and years of expected lifespan.
Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build, and about 200 TBW on the drives and they’re still claiming 90% life left.
At that rate, I’ll be dead well before those drives fail, lol.
two commands: dd and resize2fs, assuming you’re using ext4 and not something more exotic.
one makes a block-level copy of one device to another like so: dd if=/dev/source-drive of=/dev/destination-drive
the other is used to resize the filesystem from whatever size it was, to whatever size you tell it (or the whole disk; I’d have to go read a manpage since it’s been a bit)
the dd is completely safe, but the resize2fs command can break things; you’d still have the data on the original drive though, so you could always start over if it does. I’d unplug the source drive before you start doing any expansion stuff.
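Put together, it’s something like the below; device names are hypothetical, so triple-check them with lsblk first, because dd will happily flatten the wrong disk. One step the above glosses over: after cloning to a bigger disk you usually need to grow the partition itself before resize2fs has anywhere to go.

sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
sudo parted /dev/sdY resizepart 1 100%   # grow partition 1 to fill the new disk
sudo e2fsck -f /dev/sdY1                 # resize2fs wants a clean filesystem first
sudo resize2fs /dev/sdY1                 # grows the fs to fill the partition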