Does it resolve correctly from the laptop or the server? What about resolvectl query server.local
on the laptop?
Isn’t .local an mDNS auto-configured domain? Usually I think you’re supposed to choose a different domain for your local DNS zone. But that’s probably not the source of the problem?
I’ve set up Okular signing and it worked, but I believe it was with an S/MIME certificate tied to my email (not PGP keys). If you want, I can try to figure out exactly what I did to make it work.
Briefly, off the top of my head, I believe it was
I can’t remember if there was a way to do this easily with PGP certificates
From looking at the GitHub repo, I think you don’t need to (or want to) host this publicly. It doesn’t automatically fetch and store your information. It’s more a tool for visualizing and cross-referencing your takeout/exported data from a variety of tech platforms. It’s just developed as a web app for ease of UI, cross-platform support, and local hosting.
Borg’s append-only mode seems like the way to do this easily
I feel like this really depends on what hardware you have access to. What are you interested in doing? How long are you willing to wait for it to generate, and how good do you want it to be?
You can pull off about 0.5 words per second with one of the Mistral models on a CPU with 32GB of RAM. The Stable Diffusion image models work okay with around 8–16GB of VRAM.
I’d be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT goes up to 170 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy in order to save a lot of space, but you’re not going to get it under tens of GB.
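A quick back-of-the-envelope check of those numbers (the precisions here are illustrative assumptions about how the weights are stored, not published specs for any particular model):

```python
# Rough storage for model weights: parameter count times bytes per parameter.
def weight_storage_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight size in GB (1e9 params * bytes / 1e9 bytes per GB)."""
    return params_billion * bytes_per_param

# 70B parameters at 16-bit (2 bytes per parameter) precision:
print(weight_storage_gb(70, 2))    # 140.0 GB, the same ballpark as ~120GB
# 4-bit quantization (0.5 bytes per parameter) trades accuracy for size:
print(weight_storage_gb(70, 0.5))  # 35.0 GB -- still tens of GB
```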
These models really are going through that many GB of parameters once for every word of output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates for 4K60 video output: it’s about 1GB per second. And the memory bandwidth recommended to generate those frames is around 400GB per second. Crazy fast.
ChatGPT is also probably around 50–100GB at most
Seconding this router! It had the fastest CPU and antennas for the price when I last looked. I run ZeroTier as a VPN on it and it works great. Plenty of RAM and flash for packages too.
Your ISP knows where you’re going anyway. They don’t need DNS for that. They see all the traffic.
On Linux and macOS there’s also https://vorta.borgbase.com/ which is pretty good
Your filter rule association is set to ‘rule’. What is that associated rule, and do things work if you change it to ‘pass’?
https://www.reddit.com/r/opnsense/comments/puty62/correct_option_for_filter_rule_association_when/
Instead of connecting with a web browser, can you try using curl or telnet, just to check whether you’re getting through at the TCP/IP connection level?
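If curl or telnet aren’t handy, a few lines of Python do the same raw TCP check (the host and port below are placeholders for your actual server):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your server's address and port:
print(tcp_reachable("192.168.1.10", 443, timeout=1.0))
```

If this prints False while the service is running, the problem is at the network/firewall level rather than in the web server or browser.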
I’m sure they’d welcome a pull request improving the UX! https://invent.kde.org/network/kdeconnect-kde I think the implementation of the protocol is pretty well isolated from the UI, so fairly radical UI changes should be relatively easy