Your advice probably doesn’t apply to the OP in the image, as a “simple static site” is probably their blog or project wiki. It’s very unlikely they even have clients. For that case, just having a hard limit and waiting is much safer.
Totally reasonable. Something like LVM can at least get you to a RAID 1 setup pretty easily.
RAID 0 (combining both drives’ capacities) is not really tiered storage. You would want RAID 1 (each drive is a copy of the other), but even that isn’t a backup. How will you be monitoring the drives so that you know if one of them actually fails?
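For anyone wondering what monitoring can look like, here’s a minimal sketch using smartmontools (the device names are placeholders, not OP’s actual drives):

# check overall SMART health of each drive (run from a cron job or systemd timer)
sudo smartctl -H /dev/sda
sudo smartctl -H /dev/sdb
# or let the smartd daemon watch them and alert on failure (see /etc/smartd.conf)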
I don’t think the RPi has a new enough kernel, but with bcachefs you can do tiered storage by combining the SSD + hard drives into a single block device. Make the SSD the read/write cache, and give the whole pool replicas=2, so that if one drive dies you still have the failover of the other drive. Do be aware this setup is still not a backup, however.
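Roughly, the format command looks something like this (a sketch with placeholder device names; check the bcachefs docs for your kernel version’s exact flags):

# one SSD as the foreground/promote (cache) target, two HDDs as the background target
bcachefs format \
  --label=ssd.ssd1 /dev/nvme0n1 \
  --label=hdd.hdd1 /dev/sda \
  --label=hdd.hdd2 /dev/sdb \
  --replicas=2 \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd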
I believe they changed some of their licensing in the fallout of their IPO. Just worth noting for the selfhosting crowd. I know Terraform is being forked entirely, but I’m unfamiliar with the specifics beyond that.
No worries, I’ll link to some Arch Wiki stuff to help explain. Containers are a very cool system for isolating environments. Similar to how Python uses venv to contain all the dependencies for a Python program, containers let you have a full environment beyond just the Python stuff. I use podman to actually run the container on my computer. You use a Containerfile to define what you want this environment to look like, and docker/podman does all the hard work for you by making an image file that holds the whole thing in one place, separate from your real OS.
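As a tiny illustration (a made-up example, not my actual setup; app.py and myapp are hypothetical names), this writes a minimal Containerfile, builds it, and runs it:

# define the environment: a base image plus whatever the app needs
cat > Containerfile <<'EOF'
FROM docker.io/library/python:3.11
RUN pip install requests
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF
# build the image, then run a throwaway container from it
podman build -t localhost/myapp:latest .
podman run --rm localhost/myapp:latest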
This is my start script.
#!/usr/bin/env bash
podman run -it --rm --name stablediff2 -p 7860:7860 \
-e COMMANDLINE_ARGS="--api --listen --port 7860 --enable-insecure-extension-access --medvram-sdxl --cors-allow-origins *" \
--device /dev/dri:/dev/dri \
--device /dev/kfd:/dev/kfd \
-v ./models:/dockerx/stable-diffusion-webui/models:z \
-v ./repos:/dockerx/stable-diffusion-webui/repositories:z \
-v ./extensions:/dockerx/stable-diffusion-webui/extensions:z \
-v ./embeddings:/dockerx/stable-diffusion-webui/embeddings:z \
-v ./outputs:/dockerx/stable-diffusion-webui/outputs:z \
-v ./inputfiles:/dockerx/stable-diffusion-webui/inputfiles:z \
localhost/stablediffusion:latest
This is just telling podman to start the container, give it an actual terminal to connect to, remove the container if it stops running, give it a name, and publish port 7860 from the container to the host.
podman run -it --rm --name stablediff2 -p 7860:7860
These are the arguments passed to the webui start script itself, mostly for my own convenience. The medvram-sdxl is not required, since my card has enough VRAM, but then I can’t be doing anything else with it. So I sacrifice a bit of generation speed for more free memory for the rest of my computer. I’m running this locally, so insecure extension access also doesn’t matter since I’m the only one using this; it just makes installing extensions directly from the webui possible.
-e COMMANDLINE_ARGS="--api --listen --port 7860 --enable-insecure-extension-access --medvram-sdxl --cors-allow-origins *" \
These are just the device files that correspond to my GPU, so that the container has access to it. Without this, the container would only have access to CPU-based generation. Everything else is just the folders that hold my models, extensions, etc. You have to give the container exactly what you want it to have, because it’s isolated away from your normal files unless you tell it otherwise.
--device /dev/dri:/dev/dri \
--device /dev/kfd:/dev/kfd \
This is iterations per second, I believe. It’s basically a measure of how fast stable diffusion is running a particular generation of an image. It lets people compare performance across different software and hardware configurations.
10 it/s
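To put that in perspective (assuming a typical 20-step sampler setting, which isn’t stated above): at 10 it/s, one image takes roughly 20 steps / 10 it/s = 2 seconds, which lines up with the “couple seconds per image” figure mentioned elsewhere in the thread.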
NixOS is the name of the GNU/Linux operating system I’m using. Similar to how MacOS is different from Windows, NixOS is another type of operating system. I’ve only been using it for a few months, but it’s extremely cool. Before that I mostly used Debian and Fedora, but the main difference between NixOS and them is that you can define your whole OS as configuration files, and then the tools it’s designed around build your system for you. So instead of, say, installing a program, opening it up, going into settings, and changing everything to be how you like it, you can instead just make a file that lists everything the way you want it from the start, and Nix installs the program and sets it all up in one go. It has a pretty big learning curve, and its features are so numerous that I have yet to take full advantage of them. Probably not the best to start with if you are new to GNU/Linux systems, but once you see the benefits of why it does things differently, it’s awesome.
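The day-to-day workflow looks roughly like this (a sketch of the standard commands, not my actual config):

# 1. declare what you want in the system config
sudoedit /etc/nixos/configuration.nix   # e.g. add a package to environment.systemPackages
# 2. build and activate the new system in one go
sudo nixos-rebuild switch
# 3. if something breaks, roll back to the previous generation
sudo nixos-rebuild switch --rollback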
Hopefully that explains most of the words I used. Pardon my formatting; I don’t know markdown very well, but I think I separated everything okay. :)
by default, set up to run in a venv
It does, but since I’m running inside a container, I disable that behavior and run it as a user package. Some extensions also require additional libraries, but they don’t pull the correct ROCm dependencies, so I have to modify part of the install scripts to manually define the correct versions.
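For the curious, that kind of fix usually amounts to pinning the ROCm build of torch instead of the default CUDA one, something like this (the index URL and ROCm version are illustrative, not the exact ones I use):

# pull torch built against ROCm rather than CUDA
pip install --index-url https://download.pytorch.org/whl/rocm5.6 torch torchvision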
The main webui code is excellent, even if sometimes the documentation is out of step because of how fast everything moves. It’s the extensions, which are not always at the same level of quality, that make fiddling with python dependencies a bit of extra work.
I run it in a container on a NixOS host, yes. Eventually I’ll learn how to do it in a flake, but my nix skills aren’t quite there yet. EDIT: I use a 6900xt, and some quick runs I did give me roughly 10 it/s, which feels reasonably fast, only a couple seconds per image.
I use stable diffusion with automatic1111’s webui, run locally with an AMD GPU. I use the card for gaming and encoding too, so the cost for just AI is basically free. The webui is excellent, and I learn about new things it can do every time I use it. Setting it up took some time, but nothing beyond what I am familiar with. I do loathe that so much data science/AI stuff is Python based, because Python’s dependency management is an unruly beast, but oh well.
I don’t know if this is what you are looking for, but I used :z with podman volume mounts and it Just Works.
podman run -d -v /dir:/var/lib/dir:z image
From the documentation, :z or :Z relabels the volume’s SELinux context: :z for content shared between containers, :Z for content private to a single container.
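Side by side (hypothetical paths and image names):

podman run -d -v /srv/shared:/data:z  img1   # :z  shared label, other containers can mount it too
podman run -d -v /srv/private:/data:Z img2   # :Z  private label, only this container can use it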
Not just any cheese product, but “Pasteurized Prepared Cheese Product”, because, hilariously, they got in trouble with the FDA a couple of times for calling them “food”.
This method is perfect for fully controlling your traffic. As another option, you can run a WireGuard VPN on the VPS and route traffic back through the VPN. In my experience WireGuard has better performance than an SSH tunnel, and it allows for actual network topology with subnets rather than just raw port forwarding.
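A bare-bones sketch of what that can look like (every key, address, and hostname here is a placeholder, and a real setup needs forwarding rules for whatever services you actually expose):

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# masquerade so replies to forwarded traffic go back through the tunnel
PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-server-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25

Bring each side up with wg-quick up wg0, and the VPS can then reach the home box at 10.0.0.2 and forward ports to it.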
Would be interesting to see how this compares to XMPP or Matrix. Obviously development costs something for each of those, but the hosting costs are spread out across everyone hosting an instance.
Works the other way too; you can do LVM with RAID underneath. I currently use LVM RAID 5 with XFS on top. Though all the news around bcachefs has got me pretty excited to go that route and cut out the LVM middle layer.
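For reference, the LVM-native version is only a couple of commands (volume group name, size, and stripe count are placeholders):

# carve a raid5 logical volume out of an existing volume group
lvcreate --type raid5 --stripes 3 --size 1T --name data vg0
# put the filesystem on top of it
mkfs.xfs /dev/vg0/data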
It’s a soft fork; its purpose is specifically to stay in step with the upstream while working on new features the upstream isn’t ready for or doesn’t want. As far as I know, they’re the devs working on federation between selfhosted/any other instances.
There may be an ARM “takeover” of x86 at some point, but that day is very much not today unless you believe the PC market consists solely of Macs.
I’d argue that the overwhelming majority of people in the world use their phone as their primary computing device. ARM took over years ago.
People were uploading, and still are. Uploading a video for my friends, or a school project, needs no return, so open platforms work perfectly there. Irrelevant to my point.
Companies/content creators are on the platform because it pays them. If being on YouTube did not pay them, they would go to a platform that did, e.g. Twitch or TikTok.
PeerTube doesn’t offer ad revenue sharing, so most content creators can’t afford to make content for a platform with no return. If someone was uploading a video for their friends, or a school project, then sure, open platforms are perfect.
I know you know, as you already demonstrated your deeper understanding. I just wanted to add a little bonus trick for anyone reading who doesn’t know, and is learning from your examples.
That’s wild to me, as I use sed all the time. Quick and easy changes in configs? Bam, sed. I don’t even need to open vi when I can grep for what I need, then swap it with sed. Though I imagine more seasoned vi nerds would be able to do this faster.
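The pattern is basically this (the file name and setting are made up for the example):

# find the line you care about
grep -n '^max_connections' /etc/example.conf
# swap the value in place without opening an editor
sed -i 's/^max_connections = .*/max_connections = 200/' /etc/example.conf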
Just because you are trying the free samples at a store doesn’t mean you can also take other food off the shelf without paying because you left your wallet at home. Bandwidth still costs money.