Hi everyone,
I’ve been wondering about the legal implications of self-hosting Lemmy. Isn’t it a legal requirement in many countries to moderate content that you host publicly? What happens when someone posts something illegal on your instance and you don’t want to bother with being a mod, and just want to enjoy the technical aspects of it?
Would love to hear your thoughts on this!
If you don’t want to moderate, don’t let others sign up to your instance. That deals with pretty much all your legal issues.
For US-based people, register a designated agent for DMCA notifications. That way, if someone posts copyright-infringing content, you get the takedown notice instead of your hosting provider, and you get to remove that content instead of your hosting provider removing your account.
Look into GDPR compliance. Part of that is fully deleting user content in a timely fashion when they ask. I’ve heard that Lemmy might not be good about that, but I’m not sure. If you have any EU users, you’ll need to comply.
If you want to run a large instance, you’ll need to have a plan regarding CSAM.
If you do moderate, use The Bad Space or shared block lists to defederate from the most problematic instances. I’m not sure whether they’ve really made it to this general corner of the fediverse, though.
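To make the mechanics of that concrete: in current Lemmy versions there’s a blocked-instances field in the admin settings, and in older hjson-based configs the same thing looked roughly like this (a sketch with made-up instance names; check the docs for your version):

```hjson
federation: {
  enabled: true
  # instances listed here are defederated: their content is neither
  # pulled into nor pushed out from your instance
  blocked_instances: ["problematic.example", "spam.example"]
}
```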
For anyone who doesn’t know what ‘registering for DMCA notification’ means, you’re after https://www.copyright.gov/dmca-directory/
That said, there’s no particular requirement that a DMCA notice be sent to you even if you have a registered agent; some reporters will send it to the abuse contact for the IP netblock you’re hosted on regardless of registration. So you may want to make sure you understand what steps your provider may or may not take when they receive a DMCA notice, before one actually arrives.
How reliable is The Bad Space? They don’t seem to give reasons or comments on why an instance was added.
That deals with pretty much all your legal issues.
Does it, though? Isn’t it possible that I’m federating with an instance that fails to moderate, and as a result I end up with CSAM on my instance?
Not only close sign-ups but also turn off federation, since text content from other servers gets pulled onto your own, afaik (see the config sketch below).
Imagine if someone wrote something illegal, e.g. calling for or threatening violence, in a comment in a sublemmy you were subscribed to.
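If I remember right, in older lemmy.hjson configs (pre-0.17; newer versions moved this into the admin UI) disabling federation entirely was a single flag. A minimal sketch, assuming the default config layout and a placeholder domain:

```hjson
{
  # your instance's domain (placeholder value)
  hostname: "lemmy.example.com"
  federation: {
    # with this off, nothing is pulled from or pushed to other instances
    enabled: false
  }
}
```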
As much as I disagree with some of their takes, I recall the folks at the Accidental Tech Podcast spending a good while on this very discussion back when people were migrating from Twitter to Mastodon, and they had some interesting concerns and conclusions.
The answer is yes, in many places the admin hosting the content could be responsible for the content. Where I live I believe they would have to provide user data about who made the post, and if they refuse, they become the responsible party.
Interesting. I wonder how the decentralized nature of fetching data from other instances affects this. Thanks for providing the reference! I’ll look into it.
You can set your instance to private and close registrations, which is what I am doing. That way you can use it only for yourself and a few friends and still be connected to the fediverse. The communities that you make on your self-hosted instance wouldn’t be connected, though.
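For anyone hunting for those switches: both live in the admin UI under site settings, and as far as I can tell they can also be flipped through the HTTP API. A hedged sketch, assuming a 0.19-style instance, an admin JWT in $TOKEN, and field names as I read them in the API docs (double-check against your version):

```bash
# PUT /api/v3/site edits site settings; "Closed" disables new sign-ups
curl -X PUT "https://lemmy.example.com/api/v3/site" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"private_instance": true, "registration_mode": "Closed"}'
```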
What’s the benefit of doing this apart from a technical challenge and fun? Such a server wouldn’t support the network in any way, right?
Well, as you mentioned before, it’s to enjoy the “technical aspects”, and there can be plenty of reasons beyond that. For one, if the instance you signed up on shuts down, your account goes with it. I feel better self-hosting because I am in control of when/if it shuts down.
I’ve actually been playing with this idea myself! Is it hard to set up/manage?
It was super easy. I just edited the config file in the Ansible playbook, and I needed to tweak the certbot task because I use Cloudflare, but other than that it was a breeze.
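For anyone else considering it, the flow with the official lemmy-ansible playbook is roughly this (paths from my memory of the repo’s README, with a placeholder domain; check the README for your version):

```bash
git clone https://github.com/LemmyNet/lemmy-ansible.git
cd lemmy-ansible
# copy the example inventory and per-domain config, then edit them
cp examples/hosts inventory/hosts
mkdir -p inventory/host_vars/lemmy.example.com
cp examples/config.hjson inventory/host_vars/lemmy.example.com/config.hjson
# run the playbook against the server in your inventory
ansible-playbook -i inventory/hosts lemmy.yml --become
```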
One of the devs mentioned that the biggest draw on server resources is the direct web interaction and page loading, not the behind-the-scenes federation, where the database queries are simpler and the actions can be queued up and retried as needed. (I think apps would have the same issue, since the server is going to be doing the same kind of massive database queries to build your feed.)
If your comment takes a few minutes to get from your home server to another one when the site’s overloaded, it’s not a huge deal, but if your comment takes a few minutes to get from your browser to your server, the site’s basically down.
So moving yourself to your own server takes over the entire real-time load you would otherwise have put on someone else’s instance, and the additional background load of syncing threads to/from your server is a lot smaller.
I feel like I have something to offer in terms of my thoughts on this matter. Under President George W. Bush, Congress passed the Patriot Act. I have read certain parts of it, if not most of it, and my understanding is that if someone is threatening to harm themselves or someone else, especially a child, and giving great detail and context about the crime or self-harm they’re going to commit, that could be problematic. It could get you shut down by the US government and perhaps even sued in some cases, because you decided not to do anything and let somebody just talk about, for example, how they’re going to kill a child.

Say, for example, some crazy guy wanted to lay out his plans to commit genocide against a minority group within the United States. If you did nothing about that, and then something crazy happened and investigators traced it back to a post on your website, I think there might be some legal complications for you down the road. The same goes for, say, a guy planning to rob a bank in great detail and then doing it, or a serial killer talking about the various killings he had committed. You are giving people like that a platform to hype themselves up to actually commit the crime and to celebrate their criminality.

There is a fine line between satire and what can be considered an actual threat to public safety. Many of us have different views on censorship, but I think calls to genocide and the planning of criminal acts and murders completely cross the line. Anyway, I hope that helps.