I’m the administrator of kbin.life, a general-purpose/tech-oriented kbin instance.
I have auto redirect to 443. But --nginx works fine. I think it overrides stuff for whatever the specific url used is.
There’s a certbot addon which uses nginx directly to renew the certificate (so you don’t need to stop the web server to renew). If you install the addon you just use the same certbot commands but with --nginx instead and it will perform the actions without interfering with web server operation.
You just then make sure the cron job to renew also includes --nginx and you’re done.
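As a sketch of what that looks like (the schedule is arbitrary and the paths are whatever your distro uses), the crontab entry just needs the same flag carried over:

```text
# Hypothetical crontab entry: attempt renewal twice daily with the nginx
# plugin, so certbot reconfigures nginx in place rather than stopping it.
0 3,15 * * * certbot renew --nginx --quiet
```

`certbot renew` only replaces certificates that are close to expiry, so running it frequently is harmless.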
It makes sense that they issue short certificates, though. The sole verification is that you own the domain. If you sell/let the domain lapse and someone else takes it over, there’s only a limited time you would hold a valid certificate for it.
I mean for advert breaks, there are projects to do this to recorded tv automatically (with varying degrees of success depending on the config and the channel).
That is, you record the TV from either a TV receiver card, or streamed live channels to disk, then run this process on the mkv/mp4/ts, and it will either create a set of chapters marking the ads (so you can skip them), or it will just remove them entirely.
I don’t think it would transfer to “live” TV quite so readily though. Because it does scan the whole program to find things like logos etc to help work out where the adverts are. But, I mean a lot of the work has been done.
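Comskip is one such project, and it can emit an EDL file listing the detected advert breaks. Assuming its usual EDL output (one `start end action` line per cut, with action 0 meaning “skip”), a minimal parser for turning that into skippable ranges might look like:

```python
def parse_edl(text):
    """Parse comskip-style EDL output into (start, end) second ranges.

    Each line is "start<whitespace>end<whitespace>action"; action 0
    marks a cut (an advert break to skip).
    """
    ranges = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue
        start, end, action = float(parts[0]), float(parts[1]), int(parts[2])
        if action == 0:  # 0 = cut/skip in the EDL convention
            ranges.append((start, end))
    return ranges

# Example: two detected advert breaks in a recording
edl = "0.00\t210.40\t0\n1180.00\t1390.52\t0\n"
print(parse_edl(edl))  # [(0.0, 210.4), (1180.0, 1390.52)]
```

A player that understands EDL (or chapters generated from these ranges) can then jump past each break automatically.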
For removing all product logos. I mean, I bet we’re not far from the processing power to make it possible. But, probably a fair bit of effort needed.
I can imagine the “AI” chips being neutered for these kinds of tasks, like the “low hash rate” Nvidia cards.
I think this overall is a better idea. I’m going to say this because I thought I’d look into Rust today. So I installed it, set up VS Code to work with it, etc., and it’s all up and running. I thought I would port over a “fairly simple” C# project I wrote recently as a bit of a test.
While I’ve generally had success (albeit with 30+ tabs open to solve questions I had about how to do certain things, and only making it about 20% into the task), I’m going to say that it’s different enough from C, C++ and C# (all of which I can work with) that I really don’t think it’s fair to expect C developers who have day jobs and work on the kernel in their spare time to learn this. It’s fundamentally different, in my opinion.
Now, I don’t condone any bad attitude and pushing away of rust developers from the project. But there’s no way they’re going to want to do anything to help which involves learning a new language. It’s just not going to happen.
Likewise, C is not a language most new developers are learning. So I feel like over time there won’t be so much of an influx of new kernel developers, and any Rust-based kernel could find itself with more contributors, eventually taking over as the de facto kernel.
In terms of Redox (I haven’t looked into it yet): so long as there’s a different team working on the userspace tools, I would say the main task should be getting a solid kernel in place, with drivers for most popular hardware etc. The existing GNU tools will do until there’s a kernel that is able to compete with the C one. But that’s just my opinion.
Here’s what I think. Both opinions are correct.
Rust is sufficiently different that you cannot expect C developers to learn Rust to the level at which they have mastered C in order to work at the kernel level. It’s not going to happen.
I don’t really know too much about rust. Maybe one day I’ll actually mess around with it. But the one time I looked at a rust git repo I couldn’t even find where the code to do a thing was. It’s just different enough to be problematic that way.
So I think, probably, the best way IS to go the way Linus did. Just go ahead and write a very basic working kernel in Rust. If the project is popular it will gain momentum.
Trying to slowly adapt parts of the kernel to rust and then complain when long term C developers don’t want to learn a new language in order to help isn’t going to make many friends on that team.
I would agree, but there have been at least two updates in the last six months that restarted my machine before I even got to see the pending restart warning. I use it every day and shut down when I won’t be using it. So the restart happened less than 24 hours after any warning, if there even was a warning.
That has the potential to lose things I’m working on. Windows’ pathetic attempt to bring things back falls woefully short of functional.
Flash up alerts to say there are critical updates, sure, but actually triggering the restart should be a human interaction.
That’s got to be extremely rare. Not much you can do in that case. But they will hit many problems with that approach.
When you post in a thread you get an ID for that thread. When you post in a different thread you get a different id.
That’s what I said. You don’t need any ID to federate the messages. If you reply to a comment the nesting is based on the comment/post ID and not the usernames.
You couldn’t track a user’s posts after the fact, and I think that’s kinda the point.
Not so sure that’s true though. If you look at 4chan threads on some boards, you can recognise the individual anonymous posters from the ID next to them.
I suspect it’s using either a cookie or the IP address to track a user and, while not storing that info, generating an ID hash from perhaps a unique ID for the thread plus their details.
No reason you couldn’t federate using the same. But, even without that, each post and comment has a post ID and replies would be tracked that way. Just, you’d need to remember which replies were your own.
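A minimal sketch of how such per-thread IDs could be derived (the input format and truncation length are assumptions for illustration, not 4chan’s actual scheme):

```python
import hashlib

def thread_user_id(thread_id: str, user_key: str) -> str:
    """Derive a short per-thread display ID from a thread ID plus some
    user detail (e.g. IP address or cookie value). The same user gets a
    stable ID within one thread but a different ID in every other
    thread, and the raw user detail never has to be stored or federated.
    """
    digest = hashlib.sha256(f"{thread_id}:{user_key}".encode()).hexdigest()
    return digest[:8]  # truncate to a short display ID

# Same user, same thread -> stable ID; different thread -> different ID
a = thread_user_id("thread-42", "203.0.113.7")
b = thread_user_id("thread-42", "203.0.113.7")
c = thread_user_id("thread-99", "203.0.113.7")
print(a == b, a == c)  # True False
```

Because the hash is one-way, instances receiving the federated ID can display it without ever learning the IP or cookie behind it.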
The home instance could store for a thread some info about posts/comments from an IP or cookie too and highlight them. But that info wouldn’t be federated.
I actually don’t think it’d be a problem, really. But, is this something missing from our lives? I’m not so sure.
I mean, while they can block most things, to give people a usable experience they’re going to allow HTTP and HTTPS traffic through, and they can’t really inspect HTTPS traffic because of the TLS layer.
So for the best chance of universal success, running OpenVPN over TCP on port 443 is the most likely to get past this level of blocking. I guess they could block suspicious traffic in the session before TLS is established (in order to block certain domains). OpenVPN does support traversing a proxy, but it might only work if you specify it explicitly. If their network sets a proxy via DHCP, maybe you could see that and work around it.
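A hedged sketch of the relevant OpenVPN client-config lines (hostnames and ports here are placeholders); `http-proxy` is the directive OpenVPN provides for tunnelling through an HTTP proxy:

```text
# Hypothetical OpenVPN client config fragment: TCP on port 443 so the
# traffic looks like ordinary HTTPS from the network's point of view.
proto tcp
remote vpn.example.com 443

# Only needed if the network forces everything through a proxy
# (e.g. one advertised via DHCP/WPAD):
# http-proxy proxy.example.net 3128
```

Note that TCP-over-TCP adds latency under packet loss, so UDP is still preferable on networks that allow it.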
I did have fun working around an ex gf’s university network many years ago to get a VPN running over it. They were very, very serious about blocking non-standard services. A similar “through” the proxy method was the last resort they didn’t seem to bother trying to stop.
I mean I could have used the GDPR (still a thing in the UK, at least for now). But didn’t see it as worth it. It really wouldn’t be worth the risk selling data that was deleted from a GDPR request.
I don’t know that they’ll risk using the data from deleted posts/comments anyway, though. Most comments and posts will be deleted for a reason (moderation, or other mistakes) and as such likely aren’t going to make the best training data.
It’s far easier to just sell the live data and be done with it.
I think there were historically interoperability issues, and there used to be (my version of mbin is quite old), and maybe still are, issues federating dislikes (which stems from the way they were handled in kbin, which straddles both the thread-based and Mastodon-esque sides of the fediverse). But overall there aren’t the larger federation issues there used to be.
Right now, the choice mainly comes down to the interface you prefer, and whether you perhaps want a limited ability to work with Mastodon-type posts, since you can follow Mastodon users and see their posts within the mbin interface.
When I left reddit over the paid api, I left all my posts there.
But as soon as I heard about the plans re AI, I edited then deleted all content.
I see no reason why reddit should profit from my intellectual property without even consulting me about it.
God damn it Lrrr. You just had to keep off it for two weeks, TWO WEEKS!
What about Omicron Persei 8? Surely they got some medal in the human eating contest?
Sync process? The other comment was talking about the old receivers for the atomic clocks on SW/MW frequencies. It was a one-way thing.
Now in theory if a receiver also had GPS they could account for the distance. But, then they’d get far more accurate time from the GPS receiver so…
Yeah, but you need to factor in the distance to the transmitter. Going to add at least a few microseconds to your time accuracy!
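As a rough worked example (the distances are illustrative): radio propagates at roughly the speed of light, so the distance to the transmitter translates directly into a fixed delay on the received time signal:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def propagation_delay_us(distance_m: float) -> float:
    """One-way radio propagation delay in microseconds."""
    return distance_m / SPEED_OF_LIGHT * 1e6

# Even 1 km of distance costs ~3.3 us; a time-signal transmitter a few
# hundred km away adds on the order of a millisecond.
print(round(propagation_delay_us(1_000), 1))   # 3.3
print(round(propagation_delay_us(300_000)))    # 1001
```

This is exactly the correction a GPS receiver makes for each satellite, which is why GPS time ends up so much more accurate than an uncorrected longwave signal.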
Right! Just to prove a point, I am going to make an NTP enabled rolex, and sync it to my microsecond accurate local NTP server! :P
But I’ve never played smash. What does that mean? Oh! Oh.