Doing the Lord’s work in the Devil’s basement
That’s the problem with imaginary enemies. They have to be both ridiculously incompetent, and on the verge of controlling the whole world. Sounds familiar doesn’t it?
Yeah, it certainly depends on the teacher. If you’re into that kind of history, Pacome from Blast made a gigantic episode about this in his “L’empire n’a jamais pris fin” series. One of the best YouTube essays I’ve ever seen in French.
Where was that? My hometown is like 20 km from a city that was entirely burned down and had its population eradicated during the first Albigensian Crusade, and I swear to God it was never mentioned to me. My parents had never heard of it either.
From reading your post it seems like you could be interested in the Jesus movement (that is, the Jewish followers of Jesus, before Catholicism was codified and adopted by the Romans as the state religion). Everything that wasn’t authoritarian, fear-based Catholicism was branded as “gnostic heresy” and purged from the canon, but there’s some real good shit in there that is very close to the core message of Christ.
A recent(-ish) example of gnostic Christianity is Catharism, a heresy that lasted for a few centuries in the south of France. The Cathars had no clergy, just a caste of ascetic wise men and women who would walk the land and dispense wisdom and judgement. Very egalitarian, very spiritual, very Christ-like. As you can imagine, they got crushed in one of the rare “self-crusades” in history (meaning the King of France sent his own armies to burn down cities in his own country and murder thousands upon thousands of his own subjects). And of course there is not one history teacher in France who will tell you about this episode.
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
Yeah, I did some looking up in the meantime, and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s as far as its attention reaches.
There are some models fine-tuned for an 8K-token context window, and some even for 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine), and summarization should still be reasonably good.
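To give an idea of what quantization actually does: the weights get stored at lower precision so the model fits in less VRAM, at a small cost in accuracy. A toy sketch of the core idea in plain Python (real schemes like the GGUF Q4/Q5 formats quantize per-block with stored scales; this is just the principle):

```python
def quantize_q4(weights):
    """Map each float weight to a 4-bit integer (-7..7) plus one scale.

    Stored as 4 bits instead of 16 or 32, hence the 4-8x size reduction.
    """
    scale = max(abs(w) for w in weights) / 7
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [x * scale for x in q]

weights = [0.12, -0.53, 0.98, -0.07]
q, scale = quantize_q4(weights)
restored = dequantize(q, scale)
# `restored` is close to, but not exactly, the original weights --
# that rounding error is the quality you trade for the smaller model.
```

The error per weight is at most half a quantization step, which is why Q4/Q5 models summarize nearly as well as the full-precision originals.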
If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach is still compatible with Open WebUI.
There are not that many use cases where fine-tuning a local model will yield significantly better task performance.
My advice would be to choose a model with a large context window and just throw the whole text you want summarized into the prompt (which is basically what a RAG pipeline would do anyway).
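And if the text still won’t fit, the usual workaround is map-reduce summarization: split the text into chunks that each fit the window, summarize each chunk, then summarize the summaries. A minimal sketch of the chunking step (the 4-characters-per-token ratio is a rough rule of thumb, not exact; a real pipeline would count tokens with the model’s own tokenizer):

```python
def chunk_text(text, max_tokens=8000, chars_per_token=4):
    """Split text into pieces that each fit a model's context window."""
    max_chars = max_tokens * chars_per_token
    chunks = []
    while text:
        piece = text[:max_chars]
        if len(text) > max_chars:
            # prefer cutting at a paragraph or sentence boundary
            cut = max(piece.rfind("\n\n"), piece.rfind(". "))
            if cut > 0:
                piece = text[:cut + 1]
        chunks.append(piece)
        text = text[len(piece):]
    return chunks
```

Each chunk then gets its own “summarize this” prompt, and the per-chunk summaries are concatenated and summarized one last time.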
I’m saying non-critical things about gen-AI on Lemmy of all places
so brave lmao
I’ve been dreaming of a place like Lemmy for 20 years, and I had a lot of theories about what it would be like. But I would never have predicted how salty it is. I swear three conversations out of four just descend into snark and name-calling; it’s a fucking shame if you ask me. It really is one of the least welcoming places on the whole internet.
You genuinely might need to touch grass.
This thread is hilarious to us all. I’m sorry you have to go through this, but it is, indeed, a great illustration of your point. The other commenter is making very large assumptions, and the goalposts are moving so fast they might actually be on wheels. Oh, and a new slippery slope just dropped: using ChatGPT will now worsen your communication skills (for unspecified reasons, according to bad communicators wholly unfamiliar with the matter at hand).
I think you missed an opportunity. It would have been much simpler to post “Does anyone else think AI kinda bad?” and rake in all the accolades and upvotes.
There was also a time when most of the universe was at the perfect temperature and density to cook pizza, I guess.
And certainly not as spooky as spectrography
How are emoji zoomer? If I had to guess I’d say they’re more of a millennial thing.
I suspect a lot comes from the ingredients being mediocre when you buy them at high demand periods.
I come from the French countryside; my father raises poultry and makes his own foie gras and deli meats. When I see the shit they sell at Christmas, which most of my fellow countrymen eat every year… I wouldn’t be surprised to read a comment similar to yours about French Christmas food.
Maybe your grandma can’t afford the good stuff, or doesn’t have access to it?
Why can’t a woman take illegal drugs? Control of your own body is a philosophical concept, not a legal one.
Kudos, OP, for a very unpopular opinion (at least on Lemmy).
I’d add that people trying to gatekeep what is and isn’t art are missing the whole entire point of it. I get the same vibe about AI art on Lemmy as when boomers criticize modern art on Facebook.
Any group that adds quotes around the word art (as in AI “art” or performance “art”) instantly loses any legitimacy on the subject. They’re virgins discussing sex acts.
I’m sure Terry Gilliam will be very sad to learn that collage isn’t art…
If you like to write, I find that storyboarding with Stable Diffusion definitely helps. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
Very useful in some contexts, but it doesn’t “learn” the way a neural network can. When you’re feeding corrections into, say, ChatGPT, you’re making small, temporary, cached adjustments to its data model, but you’re not actually teaching it anything, because by its nature, it can’t learn.
But that’s true of all (most?) neural networks? Are you saying neural networks are not AI and that they can’t learn?
NNs don’t retrain while they are being used: they are trained once, and afterwards they cannot learn new behaviour or correct existing behaviour. If you want to make one better, you need to run it a bunch of times, collect and annotate good/bad runs, then retrain it from scratch (or fine-tune it) on this new data. The same goes for LLMs, because LLMs are neural networks.
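The train-once-then-freeze point can be shown with the smallest possible “network”, a one-weight linear model (a pure-Python toy, not any real framework): the weight only ever changes inside the training loop, and the inference function has no mechanism to update it.

```python
def train(samples, lr=0.1, epochs=50):
    """Fit y = w * x by gradient descent.

    This is the only phase where the weight changes.
    """
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y
            w -= lr * error * x  # gradient step on squared error
    return w

def infer(w, x):
    """Inference: uses the frozen weight, never modifies it."""
    return w * x

w = train([(1.0, 2.0), (2.0, 4.0)])  # learns roughly y = 2x
print(infer(w, 3.0))                 # about 6.0
# Telling the model it's wrong at inference time changes nothing:
# infer() only reads w. To "teach" it, you'd have to call train() again.
```

That is exactly the situation with an LLM in a chat session: your corrections influence the text in the context window, not the weights.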
Did you listen to that Hardcore History episode? It was crazy.