On this topic, I am optimistic about how generative AI has made us collectively less tolerant of shallow content. Be it lazy copy-paste journalism with a few phrases swapped, or school testing schemes built on regurgitating facts rather than understanding, we have basically tolerated it, even though neither has value and both displace work that does.
But now that a rock with some current run through it can pass those tests and do that journalism, we are demanding better.
Fingers crossed it brings some positive change out of the mess.
Exactly
I hope it has the same effect that mechanization had on menial work. It raises the bar for what people expect other people to do.
Long term it could help us reach a utopia; short term a lot of people will be hurt by it.
deleted by creator
Not the same thing, dog. Being inspired by other things is different from plagiarism.
Humans learn from other creative works, just like AI. AI can generate original content too if asked.
AI creates output from a stochastic model of its training data. That’s not a creative process.
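To illustrate what “a stochastic model of its training data” means at toy scale, here is a deliberately simplified bigram sketch (nothing like a real transformer, just the bare idea: output is sampled from statistics of the training text, so it can only recombine what it has seen):

```python
import random
from collections import defaultdict

# Toy training text (hypothetical corpus, purely for illustration).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Build a bigram table: for each word, the words observed to follow it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence from the bigram statistics."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor in training
        # Stochastic step: the next word is drawn at random from the
        # distribution implied by the training data, not invented.
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 6))
```

Every word the generator can ever emit already appears in the corpus; scale and architecture differ wildly in real LLMs, but the sample-from-learned-statistics shape is the point being argued here.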
What does that mean, and isn’t that still something people can employ for their creative process?
LLM AI doesn’t learn. It doesn’t conceptualise. It mimics, iterates and loops. AI cannot generate original content with LLM approaches.
Interesting take on LLMs, how are you so sure about that?
I mean, I get it: current image-gen models seem clearly uncreative, but at least the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future, sufficiently large LLMs, at least to me.
So the question (again: to me) is not only “will LLM scale to (human level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before?”.
I’m not sure about either, but I’d assign roughly a 2/3 probability to the first, and, conditional on the first event and AGI being in reach in the next 8 years, a comparatively small chance to the second.