• Deestan@lemmy.world · 6 months ago

    On this topic, I am optimistic about how generative AI has made us collectively more negative toward shallow content. Lazy copy-paste journalism with a few phrases swapped, and school testing schemes based on regurgitating facts rather than understanding: neither has value, both displace work that does, and we have basically tolerated them.

    But now that a rock with some current run through it can pass those tests and do that journalism, we are demanding better.

    Fingers crossed it causes some positive change in the mess.

    • trolololol@lemmy.world · 6 months ago

      Exactly

      I hope it has the same effect as mechanization did for menial work: it raises the bar for what people expect other people to do.

      Long term it helps us reach a utopia; short term, a lot of people will be impacted by it.

    • Octopus1348@lemy.lol · 6 months ago

      Humans learn from other creative works, just like AI. AI can generate original content too if asked.

      • Prunebutt@slrpnk.net · 6 months ago

        AI creates output from a stochastic model of its training data. That’s not a creative process.
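
        To illustrate what sampling from a stochastic model of training data means, here is a toy sketch in Python. The bigram table and its probabilities are made up purely for illustration, not taken from any real model; real LLMs do the same thing at vastly larger scale over learned token distributions.

        ```python
        import random

        # Toy "model": next-token probabilities, as if estimated from
        # counts in training data (numbers are hypothetical).
        model = {
            "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
            "cat": {"sat": 0.6, "<end>": 0.4},
            "dog": {"ran": 0.7, "<end>": 0.3},
            "sat": {"<end>": 1.0},
            "ran": {"<end>": 1.0},
        }

        def generate(start="the", seed=0):
            """Repeatedly sample the next token from the learned
            distribution -- a stochastic process, not conceptualization."""
            rng = random.Random(seed)
            token, out = start, [start]
            while token != "<end>":
                choices = list(model[token])
                weights = list(model[token].values())
                token = rng.choices(choices, weights=weights)[0]
                if token != "<end>":
                    out.append(token)
            return " ".join(out)
        ```

        Every sentence it "writes" is just a walk through probabilities it absorbed from its training data, which is the point being made above.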

      • steakmeoutt@sh.itjust.works · 6 months ago

        LLM AI doesn’t learn. It doesn’t conceptualise. It mimics, iterates and loops. AI cannot generate original content with LLM approaches.

        • Quik@infosec.pub · 5 months ago

          Interesting take on LLMs, how are you so sure about that?

          I mean I get it, current image gen models seem clearly uncreative, but at least the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future sufficiently large LLMs, at least to me.

          So the question (again: to me) is not only “will LLM scale to (human level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before?”.

          I’m not sure on either, but I assign roughly a 2/3 probability to the first, and, given the first event and AGI being in reach in the next 8 years, a comparatively small chance to the second.