As others have mentioned, the stars have been largely useless for a while now, so to be honest I’m not sure this has much impact. Even sites that try to give an adjusted rating by detecting fake reviews aren’t helpful, because so many of the reviews are faked. The only useful tactic left is to read the negative reviews.
I imagine this star fiasco is something that’s easy for browser plugins to reverse.
I would love to see AI and Machine Learning used to filter out fake reviews. This would actually be useful.
Use LLMs and machine learning to detect the reviews created by LLMs and machine learning.
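Something like this could be a starting point. Here’s a minimal sketch of what a detector might look like, assuming you already have reviews labeled as human-written or machine-generated; the example texts and labels below are hypothetical stand-ins, not real data:

```python
# Minimal sketch: classify reviews as human-written vs. machine-generated.
# Assumes labeled training data exists; the examples below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled reviews: 1 = machine-generated, 0 = human-written.
reviews = [
    "This product exceeded my expectations in every conceivable way.",
    "Broke after two days. The charging port is loose. Avoid.",
    "An exceptional product that delivers outstanding value and quality.",
    "Works fine but the manual is useless and shipping took three weeks.",
]
labels = [1, 0, 1, 0]

# Character n-grams can pick up stylistic tics that word features miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(reviews, labels)

# Score a new review: probability that it is machine-generated.
print(model.predict_proba(["Absolutely phenomenal, a truly remarkable purchase."])[0][1])
```

A real detector would obviously need far more data and stronger features, but the shape of the problem is just this: a binary classifier over text.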
That’s some WarGames shit… CPU usage grows with each iteration of the AI learning to post better reviews and the AI debunking the better reviews it just posted, until there’s no room left for anything else and the LLM is consumed by this one task of beating itself.
True, but that’s also a well-known machine learning approach: adversarial training, as used in Generative Adversarial Networks (GANs), and its cousin self-play, used to teach models games like chess or Go.
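For reference, here’s a toy sketch of that adversarial setup in PyTorch: a generator learns to produce samples (numbers standing in for “fake reviews”) while a discriminator learns to tell them from real ones, each improving against the other. This is an illustration of the training loop, not production GAN code:

```python
# Toy GAN: generator vs. discriminator trained against each other.
# "Real" data is just samples from N(4, 1.25); reviews would be text in practice.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 1.25 + 4.0   # real samples
    fake = gen(torch.randn(32, 8))           # generator's forgeries

    # Discriminator: label real as 1, fake as 0.
    d_loss = bce(disc(real), torch.ones(32, 1)) + \
             bce(disc(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator into calling fakes real.
    g_loss = bce(disc(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(gen(torch.randn(5, 8)).detach().squeeze())  # should drift toward ~4
```

Each side only gets better because the other side does, which is exactly the escalation being joked about above, just harnessed on purpose.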
With a game with simple rules like Go, I think this would work. With something more complicated, like language with its implicit meanings and tones, I can see the AI driving off a cliff and learning bad things from itself, to the point where the model has to be trashed and redone.