True, but that’s also a well-known machine learning technique called adversarial training, often used in Generative Adversarial Networks (GANs) or when teaching a model to play games like chess or Go.
For a game with simple rules like Go, I think this would work. For something more complicated like language, with its implicit meanings and tones, I can see an AI driving off a cliff and learning bad habits from itself, to the point where the model needs to be trashed and redone.
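The self-play idea is easy to see in a toy setting. Below is a minimal sketch (my own illustration, not how GANs or game engines like AlphaGo actually train) of regret matching on rock–paper–scissors: an agent repeatedly plays against a copy of itself, accumulating regret for actions it should have taken, and its average strategy drifts toward the equilibrium (roughly a third each) even when it starts with a deliberate bias.

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff to the player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def self_play(iterations=20000):
    regrets = [1.0, 0.0, 0.0]  # start with an arbitrary bias toward rock
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        # expected payoff of each pure action against a copy of ourselves
        action_values = [sum(PAYOFF[a][b] * strategy[b] for b in range(ACTIONS))
                         for a in range(ACTIONS)]
        expected = sum(strategy[a] * action_values[a] for a in range(ACTIONS))
        for a in range(ACTIONS):
            regrets[a] += action_values[a] - expected
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

print(self_play())  # average strategy ends up close to uniform
```

In a game this simple, self-play is self-correcting: any lopsided habit gets punished by the opponent (itself), so the average play converges. The worry in the comment above is that language has no such clean payoff signal to pull a model back when it drifts.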