which predicts the next item in a sequence based on what’s come before. But that isn’t enough to explain everything that large language models can do.
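To make the prediction idea concrete, here is a minimal sketch, not from the article itself: a toy bigram model over a made-up corpus that does exactly what the sentence above describes, picking the most likely next word given what came before.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (purely illustrative).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word`."""
    followers = transitions[word]
    return followers.most_common(1)[0][0] if followers else None

# Generate a short sequence by repeatedly predicting the next item.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # with these counts: "the cat sat on the cat"
```

An LLM does the same job at vastly greater scale, scoring every possible next token with a neural network rather than a lookup table; the point is that either way, the model can only echo patterns present in its training data.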
Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating the same toxic content.