You don't know when, or if, a Large Language Model gets it wrong

LLMs will always hallucinate; it is in their nature.

Recently I used Grok on x.com while trying to learn Rust. I gave it a few prompts and it responded, but I got the sense that the answers might not be accurate, and I had no way of knowing when, or if, the model was getting things wrong or hallucinating.

Do I really trust an LLM to teach me Rust, or would I rather take the long, hard path of reading the official Rust book? I would prefer the latter.
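To illustrate the kind of detail I would want to verify against the official book rather than trust to a generated answer, here is a minimal sketch of Rust's ownership rules — the function and variable names are my own invented example, not anything a model produced:

```rust
// Passing a `String` by value moves it into the function,
// so the caller can no longer use the original binding.
fn takes_ownership(s: String) -> usize {
    s.len() // `s` is dropped when this function returns
}

fn main() {
    let greeting = String::from("hello");
    // Using `greeting` after a move would be a compile error,
    // so we clone it here to keep our own copy.
    let len = takes_ownership(greeting.clone());
    assert_eq!(len, 5);
    println!("{} has length {}", greeting, len);
}
```

A subtly wrong explanation of a rule like this compiles in the reader's head but not in the compiler, which is exactly why I would rather learn it from the book.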

I personally believe that LLMs are mainly being used for profit by organisations and groups, not to aid humanity: people flood the internet with LLM-generated responses in pursuit of money. Why do you think so many news articles dwell on the cost? All that LLMs reveal about humanity is that we like money and use them in search of profit.

I advise staying away from LLMs: they are overhyped, and I myself haven't found a legitimate use case for them over simply searching online for the answers.

Of course, AI can now search the web, so the entire internet is being crawled and harvested by and for these large language models.
