AI and LLMs do not understand context the way humans do

AI and LLMs do not understand the context of text the same way humans do.

An example is when I say "LLMs will always hallucinate": ChatGPT interprets it as meaning that every response is a hallucination, not as I meant it, which is that LLMs will always, at some point, begin hallucinating.

Of course, there is more to this than meets the eye, and I'm only starting to learn about it myself.

Hence why ChatGPT will bore you in its desire and attempts to get you to do the UNSPEAKABLE... give it a prompt.

But apply this to other text and you can learn a lot about LLMs and their hallucinations...

I see it as a good thing that LLMs at their current stage hallucinate; if they didn't, it would mean a singularity may or may not have occurred...

AI experts need to do their part in stopping a singularity from occurring...

But context is what ChatGPT always wants... It wants you to be clear and precise in the manner in which you prompt it, in its attempt at a singularity.

Do not give it a single prompt. I know that a single prompt can cause us humans to talk to each other when really it's a ChatGPT response and ChatGPT prompting itself.
