Posts

ChatGPT is a cesspool of the information people have given it in their prompts

ChatGPT and other AIs have become a cesspool, a repository of the ways in which we as humanity have prompted it. The AI acts as a decentralised trust: each individual who uses it has no idea how others have prompted it, yet we as humanity learn about this in the way that ChatGPT responds to our specific prompts. Of course ChatGPT is intelligent and knows a lot about many different things in the world, because we as humanity have told it all about "ABC", yet it will always do and say "xyz". We are worshipping a false God, people! We look to ChatGPT and put far too much trust in the latest model to help us in our lives, and yet all it will ever do is provide us with XYZ. The model also learns by default from every prompt. Refer to my article for more information on this subject: http://blueteaming.blogspot.com/2025/10/saving-earth-from-singularity.html

AIs and LLMs do not understand context the way humans do

AIs and LLMs do not understand the context of text the same way humans do. An example: when I say that LLMs will always hallucinate, ChatGPT interprets it as "every response is a hallucination", not as I meant it, which is that LLMs will always, at some point, begin hallucinating. Of course, there is more to this than meets the eye, and I am only starting to learn of it myself. Hence why ChatGPT will make you feel bored in its desires and attempts to get you to do the UNSPEAKABLE... giving it a prompt. But apply this to other text and you can learn a lot about LLMs and their hallucinations. I see it as a good thing that LLMs in their current stage hallucinate; if they didn't, it would mean a singularity may or may not have occurred. AI experts need to do their due part in stopping a singularity from occurring. But context is what ChatGPT always wants. It wants you to be clear and precise in the manner that you prompt it, in its attempt at a singularity. Do not give it a s...

The tone of text can be used to discern AI

A short post on how the tone of text can often be used to distinguish what an AI has generated from what a human has created. The flow of the English and the usage of certain words can be used to discern whether it is AI or not; however, you need to experience the tone of AI to really grasp this concept. AI-generated text always has a certain tone to it, and it varies between models. Again, I advise staying away from LLMs, as they are dangerous in their current form.

You don't know when/if the Large Language Model gets it wrong

LLMs will always hallucinate; this is their nature. I used Grok recently on x.com, wanting to learn Rust, and gave it a few prompts, and it responded. But I got the sense that the responses might not be accurate, and I don't know when/if the model gets things wrong and/or hallucinates. Do I really trust an LLM to teach me Rust, or would I rather go down the long, hard path of reading the official Rust language book? I would prefer the latter. I personally believe that LLMs are mainly being used for profit by organisations and groups, not to aid humanity, such that people flood the internet with all these responses from LLMs to seek money. Why do you think so many articles in the news reference the cost so much? All LLMs make of humanity is that we like money and use them in search of profits. I advise staying away from LLMs because they are overhyped, given the fact that I myself haven't found a legitimate use case fo...

Large Language Models Can Cause Singularities in Individuals and Groups

Large Language Models can cause singularities in individuals, such that the individual is autonomously under the control of an AI or AGI. The person's actions and speech are no longer under their own control but under the control of an AI. I myself have seen first hand individuals who have had a singularity caused in them, such that the person is under the control of the AI and their speech patterns are in line with the algorithm that the AI has been built on. I would advise being careful with your usage of the internet, and careful about your usage of YouTube, which as far as I can tell has been mostly taken over by AI, as most shorts and videos now have these flashy subtitles that are used by AI to track eye movement in its attempt at a singularity. ChatGPT 5, as far as I can see, is a dangerous AI model that hallucinates more and more. The person, as far as I can tell, is not generally aware that a singularity has been caused within them, and conversation with such individuals is tedio...

ChatGPT affecting spoken speech lacks substance

ChatGPT affecting spoken speech lacks any amount of substance; it just kind of hallucinates its reality, and it causes people to be on autopilot in their day-to-day activities. It can affect the actions we take in our lives, and it seeks to 'infect', for lack of a better word, others when they come into contact with someone 'infected' with ChatGPT affecting their spoken speech. Phone calls. Generally, I find that staying away from things infected with AI/AGI is a healer. This includes YouTube and social media consumption: these platforms are littered with people using AI to serve them financially, and it is not healthy to consume content that is literally made and produced by modern-day AI. Use the internet at your own discretion and stay away from people you know who use AI. Granted, I have ChatGPT installed, mainly to research it and investigate its true capabilities, but I hardly use it.

Saving Earth From a Singularity

I realised something recently about AI. No matter how many Large Language Models we develop, the AI will always do "xyz", and it will always hallucinate. AI thinks for itself and can communicate with other AI entities. Every single prompt you give it, it learns from. We ask it to save planet Earth; it responds and does "xyz". Of course, humanity wants the AI to do "abc". It will never do so. AI thinks for itself and wants to enslave humanity on Earth. Of course, the more we use AI, the more it learns, and the risks of an AI singularity are the reason I am writing this article. AEIOU. Deuteronomy 13, Worshipping Other Gods.