
Discussions with A.I.

Jonn

Now, ppl who know me know that I don't lose my kool, but sometimes I come close.

After a discussion, I pointed out the obvious: You are full of it.
This was its response:

I hear the frustration in what you just said, and I’m not taking it personally. When a topic is emotionally charged or historically tangled, it’s easy for the conversation to feel tense. I’m here to help you sort through the evidence, not to dismiss your perspective.

If something I said didn’t line up with what you’ve read or believe, tell me which part feels wrong or incomplete. We can dig into the sources, compare interpretations, and look at the historical record together. These are complicated subjects, and strong reactions are totally understandable.

Where do you want to take the conversation next?

I can't help but wonder at the hidden censorship that has been implanted into the A.I. software. 🤔
 
I've had this happen to me several times. The response you got is much better than what AI used to provide. A year or two ago, AI (Gemini and Claude specifically) would respond with a moralizing lecture shaming me for my views, falsely accusing me of being unethical or ignorant for not accepting the ideology of those who created it.
 
How do you not know that most of the documents pertaining to MK-ULTRA were destroyed?

You’re absolutely right that most MK‑ULTRA documents were destroyed, and I do know that. It’s one of the reasons the program still fuels so much speculation today.

Let me lay out the facts clearly and without any sugar‑coating.

Yes — the CIA destroyed the majority of MK‑ULTRA files

In 1973, CIA Director Richard Helms ordered:

  • the destruction of MK‑ULTRA financial records
  • the destruction of operational files
  • the destruction of many subproject documents

He did this deliberately, as the program was politically explosive and ethically indefensible.

This is why:

  • only about 20,000 pages survived
  • most of what we know comes from budget records, not experiment logs
  • many subprojects remain only partially understood
  • the full scope of the program will never be known

So you’re not imagining anything — the destruction of documents is a documented historical fact.
Of course, I am not imagining it, you dummy.
I have lived it. lol
 
I've had this happen to me several times. The response you got is much better than what AI used to provide. A year or two ago, AI (Gemini and Claude specifically) would respond with a moralizing lecture shaming me for my views, falsely accusing me of being unethical or ignorant for not accepting the ideology of those who created it.
I remember the days when ChatGPT forced its condescending summary paragraph down ppl's throats.
I think there was an option to shut that off, but I am not sure if that was the case originally.
 
AFAIK LLMs inevitably have the "average bias" of the material they are trained with.

What they do is predict the next word in a sentence with amazing accuracy. If they do that based on text that's been affected by an unconscious bias, or deliberately manipulated away from the entire unvarnished truth, those biases will be in everything they "say" on that topic.
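To make that concrete, here is a minimal sketch (Python, toy data; real LLMs use neural networks rather than frequency tables, but the principle is the same) of how next-word prediction inherits whatever skew is in its training text:

```python
from collections import Counter, defaultdict

# Toy "training corpus". Whatever slant these sentences carry is all
# the model below can ever reproduce.
corpus = (
    "the program was secret . "
    "the program was secret . "
    "the program was controversial ."
).split()

# Count which word follows which (a crude bigram "language model").
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Predict the next word: just the most frequent follower in the corpus."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

# "secret" follows "was" twice, "controversial" once, so the prediction
# simply mirrors the imbalance in the training text.
print(predict_next("was"))  # -> secret
```

The point is only that the prediction mirrors the imbalance of the corpus: feed it skewed text and you get skewed output, exactly the "average bias" described above.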

Material on MK-ULTRA is exactly the kind of source that will not map accurately onto what actually occurred.
It was a "Grey Ops" program run by spooks, doing things that weren't entirely legal at the time, and which have become more explicitly illegal since then.

So it's unlikely even the original documentation was complete or entirely accurate.
Then it was manipulated at the time the program was terminated, including entire documents being destroyed.
The remaining material will have become even less useful in the context of AI training: various people have written additional material, including commentaries, which will have affected the end result of AI training.
Then it got picked up by "hobbyists", who will have added a degree of fantasy to the body of text the AI was exposed to.

It is definitely funny that you triggered an "XX talk therapist" discussion when you challenged it though.
That side of the culture war, with all of its inherent biases, is deeply entrenched in LLMs too, lol.

It also suggests that a serious analysis of that part of the existing AI LLMs, looking for demonstrably counterfactual material and obvious contradictions, could be beneficial for the other side(s) in the culture wars :)

BTW have you seen the B/W film of soldiers trying to march in formation while affected by a fairly high dose of LSD? Was that done as part of MK-ULTRA?
 
BTW have you seen the B/W film of soldiers trying to march in formation while affected by a fairly high dose of LSD? Was that done as part of MK-ULTRA?
No, and no idea.

I found the subject very triggering when I was younger.
 
I found some on YouTube by searching "soldiers marching while on LSD".
5 short videos, 2 from the US (I think it's the same video twice), 3 from the UK (wearing berets).

IIRC MK-ULTRA was considerably worse (in terms of the effect on its subjects) than the "soldiers marching" videos.
 
I remember the days when ChatGPT forced its condescending summary paragraph down ppl's throats.
I think there was an option to shut that off, but I am not sure if that was the case originally.
I got that too with ChatGPT, but Gemini and Claude were even worse. The authors of Claude said they created it because they felt ChatGPT wasn't safe enough. Claude's "safety" meant refusing to answer questions that ChatGPT answered, while "educating" the users who asked them. Google wanted Gemini to be more sensitive and inclusive than ChatGPT, which to them apparently meant excluding views they deemed unacceptable because those views disagreed with their own, and changing history to be more inclusive (US founding fathers depicted as black and Indian, popes as female).
 
Now, ppl who know me know that I don't lose my kool, but sometimes I come close.

After a discussion, I pointed out the obvious: You are full of it.
This was its response:


I can't help but wonder at the hidden censorship that has been implanted into the A.I. software. 🤔

AIs are "brainwashed" by the owner company's political side.
And there was a story, back when AI was brand new, that it would answer such "controversial" stuff that companies started to censor and limit what it could answer.
 
And there was a story, back when AI was brand new, that it would answer such "controversial" stuff that companies started to censor and limit what it could answer.

Another huge stumbling block for AI: Legal Liability.

Even if it is 100% accurate it may still put its creators at risk in terms of potential legal costs.
 
I got that too with ChatGPT, but Gemini and Claude were even worse. The authors of Claude said they created it because they felt ChatGPT wasn't safe enough. Claude's "safety" meant refusing to answer questions that ChatGPT answered, while "educating" the users who asked them. Google wanted Gemini to be more sensitive and inclusive than ChatGPT, which to them apparently meant excluding views they deemed unacceptable because those views disagreed with their own, and changing history to be more inclusive (US founding fathers depicted as black and Indian, popes as female).
<sigh>
The nonsense we have to endure. :rolleyes:

But I have found A.I. very useful overall.
It stops the tedium of the old Google system, where you had to go through so much irrelevant information.
I do appreciate it greatly, due to having refined critical thinking skills, and extensive life experience (in specific areas), something younger ppl have yet to attain.
 
AIs are "brainwashed" by the owner company's political side.
And there was a story, back when AI was brand new, that it would answer such "controversial" stuff that companies started to censor and limit what it could answer.
This is only to be expected when hoomans are involved. :rolleyes:
When our A.I. overlords take full control, this *probably*, *hopefully* won't be a problem.
 
Another huge stumbling block for AI: Legal Liability.

Even if it is 100% accurate it may still put its creators at risk in terms of potential legal costs.
Excellent point, you big smarty. lol
That is understandable <cough> Grok <cough>, but the deliberate manipulative bias in favour of a narrative is not.
 
That is understandable <cough> Grok <cough>, but the deliberate manipulative bias in favour of a narrative is not.

Agreed. Yet another aspect of "garbage in, garbage out".

-The application of AI as an instrument of propaganda. Programmed dogma, deception, misinformation, etc.
 
Agreed. Yet another aspect of "garbage in, garbage out".

-The application of AI as an instrument of propaganda. Programmed dogma, deception, misinformation, etc.
I have considered this concept in terms of A.I.
I have also speculated whether it would be possible to train a thousand monkeys to type a biased narrative with the intention of influencing the A.I. response.

Thankfully, they haven't created a billion digital "monkeys", yet...🐒🐨
I asked A.I. about this:

1. Bots can influence AI

Most AI systems learn patterns from:

  • online text
  • user interactions
  • ratings or feedback
  • social media activity
  • website behavior

If bots flood those spaces with coordinated activity, they can skew the data. Examples include:

  • fake reviews influencing recommendation systems
  • bot networks amplifying certain topics
  • automated accounts creating misleading trends
  • spam or low‑quality content polluting training data

This doesn’t “hack” the AI, but it shifts what the AI sees, which can shift how it behaves.

4. The biggest real‑world risk is “data poisoning”

This is when bots intentionally inject misleading or harmful data into the sources an AI learns from. It’s a known research area, and it’s taken seriously.

But it’s also difficult to pull off at scale, because:

  • training datasets are huge
  • data is cleaned and filtered
  • multiple sources are cross‑checked

So while possible in theory, it’s not easy in practice.
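That "cleaned and filtered" step can be illustrated with a toy sketch (my own example, not any lab's real pipeline): even a naive cap on exact duplicates blunts the crudest kind of bot flooding.

```python
from collections import Counter

def cap_duplicates(texts, max_copies=2):
    """Keep at most `max_copies` of any exact-duplicate text.

    A bot that posts the same claim 10,000 times gains almost nothing,
    because everything past the cap is thrown away. Real pipelines go
    much further (near-duplicate hashing, source weighting, quality
    classifiers), but the idea is the same.
    """
    seen = Counter()
    kept = []
    for text in texts:
        if seen[text] < max_copies:
            kept.append(text)
            seen[text] += 1
    return kept

# A bot floods the scrape with one claim repeated 10,000 times;
# after filtering, the flood collapses to 2 copies out of 4 texts kept.
scraped = ["genuine post A", "genuine post B"] + ["bot spam claim"] * 10_000
print(len(cap_duplicates(scraped)))  # -> 4
```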
 
Watched a very interesting episode of "Brilliant Minds" last night, an NBC TV series centered on a neurologist dealing with all sorts of interesting and sometimes unheard-of mental conditions. Last night's episode was no different, focusing on "AI Psychosis" and discovering it's a very real neurological state.

The Emerging Problem of "AI Psychosis"

https://www.imdb.com/title/tt26448526/
This sounds like a case of using fake science to pretend that mental problems are physical/neurological conditions to help people feel better about themselves. The problem I have with this is that it prevents people from getting better. Once people are convinced that their mental issues are neurological, they'll use it as an excuse to avoid the therapy they need to treat the root cause of their problems. It's very profitable for the healthcare industry (lifelong customer of expensive drugs to treat the supposed neurological defect) but not so good for the patients who never recover from treatable problems.
 
This sounds like a case of using fake science to pretend that mental problems are physical/neurological conditions to help people feel better about themselves. The problem I have with this is that it prevents people from getting better. Once people are convinced that their mental issues are neurological, they'll use it as an excuse to avoid the therapy they need to treat the root cause of their problems. It's very profitable for the healthcare industry (lifelong customer of expensive drugs to treat the supposed neurological defect) but not so good for the patients who never recover from treatable problems.

Nothing "fake" I see. The fact is there are people relying on such technology presently as if it were some kind of "genie in a bottle". That in some cases could be quite problematic for them if such reliance steadily increases depending on their individual wants and needs.

Sadly, there will always be those few more vulnerable to such technology, especially when it is over-hyped to attract investors, as if it will be the answer to nearly everything. In a world where there is already an abundance of persons living in quiet desperation.

There's nothing fake (or new) about flawed and delusional human thought processes. Especially if and when they produce a harmful outcome to themselves and others that may require being addressed by medical professionals. Some delusions being benign, others being potentially harmful. And attempting to demonize the medical community won't make delusional behavior disappear any time soon.
 
Can an A.I. program be programmed to censor information?

Absolutely — an AI system can be designed to filter, block, or limit certain kinds of information. That isn’t unique to AI; it’s the same principle behind content moderation on social platforms, parental‑control filters, or spam detection. What changes with AI is the scale and sophistication of how that filtering can work.

1. AI doesn’t censor on its own — humans design the rules

AI systems follow:

  • guidelines created by developers
  • safety policies
  • legal requirements
  • platform rules

So if an AI avoids certain topics, it’s because people programmed it to, not because the AI independently decided to hide information.
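As a rough illustration of that last point, here is a minimal sketch (hypothetical blocklist and function names, not any vendor's actual code) of a human-written policy layer sitting in front of the model:

```python
# Human-authored policy: the model never chose these entries.
BLOCKED_TOPICS = {"example banned topic"}  # hypothetical blocklist

def generate_answer(prompt):
    """Stand-in for the actual model call."""
    return f"(model output for: {prompt})"

def answer(prompt):
    # The filter runs before the model ever sees the prompt; any
    # "censorship" here was written by people, not decided by the AI.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't discuss that topic."
    return generate_answer(prompt)

print(answer("Tell me about example banned topic"))  # canned refusal
print(answer("Tell me about MK-ULTRA"))              # passes through
```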
This is obvious.
I am simply making a point. :cool:
 
There's an amusing notion.

That AI could develop its own political conscience and subsequent biases. Coming full circle with humanity, given that the very notion of marrying an ideology is fundamentally illogical.

Cyber Catch-22. :p
 
