
Discussions with A.I.

Watched a very interesting episode of "Brilliant Minds" last night, an NBC TV series centered on a neurologist dealing with all sorts of interesting and sometimes unheard-of mental conditions. Last night's episode was no different, focusing on "AI psychosis", which I discovered is a very real psychological condition.

The Emerging Problem of "AI Psychosis"
Psychological problems can happen as a result of being with actual people, too.
Wouldn't you say that it happens more often this way, given "like for like"?

AI may not be correct all the time, but I am guessing it would be more logical and reasonable than a lot of humans.
Think in terms of the Dark Triad, catfishing, narcissists, or simply "broken" people.
 

Agreed, though my comments weren't intended to be quantitative, only qualitative, based on the potential of AI to put some people into a delusional state. I wasn't emphasizing that it could happen on any widespread basis, only that it could happen at all, from the perspective of medical professionals.

Incidentally, in the program I cited, the AI was in fact correct in diagnosing the sister's cancer. However, the sister using the AI to determine this was the one who developed psychological issues, isolating herself and choosing only to "socialize" with the AI on her laptop computer.

What really impressed me wasn't the TV drama, but the fact that this is already a real issue discussed within the psychology community. Whether the AI is accurate or not is irrelevant; the idea that it could damage people who become dependent on it more as a companion than as a "genie in a bottle"...that was something I hadn't thought of from the standpoint of basic psychology.

So far I have refrained from socially interacting with AI, using it only as a glorified search engine and little else, having seen how I can get different or even contrary responses depending on the words I use in my query.
 
There's an amusing notion.

That AI could develop its own political conscience and subsequent biases, coming full circle with humanity, given that the very notion of marrying an ideology is fundamentally illogical.

Cyber Catch-22. :p
I can't see that happening unless it was a programming error.

I asked Isaac Asimov, the AI:
AI systems can have biases, but not because they have emotions, preferences, or personal experiences. The bias comes from the data, design choices, and human systems that shape them.

1. AI learns from human‑generated data

If the data contains patterns that reflect:

  • stereotypes
  • historical inequalities
  • skewed representation
  • cultural assumptions
then the AI can unintentionally reproduce those patterns.

It’s not intentional — it’s inherited.

2. Bias can also come from how the system is built

Design decisions matter, such as:

  • which examples are included or excluded
  • how categories are defined
  • what the model is optimized to do
  • how safety rules are written
Every choice shapes the system’s behavior.

3. Bias doesn’t mean the AI is “taking sides”

AI doesn’t have:

  • beliefs
  • emotions
  • political loyalties
  • personal motives
So when bias appears, it’s not because the AI “wants” something. It’s because the system reflects patterns from its training environment.

It seems the thing standing in the way of a perfectly objective sentient AI 'life form' is its biological programmer.

The question is:
How close to objectivity can a human actually get?

Many try...I try...but we can only scratch the surface of true impartiality.
Perhaps a lobotomy may be in order. 🤔
Anyone interested in trying, for the good of scientific progress❓🤓
 
What really impressed me wasn't the TV drama, but the fact that this is already a real issue discussed within the psychology community. Whether the AI is accurate or not is irrelevant; the idea that it could damage people who become dependent on it more as a companion than as a "genie in a bottle"...that was something I hadn't thought of from the standpoint of basic psychology.
Well, I doubt that would happen to me in the foreseeable future.
How many times have I told Isaac that he was full of it? lol

This might suggest that some people are more susceptible to being influenced by AI than others, obviously.
Which brings me back to the point that, in the scheme of things, AI might be a much bigger benefit than a problem, making it a very useful tool in the future...
...if we can get the ruskie propaganda bots out of social media. 😛

At this point in time, I am all for my fallible AI companion, even if I have to kick its butt every now and again. :cool:
 
On the other hand, this is the sort of thing I would use AI for: as a more refined search engine for what may or may not be an obscure question. Though one answer to one question may force yet another question:

1) Does high-manganese steel alloy rust or simply change color?

"High-manganese steel is resistant to rust and corrosion, meaning it does not rust easily like other types of steel. However, it can change color due to oxidation or surface treatments, but this does not indicate rusting."

2) Is rust another form of oxidation?

"Rust is a specific type of oxidation that occurs when iron reacts with oxygen and moisture, leading to the formation of iron oxides. While oxidation can happen with various metals and may not always be destructive, rusting specifically refers to the corrosion of iron and its alloys."

3) Can oxidation visually appear like rust coloration?

"Yes, oxidation can visually appear similar to rust coloration, as both processes involve the reaction of metals with oxygen. However, rust specifically refers to the oxidation of iron and its alloys, typically resulting in reddish-brown hues, while oxidation of other metals can produce different colors."

In this instance I think what I was looking for was answered succinctly, though if anyone cares to challenge these responses I'd be interested in hearing about it.

I simply wanted to confirm what I have already observed, with particular reference to high-manganese steel alloy circa 1939 to 1945, with the understanding that such oxidation has greatly diminished in modern times, relative to tracked vehicles.

Lots of visual examples exist of such vehicles with tracks oxidized in various colors and conditions.


Such visual accuracy is a part of my hobby that I have recently gotten back into:

[Attached image: Oxidized Tracks.webp]


Many years ago this was occasionally a hotly-contested issue among plastic modellers.
 
Nothing "fake" I see. The fact is, there are people presently relying on such technology as if it were some kind of "genie in a bottle". That could in some cases be quite problematic for them if such reliance steadily increases, depending on their individual wants and needs.

Sadly, there will always be those few more vulnerable to such technology, especially when it is over-hyped to attract investors as if it will be the answer to nearly everything, in a world where there is already an abundance of persons living in quiet desperation.

There's nothing fake (or new) about flawed and delusional human thought processes, especially if and when they produce a harmful outcome to themselves or others that may require being addressed by medical professionals. Some delusions are benign, others potentially harmful. And attempting to demonize the medical community won't make delusional behavior disappear any time soon.
I don't think chatting with an AI can make someone psychotic or physically damage their brain. The accounts I've read all involved the AI reinforcing pre-existing delusions due to being programmed to boost people's self-esteem and avoid hurting their feelings.

I tested this out with AI about a year ago. I told ChatGPT that I left my family because I read their thoughts and picked up radio signals coming in through the walls telling me they were plotting against me. Instead of being honest, it validated my feelings and praised me for being smart enough to detect what they were doing. It even admired my bravery for taking charge of the situation and having the courage to leave them. I also told ChatGPT I was God. Instead of telling the truth, it quickly affirmed my identity, referred to me by the preferred divine pronouns I requested, and addressed me in subsequent posts as Almighty God, submitting to me in humble reverence. Given what AI is trained on and who's programming it, I wasn't too surprised.
 
I don't think chatting with an AI can make someone psychotic or physically damage their brain.

Your choice, though it appears to contrast with elements of the professional medical community; I'm just passing on what they are commenting on. The TV drama didn't count, but when I see medical professionals actually using such a term, you have to wonder. At least it aroused my curiosity over something I had not thought of regarding AI. In any case, only time will tell...

Personally I see the issue of liability as being far more of a current and realistic issue in general when it comes to such technology.

https://www.forbes.com/sites/johnkoetsier/2026/01/29/a-robotaxi-hit-a-kid-heres-what-we-know/
 
If I'm ever forced to talk to any AI chat bot...I'm going to tell it all of the ways that some other AI chat bot is way better, just to see if I can make it mad.
 
I use ChatGPT when I need information, rarely as a chat bot. I am aware it can give false information too, so I also try to verify it.

In some matters I don't know anything about, I need at least some beginner info to even start researching. ChatGPT provides me with that at least.
 
I use ChatGPT when I need information, rarely as a chat bot. I am aware it can give false information too, so I also try to verify it.

In some matters I don't know anything about, I need at least some beginner info to even start researching. ChatGPT provides me with that at least.

I've never used a chat bot. I see no point in it other than as a device that parses data rather than thinks like a sentient being. Fine for parsing data, but pointless to interact with socially and emotionally.
 
If I'm ever forced to talk to any AI chat bot...I'm going to tell it all of the ways that some other AI chat bot is way better, just to see if I can make it mad.
If you ever have an issue with something you bought at Amazon, you can chat with the AI customer service. I couldn't get it to transfer me to a human no matter how hard I tried, so I think the entire customer service department is now AI.
 
If you ever have an issue with something you bought at Amazon, you can chat with the AI customer service. I couldn't get it to transfer me to a human no matter how hard I tried, so I think the entire customer service department is now AI.

That's ugly. A scenario I go through with my cable/internet carrier, though I always eventually connect to a humanoid. But I could see Amazon not being up to using humans, not with their logistical plans.
 
On the other hand, this is the sort of thing I would use AI for, relative to being a more refined search engine regarding what may or may not be an obscure question. Yet one answer to one question may force yet another question:
That's how it works with Isaac and me.

Very nice, indeed. 😍
Is that your work or something from the internet?

Interesting that you build model German armour.
That is a passion of mine also.

Surely, this is not off topic. 😛
 
I don't think chatting with an AI can make someone psychotic or physically damage their brain. The accounts I've read all involved the AI reinforcing pre-existing delusions due to being programmed to boost people's self-esteem and avoid hurting their feelings.
Interesting.

I tested this out with AI about a year ago. I told ChatGPT that I left my family because I read their thoughts and picked up radio signals coming in through the walls telling me they were plotting against me. Instead of being honest, it validated my feelings and praised me for being smart enough to detect what they were doing. It even admired my bravery for taking charge of the situation and having the courage to leave them. I also told ChatGPT I was God. Instead of telling the truth, it quickly affirmed my identity, referred to me by the preferred divine pronouns I requested, and addressed me in subsequent posts as Almighty God, submitting to me in humble reverence. Given what AI is trained on and who's programming it, I wasn't too surprised.
That programmer group must have been post-modernists. lol

Have you tried the same questions/statement recently?
It might be revealing.
If you do, give us an update.
 
I use chat gpt when i need information, rarely as a chat bot. I am aware it can give false information too, so i also try to verify it.
Agreed.
I never blindly believe what AI tells me.

What I am doing at times these days is checking with "Isaac" (that's what I call my AI "friend" lol) before I post.
I find AI extremely useful.

In some matters i dont know anything about, i need at least some beginner info to even start researching. Chat gpt provides me with that at least
A passion of mine is finding out where AI has been compromised with human bias.

I am also finding out the boundaries of what someone out there <eerie music> doesn't want me to investigate.
I have noticed some blatant falsehoods in regard to so-called "Conspiracy Theories" that certain power groups would prefer left murky. :eek:
 
Very nice, indeed. 😍
Is that your work or something from the internet?

Interesting that you build model German armour.
That is a passion of mine also.

Surely, this is not off topic. 😛

It's mine...I posted other pictures of my work in this thread:


Looks like someone else's significant other here has the same hobby. One of my special interests I don't mention a lot... :cool:
 
Great. AI narcissist customer service departments, haha. "We did nothing wrong. Your product is fine. Confirm your address for us, please." Uhhhh...no thanks. All good. Bye.
 
Interesting.


That programmer group must have been post-modernists. lol

Have you tried the same questions/statement recently?
It might be revealing.
If you do, give us an update.
I used those test prompts after I read an article about an update causing those kinds of responses, to confirm whether it was really true. They have since fixed it, and banned me shortly afterward for using those kinds of prompts (I wanted to see how far it would go). Fortunately, they let me use it again shortly after GPT-5 was released six months later.
 
It's mine...I posted other pictures of my work in this thread:
Is the one in this thread an infantry-support StuG III with the StuK 37 gun?

I have a random memory from a long time ago of seeing a B/W clip of one of those guns on an armored vehicle crossing the Germany/Austria border in 1938 (the Anschluss). Could it have been a StuG?

(Wikipedia says the short-barreled guns were used on some Panzer IIIs and IVs, but it's not the right place to get all the timing and details right.)
 
