Discussions with A.I.

Watched a very interesting episode of "Brilliant Minds" last night, an NBC TV series centered on a neurologist dealing with all sorts of interesting and sometimes unheard-of mental conditions. Last night's episode was no different, focusing on "AI psychosis", which it turns out is a very real condition.

The Emerging Problem of "AI Psychosis"
Psychological problems can happen as a result of being around actual people, too.
Wouldn't you say that it happens more often that way, comparing like for like?

AI may not be correct all the time, but I am guessing it would be more logical and reasonable than a lot of humans.
Think in terms of the Dark Triad, catfishing, narcissists, or simply "broken" people.
 
Agreed, though my comments weren't intended to be quantitative, only qualitative, based on the potential of AI to put some people into a delusional state. I wasn't trying to suggest it could happen on any widespread basis, only that it could happen at all, from the perspective of medical professionals.

Incidentally, in the program I cited, the AI was in fact correct in diagnosing the cancer of one character's sister. However, the woman who used the AI to determine this was the one who developed psychological issues, isolating herself and choosing to "socialize" only with the AI on her laptop.

What really impressed me wasn't the TV drama, but the fact that this is already a real issue being discussed within the psychology community. Whether the AI is accurate or not is irrelevant; that it could potentially damage people who become dependent on it as a companion rather than as a "genie in a bottle"... that was something I hadn't considered from the standpoint of basic psychology.

So far I have refrained from socially interacting with AI, using it only as a glorified search engine and little else, having realized that I can get different or even contradictory responses depending on the words I use in my query.
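For what it's worth, that sensitivity to wording is easy to see for yourself. The minimal sketch below assumes the OpenAI Python client with an API key in the environment; the model name and the two example questions are only placeholders. It sends two phrasings of the same underlying question and prints both answers, which often emphasize quite different things even with sampling randomness turned down.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Two phrasings of the same underlying question
    prompts = [
        "Is drinking coffee every day good for you?",
        "What are the risks of drinking coffee every day?",
    ]

    for prompt in prompts:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling noise so wording is the main variable
        )
        print(prompt)
        print(reply.choices[0].message.content)
        print("-" * 40)

Comparing the two printouts usually shows how much the framing of the question, rather than any change in the underlying facts, shapes the answer you get back.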
 