
Feedback on AI TECH?

MROSS

Well-Known Member
Personally, I'm mixed - that is, AI seems like a mixed bag of pluses and minuses.

Any TECH is only as good as the humans who build and apply that TECH.

I would shift firmly to a positive-leaning neutral sentiment on AI TECH if, for example, AI TECH can better augment TECH service people and also improve the experience of the (non-TECH) user.

It's helpful if both humans and AI can reduce aggravating user experiences (UX) and better help TECH service people troubleshoot TECH issues.

A potential win-win situation across the board?
 
I truly despise AI as a destroyer of privacy, individuality, creativity, safety, research, and accurate information, as well as the killer of self determination and autonomy.

AI is just a big murderous, unfeeling, plagiaristic sludge monster with its grimy pornographic fingers in everything. It lies so well that most people believe it. And it's way more dangerous than any of us could possibly imagine.

Lest we forget the new ghost towns left overnight without drinkable water the moment an AI data center opens up near their community.

Every query you ask AI, you might as well go out into the Sahara, look someone in the eye, open up a bottle of water, and pour it out on to the sand. That's how terrible one AI query is for the environment.

And then with each question you ask AI, open up another bottle of drinking water, and pour that out onto the sand as well, while looking someone in the eye. That's the cost of your fake girlfriend, neckbeards.

********
Totally off subject, but in high school I remember truly enjoying watching reruns of Welcome Back, Kotter. So here is an insult via Vinnie Barbarino for AI, and also for the engineers, business buffs, and politicians who are forcing it on us:

Up your nose with a rubber hose!


And yes, that is in fact John Travolta.
 
All of this is just in its infancy. We will make a lot of foolish mistakes with this whole situation... to say the least. It is and will be a mixed bag of pros and cons... this much is true. This is an entirely new phenomenon that human beings have never experienced... something that could very well be orders of magnitude more intelligent than us in terms of information retrieval, integration, and processing speed.

The reality is that we do not produce enough energy on this planet to meet the demands of AI... at least at the levels we are told we will "need". The big tech companies are going to be forced, at some point, to consider putting these AI data centers in orbit (always facing the sun) with solar panels. Information is then sent to low-Earth-orbit satellites and then to the surface. Obviously, there is a real push now for space... satellite systems, cargo, moon bases, mineral mining, etc. It's just beginning. There is a convergence of technologies that will allow this to happen, with robotics, AI, and spaceship tech... I give it 5 years... tops. Yeah... that quick.

The advances are so fast, at exponential rates at this point... month-to-month, week-to-week, major advances are being made... it boggles the mind. No one can think linearly anymore... it's all exponential curves today.
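Purely as an illustration of that linear-versus-exponential point (the starting value, increments, and growth rates below are made up, not benchmarks of any real system), here is a small Python sketch of how quickly the two curves diverge:

```python
# Illustrative only: invented numbers, not measurements of any real AI system.

def linear_growth(start: float, per_month: float, months: int) -> float:
    """Capability after N months if it grows by a fixed amount each month."""
    return start + per_month * months

def exponential_growth(start: float, rate: float, months: int) -> float:
    """Capability after N months if it grows by a fixed percentage each month."""
    return start * (1 + rate) ** months

for months in (6, 12, 24, 36):
    lin = linear_growth(100, 10, months)          # +10 units per month
    exp = exponential_growth(100, 0.10, months)   # +10% per month, compounding
    print(f"{months:>2} months: linear = {lin:7.1f}   exponential = {exp:7.1f}")
```

After three years the linear line has merely quadrupled while the compounding one is roughly thirty times the starting point, which is the sense in which linear intuition breaks down.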

At any rate, our current capacities and technologies are only scratching the surface... and we are experiencing our "growing pains" with it.

Time will tell, but in the meantime, expect it to be a "bumpy road".
 
Generally I tend to like it, but my big problem with it isn't really about the AI itself, but more about the users.

Nobody wants to learn anything about how to use it properly. They expect it to be like this perfect magic thing where you hit the "I WIN" button and perfect victory happens, all the work instantly done with no mistakes. And if a mistake happens, then oh gosh, what junk, it's so worthless, blah blah blah. That the user may have done something bloody stupid in the process never enters the equation.

It reminds me, very frequently, of some of those stories you hear from IT professionals. Like when they have to deal with some super angry customer who can't accept that no, you in fact cannot get your emails out of a monitor that is not attached to a PC or a power outlet and into which they tried to drill a keyboard. It's NEVER their fault for not taking the time to learn, no. It's always everyone and everything else's fault.

And watching people use AI is always like that to me. They expect "perfect" and cannot accept when learning needs to be done. Or when glitches or whatever happen, like they do with all things computer-related.

Of course, me saying all of this and rambling about it isn't even REALLY just about AI. It's any sort of tech. People want instant results. They DON'T want to learn.

Yes, I may be venting just a tad after decades of fixing stupid things. But also it's just... extra exaggerated with AI for some reason.


On a side note, there's an astonishing amount of misinformation about AI floating around, a LOT of people who can profit off of that misinformation (because it can be used to get people angry, and anger=clicks, and clicks=revenue), and yet again, a lack of anyone wanting to learn on their own. I'd say, one of the biggest things I learned in dealing with AI and watching others deal with AI, is just how many people will simply listen to the nearest talking head rather than A: doing research, or B: trying things for themselves.

It got me thinking A LOT about my own behaviors, and is one of the driving forces behind my increasingly intense desire to get away from YouTube and news media in general. On top of that, I've been on a serious "try new things, retry old things" kick recently, and this is why.


Overall though, for myself, I just use AI when I feel like it, or when I have a particular task I think it might be useful for. My first encounter with AI was a few years before ChatGPT showed up, before any of this stuff about image generators or whatever, so it's been a bloody strange thing to watch all of it as it unfolds.

...Also it's very annoying to see how many people will ask it some dumb medical question and then actually go with the related info. It makes me realize how many people REALLY need to learn safety rules for using the internet, because not only should you not be asking AI about it, you shouldn't be asking the internet about it either. But people keep doing that.
 
I saw a video on YouTube earlier about AI facial recognition software that flagged a man as having previously been trespassed from a casino. This was a different man with a different legal ID, but the police decided to arrest him anyway. The police officer trusted that the AI software was 100% accurate and that the police database wasn't.

I agree with the earlier comment about training users but I am concerned about how pervasive AI has become already.
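For what it's worth, here is a toy Python sketch of why a "match" from software like that is a probability rather than proof of identity; the embeddings and threshold are invented for illustration and not taken from any real vendor's system:

```python
import math

# Toy illustration: real systems use learned embeddings with hundreds of
# dimensions, but the failure mode is the same - two different people can
# still score above the match threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.90                   # hypothetical operating point

banned_patron = [0.61, 0.72, 0.33]       # embedding of the person on the watch list
innocent_lookalike = [0.60, 0.70, 0.38]  # embedding of a different person

score = cosine_similarity(banned_patron, innocent_lookalike)
print(f"similarity = {score:.3f}")
if score >= MATCH_THRESHOLD:
    print("Flagged as a match -- still needs a human to verify the legal ID.")
```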
 
One of the current problems with some AI systems is that they can pull information from all sources... not filtered for truth. Obviously, we all want correct information, and these providers do make an effort to program for maximal truth-seeking, but it's not that easy from a programming perspective.

So, as @Misery suggested, it really is up to the user to be very specific, provide context and perspective, and formulate queries in an unbiased manner. Easier said than done. Sometimes people want answers but do not know the proper questions to ask.

When a human asks a human a question, sometimes context and perspective are implied without saying it, and sometimes certain assumptions can be made. When you ask a computer, nothing can be assumed or implied.
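As a rough sketch of what "nothing can be assumed or implied" means in practice, here is a small hypothetical Python helper (my own illustration, not any chatbot's actual API) that forces the context, constraints, and desired format to be spelled out:

```python
# Hypothetical helper: it just shows how much of the "implied" context has
# to be written out when the other party is a machine.

def build_query(question: str, background: str,
                constraints: list[str], answer_format: str) -> str:
    """Assemble an explicit, context-rich query from its parts."""
    lines = [
        f"Background: {background}",
        f"Question: {question}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Answer format: {answer_format}",
    ]
    return "\n".join(lines)

# A human friend would infer most of this; a model has to be told.
explicit = build_query(
    question="Why might my download speeds drop only in the evening?",
    background="Home cable connection, 500 Mbps plan, five-year-old router.",
    constraints=[
        "Assume nothing about my setup beyond what is stated",
        "Say which stated detail each hypothesis depends on",
    ],
    answer_format="A ranked list of likely causes, with one test for each.",
)
print(explicit)
```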
 
I saw a video on YouTube earlier about AI facial recognition software that flagged a man as having previously been trespassed from a casino. This was a different man with a different legal ID, but the police decided to arrest him anyway. The police officer trusted that the AI software was 100% accurate and that the police database wasn't.

I agree with the earlier comment about training users but I am concerned about how pervasive AI has become already.
There is some talk of doing a full 3D body scan for an ID... driver's licenses, any legal ID... then putting it into a central database. If you commit a crime, the security cam system can track you... and/or others most likely fitting your description... eventually leading to an arrest. AI systems like this, obviously, are not 100% accurate at identifying criminals, but they would certainly assist in tracking and in "weeding out" those less likely to be a match.

China uses a similar facial recognition system on all of its citizens right now... and certain behaviors can affect credit scores, admission to buildings, etc. (see "How China uses facial recognition to control human behavior").
 
Of course, me saying all of this and rambling about it isn't even REALLY just about AI. It's any sort of tech. People want instant results. They DON'T want to learn.

Yeah, I've got to agree with this.

AI (or any variant of technology/software) is really at its best when we do our part to learn as much about it as we can. Asking questions and gaining an understanding or a new perspective can absolutely be achieved through AI, provided we check references to make sure it's not just stroking our ego (which it also does) and feeding us BS (which, admittedly, it's been getting a bit better about lately).

But the extreme paradox of it all is that, like most things, the AI will over-engineer something that a reasonably savvy person could pull off without a thousand AI iterations and who-knows-how-much potential environmental drain. A flowchart or a poster is one great example, but it trickles down to artwork, programming, and other domains: even if you only know a little, you're often better, smarter, and faster than the AI, especially if we measure on quality. For those who cannot tell the difference on most matters, I will try to hide my 'hipster' snobbery, or whatever my wife calls it, but real music and art hit differently; then again, I've also been tricked.

But if it's truly in its infancy, it may well bloom into something even crazier. I still think AI is a great learning tool (when used as a tutor alongside works created by humans), but the odd paradox is that the average person coming to AI has no interest in educating themselves about anything at all, so we're bound to see the result of that rather than the positives. My opinion is constantly changing around it as well, though, so I might not even agree with any of what I just wrote in the future :D
 
