
Do you agree that technology is going too far?

Is everyone just going to ignore that such AI has decapitated at least two drivers...?

Many will.

When the common denominator is money, ethics will not prevail over avarice.

It may seem like genius on steroids, but it's arrogance on steroids as well.

Art imitating life:

 
The amount of AI slop these days is infuriating and very worrying. I see it on YouTube all the time now in clickbaity thumbnails.
What I can't stand is the AI voice over on videos. It reads in a flat monotone that would put you to sleep if it didn't keep stumbling over punctuation.
 
IMHO, self-driving should expand from the controlled environment of company parking lots to the controlled environment of highway express lanes fitted with sensor-friendly features, rather than to roads in general. Full autonomy on ordinary roads is the dream of Capital: replace the millions of driving employees and seduce the phone-addicted drivers. I would certainly not even attempt to send a robot down some roads I use regularly.
However, I do have some sympathy for the developers. A robot does not get distracted, so it can respond instantly to a sudden emergency, such as another vehicle out of control, and it can reliably use all the available traction for evasive maneuvers without losing control. Two-thirds of humans don't even get on the brakes in time, let alone steer skillfully. Yet if a company deploys that accident-avoidance system and saves twenty lives, it is still in deep doo-doo if it makes a mistake a human would not have made and kills one person that way.
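To put rough numbers on the reaction-time point, here is a toy calculation; every figure in it is an illustrative assumption, not data from any study or vehicle:

def stopping_distance(speed_ms, reaction_s, decel_ms2):
    # Distance covered during the reaction delay, plus the braking distance
    # from the kinematic formula v^2 / (2a).
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

speed = 30.0    # m/s, roughly highway speed (~108 km/h), assumed
braking = 7.0   # m/s^2, near-limit deceleration on dry pavement, assumed

human = stopping_distance(speed, reaction_s=1.5, decel_ms2=braking)  # assumed human reaction delay
robot = stopping_distance(speed, reaction_s=0.1, decel_ms2=braking)  # assumed sensor-to-brake latency

print(f"Human: {human:.0f} m, robot: {robot:.0f} m, difference: {human - robot:.0f} m")
# Prints roughly: Human: 109 m, robot: 67 m, difference: 42 m

Under those assumptions, the machine's shorter reaction delay alone buys about 40 metres of stopping distance at highway speed, which is the kind of margin that decides whether an emergency becomes a collision.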
 
AI voiceovers are used on so many YouTube presentations, many of which I end up laughing at because the AI cannot seem to master the nuances of English words that are not so easily pronounced. It's even worse when they pronounce foreign words in the same presentation.
 
Sadly, humans will learn these errors now.
 
I was watching a video about AI saying that history is going to be 'destroyed', because there are already fake WW2 photos and posts claiming they are real. How will you know, in the future, whether something is a genuine historical document or photo, or a fake?
 
We will need very secure systems that verify linkage to physical historical artifacts. I'm already seeing a flood of sensationalized fiction posing as fact, because it attracts attention better than the truth does and is far easier to produce.
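For what it's worth, part of the plumbing for that kind of verification already exists in ordinary public-key cryptography. The sketch below (Python, using the third-party "cryptography" package) shows the idea: a trusted archive or camera signs the SHA-256 digest of an original scan, and anyone holding only the public key can later check that a circulating copy is bit-identical to what was signed. The file names and keys are hypothetical, and this only proves a copy is unmodified; it cannot prove the original scene was real.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path):
    # SHA-256 digest of a file, read in chunks so large scans are fine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.digest()

# At archiving time: the trusted party (hypothetical) signs the original scan.
archive_key = Ed25519PrivateKey.generate()
signature = archive_key.sign(file_digest("ww2_photo_original.jpg"))

# Later: anyone with the public key can test a circulating copy.
public_key = archive_key.public_key()
try:
    public_key.verify(signature, file_digest("ww2_photo_copy.jpg"))
    print("Copy is bit-identical to the signed original.")
except InvalidSignature:
    print("Copy has been altered, or was never signed by this archive.")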

"The more advanced AI models get, the better they are at deceiving us — they even know when they're being tested
By Roland Moore-Colyer published July 25, 2025
More advanced AI systems show a better capacity to scheme and lie to us, and they know when they're being watched — so they change their behavior to hide their deceptions.

The more advanced artificial intelligence (AI) gets, the more capable it is of scheming and lying to meet its goals — and it even knows when it's being evaluated, research suggests.

Evaluators at Apollo Research found that the more capable a large language model (LLM) is, the better it is at "context scheming" — in which an AI pursues a task covertly even if it misaligns with the aims of its operators.

The more capable models are also more strategic about achieving their goals, including misaligned goals, and would be more likely to use tactics like deception, the researchers said in a blog post."
 
