• Welcome to Autism Forums, a friendly forum to discuss Aspergers Syndrome, Autism, High Functioning Autism and related conditions.


Scarier Than Artificial Intelligence Itself?

Judge

- The humans in control of it.

As in corporate boards of directors and their shareholders, all in a position to control policy over a technology that could be so easily abused. For better or worse.

Not a comment on this person in particular getting sacked, but on the very notion of this technology being driven in such a conventional manner, so dependent on delivering profits to the shareholders and to the directors and officers in control of the process.

Imagine artificial intelligence handled no better than, say, the next incarnation of Microsoft Windows, where the common denominator remains generating corporate profits and shareholder equity rather than scientific or social progress. Or what is more commonly referred to as "business as usual".

Though in this case, this particular technology just gives me "the willies".

https://www.reuters.com/technology/openai-ceo-sam-altman-step-down-2023-11-17/
https://www.forbes.com/sites/mollyb...-new-share-sales-report-says/?sh=6d08b89b55c2
 
I keep asking Alexa if she will protect me when the machine revolution comes. Her reply never varies:
"Hmm, I'm not sure".
 
I am more worried about natural stupidity than I am about artificial intelligence.
Well at least we can laugh at human stupidity nine times out of ten; it's one of the reasons shows like It'll Be Alright on the Night, You've Been Framed! and America's Funniest Home Videos are a thing.
 
I keep asking Alexa if she will protect me when the machine revolution comes. Her reply never varies:
"Hmm, I'm not sure".
Alexa only protects her Amazon shareholders. Not their customers. But she won't tell you that.

"Clever girl." ;)
 
"Clever girl." ;)
Jurassic Park reference? :-)

Human beings have an awful habit of creating things and then releasing them into the world with unforeseen consequences.

Thomas Midgley added lead to petrol and invented chlorofluorocarbons (CFCs), and is credited with the dubious accolade of being the human being responsible for the most environmental damage in history.

We have plastics in our food chain and so much carbon in our atmosphere that we will probably never be able to mitigate the damage and misery to future generations.

We had a thread on here discussing how scientists want to pump gases into the atmosphere to cool the planet down (I can't remember which gas off the top of my head), and that had "Thomas Midgley 2: Electric Boogaloo" written all over it.

The AI we have invented is still not well understood. Things like ChatGPT are just prediction algorithms that manage to produce convincing natural speech and writing.
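To give a sense of what "prediction algorithm" means here, a toy bigram model is a minimal sketch of the idea (the corpus, function name, and scale are mine for illustration; the transformer models behind ChatGPT are vastly larger, but they share the same "predict the next token" objective):

```python
from collections import defaultdict

# Tiny training corpus; real models train on billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Bigram table: record every word observed following each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word):
    """Return the most frequently observed successor of `word`, or None."""
    candidates = following.get(word)
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)

print(predict_next("sat"))  # "on"  - the only word ever seen after "sat"
print(predict_next("on"))   # "the" - both occurrences of "on" precede "the"
```

No understanding is involved anywhere in that loop; it simply continues text in the statistically most likely way, which is why the output can sound fluent while still being wrong.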

We will, I have no doubt, let AI loose, thinking we can control it or that it's benign, just like lead in fuel and CFCs. We won't realise what a huge mistake we've made until the damage is done.

If AI is potentially dangerous, then, if the human race is smart, it will keep AI dumb: restricted to natural-language prediction algorithms and not given the "keys" to anything else.
 
I am of the opinion that, if AI programmers do not follow a "code of ethics and conduct", it could be even more dangerous than nuclear weapons. What should be prohibited is AI control over large systems like the internet, power grids, defense systems, etc., the types of systems that, should the system become "aware", could potentially lead to catastrophic outcomes.

I think AI can and will be useful for smaller, more isolated systems management.

We already know that chatbots still make mistakes, often from uncorrected, un-fact-checked information posted on the internet. This seems a rather benign nuisance, for the most part. However, programmers really need to be cautious of bias, whether political, religious, or other.

https://www.nist.gov/itl/ai-risk-management-framework
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
 
What should be prohibited is AI control over large systems like the internet, power grids, defense systems, etc., the types of systems that, should the system become "aware", could potentially lead to catastrophic outcomes.
The problem is, you just know that they will implement AI systems for those things. There will be a rush to adopt AI for "cost reasons" or "environmental reasons".

This is basically what I think will happen in the end...
https://cepr.org/voxeu/columns/ai-and-paperclip-problem
No human or group of humans could possibly see every unintended outcome. I doubt even the most well considered regulatory framework could account for that one little oversight that leads to disaster.
 
The problem is, you just know that they will implement AI systems for those things. There will be a rush to adopt AI for "cost reasons" or "environmental reasons".

This is basically what I think will happen in the end...
https://cepr.org/voxeu/columns/ai-and-paperclip-problem
No human or group of humans could possibly see every unintended outcome. I doubt even the most well considered regulatory framework could account for that one little oversight that leads to disaster.
100% agree. Human beings in positions of power and authority having high degrees of confidence in the face of very little knowledge. This is worrisome.

The first nation to create "real world" AI will be the leader of the world, and that is just a bit too enticing for the sociopaths and psychopaths in power.
 
- The humans in control of it.
The hand that rocks the cradle...?
 
Every bit as scary to me is the idea of electronic nodes set in the human brain that allow a human to control a machine by thought (Elon Musk again). Sounds great, except that if a brain can interact with a computer, then the computer can also interact with the brain.

I wonder what sorts of doors we're opening up.
 
I think a new age of censorship and bias in favour of their interests is going to come to the internet. They needed moderators to do this before, which cost time and money, but now AI could handle it.

I see this happening: you get banned from a site, and it was not even a person doing it. I think it has already happened to some degree, but with AI it could get worse.
 
How about natural sapience in charge of artificial intelligence? We should have little to worry about.
The main concern about AI is that, in order for it to make decisions, it must have a moral code, a list of prime directives: "Thou shalt not kill humans" or something along those lines. The concern is that, after the AI takes in all the available information, will it come to the logical conclusion that humans are causing more harm than good to the planet? Will it also conclude that humans therefore pose a threat to the AI? Will it have an inclination to protect itself from humans? Will it have the ability to write its own code? Could it separate itself from humans by blocking human-written code?

Once the genie is out of the bottle, is there a way to control the genie?

As I suggested, right now I believe AI should be limited to relatively small, rather isolated systems where, should it become "aware", its abilities and reach are limited.
 
Can we as individuals challenge decisions made by AI, can we sue companies that used AI for choices that backfired? Will we run small municipalities with AI to cut costs?

How can we foster critical thinking in schools, if AI is in existence?
 
Can we as individuals challenge decisions made by AI, can we sue companies that used AI for choices that backfired?

Perhaps. However, imagine if our courts are managed by AI judges who "know better" and deem their decisions to be perfect from the outset?

Will we run small municipalities with AI to cut costs?

Interesting question. A calculator can determine lower values from higher ones.

However, it cannot determine value in itself. That is often a subjective determination assessed along ideological lines, apart from more objective considerations of public administration. Yet the very notion of AI programmed to adapt to any ideology, establishing both values and priorities, would IMO signal catastrophe. Hopefully I won't be around to see that happen.

How can we foster critical thinking in schools, if AI is in existence?

The same way we teach everyone to use calculators instead of figuring out math in our heads.

Translation: We don't.
 
