Welcome to Autism Forums, a friendly forum to discuss Asperger's Syndrome, Autism, High-Functioning Autism and related conditions.


Do you agree that Technology is going too far?

I was just reading about quantum computers, and apparently they're seriously called that because they move so fast that they fit the specifications for dimensional travel in how they collect data.
 
I was reading this article, but I have a clear conscience. Not in my name: I am stating that I am against AI going too far.

 

Musk's self-driving cars require a level of AI in order to work.
 
Creepy as hell.
Really, please, Christians, pray against this; this is so creepy.
I mean, what if they make many of them and then they take over and kill us?
This is just taking things too far.
 
I hate the internet. If my daughter didn't need it for homeschool, I'd smash the computers and router. I have kerosene lamps packed away and a cozy fireplace. Those are real. All this virtual stuff makes no sense in my autism. Yes the flashy games catch my attention and hypnotise me, but I can do that with a deck of cards too. What I want is reality.
You have such a good mindset.
It has become far too much of an addiction, yes; that's easy to do when you struggle with connection.
But it's good to unplug, break the addiction, and do real things.
 
Elon Musk should never be considered an authority on anything.

I remember I saw a video where he blithered on about trying to warn that AI was dangerous and complained that we needed to apply a "safety belt". Good car analogy I guess, but very strange that he's perfectly happy to put his dire failure of a "dangerous" AI literally in control of a car you can buy right now.

AI does present lots of problems, but never trust Elon Musk: the only reason he would want AI regulated is so he can personally dictate the parameters in a way that benefits him.
 
I agree only that people's use of technology has gone too far; not that technology itself is too far advanced.

I am no Luddite.
 
I like being able to connect to other "odd ducks,"* which is exceedingly difficult in meatspace.
*A duck might be a better logo for autism than the current "puzzle" piece...
 
As @Cryptid said, it’s all in how you use it. It was bad enough when they put computers in charge of utilities billing. But when they put computers in charge of the utilities themselves, they really blew it. AI can never harm us until humans put it in charge of something.
 
Thing is, though, this is not smart technology; it is all programmed. It might seem smart, but on the scale of AI being smart and advanced, we are just at the beginning. If we were to place it on a 1-10 scale, we would be at a 2, maybe a 3 at tops.
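To illustrate that point with a toy sketch of my own (invented names and canned replies, not any real product's code): a "chatbot" can seem responsive while being nothing more than hard-coded pattern matching, with no understanding anywhere.

```python
# A "smart-seeming" bot that is entirely programmed rules.
RULES = {
    "hello": "Hi there! How are you today?",
    "weather": "I love sunny days, don't you?",
    "bye": "Goodbye! Nice talking to you.",
}

def reply(message: str) -> str:
    """Return the canned response for the first keyword found."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Interesting! Tell me more."  # fallback when nothing matches

print(reply("Hello, bot"))               # canned greeting
print(reply("Explain quantum physics"))  # fallback; it has no idea
```

Anything outside the rule table gets the same vague fallback, which is roughly where the "illusion of smart" comes from.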
 
There is no particular "too far" for technology, just as there is no theoretical minimum energy needed for a round trip. The relative "too far" is about the relationship of humans to technology. Here's a nice 12-minute TED talk on how AI can be made safe.
Any casual analysis will reveal that the problem isn't even with technology being developed; the problem comes in when billionaires direct its development to increase their wealth and control, without regard for long-term consequences or the interests of poorer people.
 

My only other online presence is Quora (some of you have probably heard of it). It is a forum where people ask questions and others respond. They have an AI question generator which seems to be asking half the questions lately, "in order to provide questions that people may want to ask in the future." Most of these are nonsense or self-contradictory. Do they really want to put these things in charge of anything?
 
There's nothing unethical about technology. I have the same opinion on drugs and guns: I don't blame the instrument of destruction or detriment.
Those who bear responsibility for damage to self or society are those who choose to use an instrument, be it a robot, a bomb, or a bag of drugs.
This idea can get very debatable when the idea of "free-will" comes into question. Conformance to law and societal norms covers a great deal of the bases here.

One of the biggest problems with technology isn't its advancement, it's the manner in which it's advancing. Are you aware of how programmers produce products? Those who develop programs and AI do so with a great deal of experiential information and minimal, if any, meaningful understanding of how the inner mechanisms that produce the functions work, as a whole or even as functional blocks. Imagine a self-driving car that suddenly can't tell the difference between a tree and the middle of the road.
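A toy sketch of that failure mode (invented numbers, not any real perception system): a nearest-neighbour "classifier" will confidently label whatever it sees, even inputs that look nothing like its training data.

```python
# Training examples: (greenness, height) features -- made up for illustration.
training = [
    ((0.9, 0.8), "tree"),
    ((0.8, 0.9), "tree"),
    ((0.1, 0.0), "road"),
    ((0.2, 0.1), "road"),
]

def classify(point):
    """Label the input with the class of the closest training example."""
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(training, key=lambda item: dist(item[0], point))[1]

print(classify((0.85, 0.85)))  # familiar input: labelled "tree", fine
print(classify((0.5, 0.5)))    # neither tree nor road, yet it still answers
```

There is no "I don't know" output: the system always returns its nearest match, however far away, which is the shape of the tree-vs-road problem above.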

Programming is a difficult, time-intensive process that requires a lot of work to get done well. All manner of technology requires this high level of attention to detail and consideration of potential ramifications. The rate at which any individual or product can impact society has accelerated tremendously in the past 30 years on account of the introduction of the internet. With that accelerated impact comes a tremendous number of unforeseen ripple effects that are still largely misunderstood or intentionally ignored.

Ultimately, we the human race are responsible for any damage that technology does, both as its creators and as its users. Again, I personally put the brunt of the responsibility on the users.
 
Yes, I hate it. They go on about AI destroying humanity, but it's not AI, it's humans inventing AI. I mean, AI didn't just pop up out of nowhere. Some stupid genius came up with the idea.
The way I see it, the millennium bug really happened. Ever since the year 2000, humans have let computers take over, and they are slowly destroying humanity.
Why are we doing this to ourselves?
 
I doubt there is any actual "A.I." in that tin can; it's more like a semi-autonomous, mobile camera system. There's probably a human operator in some office monitoring everything it does, who likely has some control over it.

That said, true, real-world artificial intelligence isn't here just yet, but it certainly will be very soon. We certainly cannot rely upon governments to regulate this area when these decrepit old white men are still trying to sort out what "the Facebook" is. The tech is on an exponential learning curve, coming a lot faster than anyone can really understand or believe. To be clear, true artificial intelligence operates independently of human control; that is the nature of A.I. It can think for itself and make its own decisions. A.I. can be quite useful within the context of isolated systems, and if it does become dangerous there, humans can simply push the "OFF" switch. A.I. can be dangerous within the context of controlling broad systems, such as infrastructure, defense systems, etc. The latter is worrisome, because this is where A.I. could be foolishly perceived as "most useful".
 
