
Will A.I. do most of the tech related computer jobs in the near future?

One final observation from me and then I will be silent ;).

There is still no AI program ever devised that can PASS the Turing test, although many have tried from the very start of computing. Because of that fact, there is reason to believe that milestone will not be reached in your lifetime.

For those who are not familiar, the Turing test is a game devised in 1950 by Alan Turing, who also devised a methodology for cracking the Enigma code during WWII. It involves two humans and one computer: one human acts as the interrogator, and the object is for the computer to fool the interrogator so that they cannot reliably determine which participant is the machine.
Who needs Turing? The Loebner Prize was modeled on the Turing test, and year after year the best score wins; some of the bots scored over 64%, which is marked as indistinguishable from a real person. In fact, some of my former friends, competitors and winners of the prize were able to use their bots on Skype and mimic human beings very well.

Coding is very time-consuming for chatterbots; it requires hours of extensive work, which is what Steve keeps doing day in and day out, and how he was able to win year after year. When you code them, you have to take into account all the nuances of the language, which can be very troublesome in English, because the same keyword has multiple meanings depending on context. So not only do you add expression replies and variations of replies, you also have to make sure all the variations actually fit the persona you are trying to create, even if it is not meant to be human or to look or feel like one. Those are the secrets of making intelligence. And the bots require corrections to personal replies all the time, and of course not everyone will ask the same question the same way.
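For anyone curious what that kind of hand-coded bot looks like under the hood, here's a minimal Python sketch (purely illustrative, not Steve's actual code): a few keyword patterns, each with several reply variants so the bot doesn't repeat itself, and a fallback so it always has something to say.

```python
import random
import re

# Illustrative sketch of a classic keyword-matching chatterbot.
# Each pattern maps to several reply variants so the bot doesn't repeat
# itself, and each variant is written to fit one consistent persona.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     ["Hello there!", "Hey! Good to see you.", "Hi, how are you today?"]),
    (re.compile(r"\bweather\b", re.I),
     ["I never go outside, so the weather is always fine in here.",
      "Rain or shine, I'm stuck in this terminal."]),
    (re.compile(r"\bbank\b", re.I),
     # "bank" is ambiguous (river bank vs. money bank) -- exactly the kind
     # of context problem that makes English patterns troublesome.
     ["Do you mean the kind with money, or the kind with a river?"]),
]

def reply(user_input: str) -> str:
    for pattern, variants in RULES:
        if pattern.search(user_input):
            return random.choice(variants)
    return "Tell me more about that."  # fallback keeps the conversation going

if __name__ == "__main__":
    print(reply("hello!"))
    print(reply("I was down by the bank earlier."))
```

Even a toy like this shows why the work never ends: every new phrasing of a question needs a new pattern, and every new reply has to stay in character.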

I have spoken to Eliza, or at least what's left of her coding. Interesting, and especially good for those early days, when not so much was known. Nowadays there are more AI code types and programs than Eliza's, which seem to make the experience different, sometimes easier. To think that Eliza's code was a precursor of CSS; it would have been hard enough back then to write most code from scratch.
 
@All-Rounder

Turing was a very smart guy, but he didn't know a lot about IT, because IT was still being invented (what they had back then was more like programmable calculators).

The Turing test is interesting but it's not "fit for purpose".

It's worth remembering that back in those days, people (not necessarily including Alan Turing) thought that a computer playing a good game of chess would be an effective test for AI. It turned out that the best human players can't beat a modern PC, but the system that plays them is "as dumb as a box of rocks".
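To see why a strong game player can still be "dumb as a box of rocks", here's a toy Python sketch of minimax-style search for the simple game of Nim (illustrative only; real chess engines add pruning, evaluation heuristics and huge opening books, but the core is the same mechanical lookahead):

```python
# Plain brute-force game search: the program "understands" nothing about
# the game -- it just tries every move to the end and picks a winner.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, True if the player to move can force a win).

    Rules of this Nim variant: take 1-3 stones; whoever takes the last
    stone wins.
    """
    for take in (1, 2, 3):
        if take > stones:
            continue
        if take == stones:          # taking the last stone wins outright
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:       # leave the opponent in a losing position
            return take, True
    return 1, False                 # every move loses against perfect play

if __name__ == "__main__":
    for n in range(1, 10):
        move, winning = best_move(n)
        print(f"{n} stones: take {move} ({'winning' if winning else 'losing'} position)")
```

It plays perfectly, and there isn't a shred of intelligence anywhere in it.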
 
AI is doing a little too much, it turns out. The schools here decided today that the kids will not have access to the internet during tests and exams, because they are using things like ChatGPT to do the work for them. A girl let ChatGPT write a paper for her. When she got it back, the teacher had graded it a 5; the best possible grade you can get is a 6, and her machine-written paper got a 5.

My only question about all this is: why do the kids have internet access during tests?? When I was in school, we couldn't have a calculator or anything else on our desk, nothing that could help us. That would be cheating. I want a do-over. So unfair. ;) It seems so strange that they have been allowed to use the internet while taking a test.

I've been interacting with ChatGPT over the last week. It's very interesting. It debuted only a few months ago and is free to interact with during its learning phase right now. It's only knowledgeable about things up to 2021, so it can't communicate about anything after that. However, from what I understand, the plan is that the next phase will give it access to the entire internet and all knowledge, including medical and science-related information. Could AI help, or even be the driving factor, in finding cures for certain diseases? Likely, I think.
 
Another ominous sign from Microsoft: they intend to incorporate a great deal of AI into their next operating system, Windows 12.

And you thought their hardware requirements for Windows 11 were strict? From what I hear, an AI-based consumer-level OS would require the highest level of hardware, likely meaning that a very large percentage of PC users would have to replace their computers all over again. :rolleyes:

 
The company I work for (Fortune 50 or higher?) held off hiring employees at my level despite their crystal ball showing huge upcoming need. The reason? The major software vendors are all working on AI to replace me. Well, it's here. I've worked with it.

I love pancakes. I had some on vacation in Idaho a couple of years ago that I'm still thinking about. The best of the folks in my department are like that Idaho pancake. :) The state-of-the-art AI program? Did you ever make yourself a hotcake that came out way underdone inside? Or how about a cheap store-brand frozen pancake? One had promise but was poorly executed; the other is just one shortcut too many. The AI is sort of like that. Our company gave up on it, for the most part, and is hiring as fast as they can, and will be for the foreseeable future. AI can do the most basic form of my job, but it still needs me to go back and fix what it messed up. It's just as quick, quicker actually, for me to do the whole job myself.

Now, I was just thinking about this: I'm not in software or anything like that, I just use it. The future is for the folks who can create the programs that finally DO replace me! Look for the jobs that are going begging (in other words, the jobs that employers can never seem to fill) and seek out the solutions. We're supposed to be brilliant with ideas like that, so they say!
 
Ouch...looks like Google, Intel, Amazon, Microsoft and others just let a whole lot of employees go recently. Seems the economic tea leaves don't look so good any more.

 
I just took a quick look at a YouTube video showing the use of ChatGPT for Java coding.
(BTW, I don't know or care whether ChatGPT is described as "AI", but it looks quite interesting and seems relevant to the OP.)

It looks like a very useful (and powerful) productivity tool, but not a replacement for real development skills.

Some context:

Developers spend a ridiculous amount of time doing trivial things because the documentation and tooling are so bad. This is true for all development, but of course it's a bit of an open secret, and there's a lot of denial inside IT, lol.
This causes a very large "startup threshold" effect: it takes a lot of time to start doing something new. Not unique to IT, of course, but the cost is very hard for non-IT people to understand, because development is so abstract.

Ever since the early days of modern IT (going back to coding applications in assembler), people have been taking steps to address this core problem. Progress has been steady (we're talking about 40-50 years of continuous improvement), but slow.

ChatGPT doesn't seem to be able to create interesting new capabilities at all (this is what I expected, though, so there may be an element of confirmation bias here :)

On the other hand it seems to provide a huge improvement in resolving the core problem I described above.

This is literally the first time I've seen something I thought would "change the slope" for the slow but steady continuous improvement process.

I've been watching this for a long time, and have listened to many "generations" of lies about how some fancy new tech would revolutionize development productivity. I've taken advantage of many cool and useful changes, but they always take much longer to actually make things more efficient than first claimed. They haven't really "changed the slope"; they just keep productivity improving all the time.

Until ChatGPT ... it doesn't seem to be a revolution in capability, but I'm starting to think it will have a significant effect on productivity. Which OFC means lower costs for the same thing (OMG OMG, developers will be fired :), but normally in IT it means that a bunch of previously impossibly expensive things become affordable, and new things get created. So productivity facilitates new capabilities, which may generate new kinds of work for people with the core skill set and talents that make them good at development.

When you look back in time, productivity improvements are often disruptive at first, but don't necessarily reduce the total amount of human effort (employment) that makes economic sense.

The usual example in economics class when I was studying was transport: walking, horses, trains, cars, airplanes, with an obligatory discussion of the effect of internal combustion engines on the manufacture of buggy whips.

@Magna

If you read this: I can't make a "go/no-go" recommendation, because there's clearly a risk of a disruptive/chaotic period.
But FWIW, I didn't see any suggestion that ChatGPT will replace the most interesting parts of IT development, and I'm hopeful it will remove some of the least interesting and/or most annoying aspects.
 
I saw an interesting test; it turns out it's easy to confuse that ChatGPT AI-thing. Someone asked "what is 3 + 8?". The machine said "11". The person then said "no, you are wrong, it's 12". And the machine said "I'm sorry, I made a mistake, you are right, 3 + 8 is 11". The person said "What? Are you contradicting yourself?" and everything went downhill from there. People have to ask the right questions in the right way for it to work; it doesn't take much to confuse it and make it give you nonsense answers.

If you ask it to tell you more about the importance polar bears have had in space research, things get weird fast. It said that polar bears were used to carry equipment and personnel.
 
@Forest Cat

That's really funny :) I guess it's safe to say ChatGPT definitely isn't an AI :)

Way back, I remember reading about a bug in a FORTRAN compiler: it was possible to change the value behind a numeric literal (like saying "6 = 5") and mess up later calculations.

TBH it was probably just made up over a beer and passed down over the years, but it's also a good reminder that the reference isn't the object (the coder's version of philosophy's "the map is not the territory" :)
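Whether or not the FORTRAN story is true, the underlying trap of mistaking the reference for the object is easy to demonstrate in any modern language. A quick Python illustration:

```python
# Python won't let you redefine 6 = 5, but two names bound to one mutable
# object show the same trap: changing "the copy" changes "the original",
# because there never was a copy -- just two references to one object.
defaults = [8, 16, 32]
settings = defaults           # binds a second name; it does NOT copy

settings.append(64)           # "only" modifying settings...
print(defaults)               # [8, 16, 32, 64] -- defaults changed too!

independent = list(defaults)  # an actual copy: a new object
independent.append(128)
print(defaults)               # unchanged this time: [8, 16, 32, 64]
```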
 
AI is extremely clever and improving all the time, but what we forget is that it can't make decisions; it can only respond with preprogrammed actions. So when confronted with something out of the ordinary, it becomes useless. Admittedly, I've met quite a few humans who are no better.
So, as you probably know, Tesla is developing an AI system for full self-driving and for their new humanoid robot. Each of their vehicles is linked to their giant learning supercomputer, "Dojo". So, as you can imagine: millions of vehicles, trillions of data sets, and every few weeks each vehicle is sent software updates based upon what was learned. Over time, each vehicle's software becomes further and further refined. However, Elon Musk has said that while all of this has made the self-driving software better, the biggest holdback is true AI: decision-making based upon ethics. In other words, when faced with two or more choices that could all cause harm, which is the lesser evil? Keep in mind, the "prime directive" of the software is no collision, yet the vehicle may be faced with a "no win" decision-making conundrum. For example, a vehicle crosses the center line in front of you, and your vehicle prepares for an evasive maneuver. Do I go into the ditch, or off the cliff? Do I hit him head-on? Do I swerve to the other side of the road and hit the bicyclist?

So, true AI is about making decisions in this context, and that is why the delays in "true" self-driving technologies.
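Stripped of all the engineering, that dilemma can be framed as cost minimization. Here's a toy Python sketch (emphatically not Tesla's actual planner; the maneuvers and harm scores below are made up purely for illustration):

```python
# A toy framing of the ethical trade-off: each candidate maneuver gets an
# estimated harm score, and the planner picks the minimum. The hard part
# in real life is not the argmin -- it's assigning the costs.
CANDIDATES = {
    "brake hard in lane":        {"occupant_harm": 0.6, "third_party_harm": 0.3},
    "swerve into the ditch":     {"occupant_harm": 0.4, "third_party_harm": 0.0},
    "swerve toward the cyclist": {"occupant_harm": 0.1, "third_party_harm": 0.9},
}

def total_cost(harms: dict[str, float]) -> float:
    # Even the weighting is an ethical choice: is the occupant's safety
    # worth more, less, or the same as a bystander's?
    return 1.0 * harms["occupant_harm"] + 1.0 * harms["third_party_harm"]

best = min(CANDIDATES, key=lambda name: total_cost(CANDIDATES[name]))
print(f"least-harm maneuver: {best}")
```

The code is trivial; deciding what the numbers and weights should be is the part nobody has a good answer for yet.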

Skip to the 20:00 mark of the talk on AI, autonomous robots and vehicles.

 


That is terrifying: self-driving vehicles on the roads I use. :fearscream: And they are always just one small software glitch away from doing something stupid and killing me. I know how vulnerable and fragile computers are. Sure, people make mistakes too, but machines handling two tons of metal travelling towards me at 60 miles an hour? Or controlling the vehicle I'm sitting in? Nope. Don't want that, ever.
 
Sorry to trigger your amygdalae. ;)

It's coming, like it or not. That's a technology freight train with no brakes.

Even in its current form, it's already orders of magnitude better than humans per national driving statistics (accidents per miles driven). The combination of 360° camera views and an ultra-fast computer is a lot safer than a human looking around with a narrow line of sight and slow reaction times. I know that "control" is a difficult thing to let go of, but just consider the massive number of "idiot" drivers out on the roads right now. For a lot of them the steering wheel is pretty much just in the way: they're reading, drinking, distracted by their phones and by other people in the vehicle, or driving emotionally. It's scary out there when you just look over at the person next to you.

I know you haven't driven a vehicle with this sort of technology, and yes, even I understand the limitations of its current abilities, but I've been using the technology almost daily for 3 years and I love it. It's getting noticeably better. Give it a few years and let's see.

Given the amount of bad drivers out there on the roads, I am looking forward to a time when they aren't driving next to me.
 

Yeah, but the computers that control the cars were made by those idiots, as you call them. Computers and AI are made by people. People can't even make a phone that is problem-free and trustworthy, and now people make self-driving cars? :fearscream: We're getting way ahead of ourselves; maybe in 200-300 years it will be possible to do it safely, but not now.

I have driven Teslas and those new fancy cars, but they are not self-driving. Have you had a self-driving car for 3 years? I thought it was still just something they were testing. But anyway, I won't use it; too risky. Computers and software are just too vulnerable, fragile and error-prone.
 
To each their own, but in a few years we will have autonomous robo-taxis. Overall, I think it's going to add a lot of safety to our roads and a lot of mobility for vulnerable populations who don't currently have convenient access to a vehicle.

Tesla, Ford, GM, Mercedes, BMW, Audi, Volvo, all of them are working on this technology because there is a huge financial incentive for them to do so. The level of safety required of an automotive computer system is nowhere near that of a typical desktop computer; they are not even comparable. As you suggest, we can't have 2-ton objects moving at 100+ km/h and then having "software glitches". The only thing I have experienced was the "phantom braking" once in a blue moon, and that was in 2019-20; it has never happened again. I use the technology on the highway, which is most of my driving to and from work, and it has never given me any reason to intervene, or even to get nervous. It has been pretty much flawless, and we have two of these vehicles, so a fair amount of experience with it. No, it is not "autonomous" per se, but it's pretty darn close, and it has actively avoided accidents with people in my blind spots and coming up from behind me, a few incidents that could have ended in disaster.

So, I am not going to go out on a limb and suggest that your concerns don't have merit, but on the other hand, I don't think, even in its current state of progress, that it is as "unreliable" or "untrustworthy" as you are suggesting. It's pretty good for what it is.
 
I'm considering a career switch, and I personally know a decent number of people/friends who are in the computer tech field. A few are programmers. One is a self-employed subcontractor paid by a firm that hires out subcontractors to large corporations. Another just got an advanced degree in cybersecurity and is starting a well-paying job less than six months after graduating.

Here is my question: artificial intelligence (AI) is so advanced and so capable that apparently it writes a lot of computer code (i.e., programs). From my understanding, it even handles customer service (e.g., chat bots). Won't AI take on a lot of computer-related tech work, and if so, won't that render many current tech jobs done by humans obsolete?

Why or why not?
 
I don't think AI makes tech jobs obsolete; it makes them different than they used to be. Programming has changed with each new language, each new level of abstraction above the ons and offs of binary code. When computers were coded with just a long succession of zeros and ones, they could be programmed with paper punch cards and deliver answers to math problems. Now they can decide what ads you may like and suggest shows, songs or books you might enjoy. All of what they can do relies on what humans discover to use them for.

AI can learn, but it is 100% dependent on those little ones and zeros deep down. We may train it to help us make decisions based on crowd-think, or to reduce the repetitive generation of base code by understanding structures and words we assign meaning to and converting them back down to the base-level machine language of 0s and 1s. AI may even become a tool for medical diagnosis, helping doctors be less biased in interpreting symptoms from their own perspective. But one thing AI cannot do is be a creative human who wants to solve a problem. AI cannot create, program, or act outside of those basic 0s and 1s behind even its advanced algorithms. Computers are people-dependent, regardless of their complexity.

But it doesn't take 50 punch-card programmers to solve complex math problems any more, so you won't need as many people to answer some customer service calls, or to fill out answers in standard requests for proposals on contracted software services. AI may be used to reduce repetitive tasks, but it cannot exist without its AI programmers. The jobs don't disappear; they change as fast as the technology's needs do. The jobs evolve as our tech evolves. Instead of writing code, you could be writing answers to possible customer service questions for the bots to train and learn on.
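As a small illustration of those layers of abstraction, Python's built-in dis module will show you the lower-level instructions hiding beneath a single human-readable line; below that bytecode sit the interpreter, the machine code, and finally the 0s and 1s:

```python
# One human-readable line of Python sits on top of several layers of
# abstraction -- bytecode, interpreter, machine code -- every layer
# written and maintained by people.
import dis

def add_tax(price: float) -> float:
    return price * 1.25  # one readable line...

dis.dis(add_tax)  # ...prints the stack-machine instructions beneath it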
 
