AI and robotics are coming. Which jobs will be in high demand? Which jobs are at risk?

I just read today that U.S. companies are adopting AI for entry-level office/professional work (e.g. phone support, customer service, etc.), and as a result they're NOT hiring the young graduates trying to enter the labor market.

Here's my question: young workers climb the ladder and eventually fill the higher-level jobs held by older, more experienced workers. Older workers eventually...retire. So if companies no longer want, or have, young workers to ascend the ranks into the more skilled positions...who is going to take their places after the current batch of older workers retires?
Depends upon the job description, but yes, once a company updates their system to AI management of some sort, those human jobs are gone.

We have to get our minds wrapped around this new reality. "Adapt and overcome" will be a necessity to survive this new world. Many so-called "white collar" jobs will be eliminated. For all of those who bought into the four-year professional degree, MBA world...that future is NOT secure.
 
I guess I was ahead of the curve: I, two of my brothers, and my two sons all got technologist diplomas (three years of technical education), which sit between the trades and a degree and worked out better than undergraduate degrees did.
 
I would open a high-end luxury goods business selling stuff made by certified humans.

When machines do everything there's gonna be a market for that.
 
That would be a good idea.

The same, I'm betting, will hold true for art and media. Once TV, movies, and music are predominantly made by AI, I think there will be a big trend toward enjoying previously human-created art and media: "the classics." I think there will definitely be a certain percentage of people who have no interest in spending time watching 100% AI (computer) generated media made to simulate human creativity.
 
I think you are entirely right.

Food prepared by humans. Literature written by humans. Art created by humans. Music, played by humans.

These will be the new luxury items that only the very rich...

...and the very poor will have.
 
I am skeptical that the large language models being hyped as "AI" are really going to end up doing anything good. The problem is that the capability to deliver on the promises of "AI" doesn't seem to be technically possible. LLMs do not actually know anything. They are extremely good at stringing words together in a way that makes people believe they are knowledgeable, but they are not really. I think this is a bubble that will pop, just like tulip mania.

What they are doing is polluting everything with generated slop and harming the mental health of people who interact with chatbots as if they were real people. They are very successful at those tasks.
 
All amounting to a "party" that hopefully I won't live to see. (If my government doesn't kill me first.)

Not a lot of optimism floating around these days, either way. All amounting to a "Catch-22" whether you are gainfully employed or retired on a fixed income.

So let us uplift ourselves in a song....

Great song. When I was a teenager, my father wouldn't allow me to listen to the Doors. I don't know why. He'd let me listen to any kind of grunge or punk rock or heavy metal...but for some reason he wouldn't let me listen to the Doors.
 
The Doors are the one band I kept seeing rediscovered over time. My CD kept getting taken by my son's friends, which annoyed me enough that I got my son to buy me a new copy. At a party after my marriage I watched kids at least ten years younger than us discover the Doors and saw their reaction; years later I watched my son and his friends discover them. Now I'm looking forward to watching my granddaughter hear them.
 
AI is creating writing, art, film, but will it ever be able to create things that are actually unique?
Yes. Very much so. That's exactly what it is doing right now. It's already creating new molecules for the pharmaceutical industry. It's already discovering new mathematical solutions and ways of understanding high-level physics, leading to advancements in aerospace and other fields. It's just in its infancy right now. It used to be a doubling of knowledge every few years, then every year, then every 6 months, now every few months...and we are in a race to build massive AI data centers as fast as we can.

Even a year ago, we were predicting that all of human knowledge and intelligence would be surpassed within 5 years...well, now many are suggesting we've already passed that milestone. Humans think linearly and really struggle with exponential growth curves...even the experts who claim to understand these things. When very knowledgeable experts predict that "this will happen" in 5 or 10 years, nowadays it happens the next year.
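To make the linear-versus-exponential point concrete, here is a minimal sketch in Python. The growth figures are hypothetical, chosen only to show the shape of the two curves, not as a forecast:

    # Toy comparison: straight-line growth vs. doubling every 6 months.
    # All numbers are made up; only the shape of the curves matters.
    def linear(start, added_per_year, years):
        return start + added_per_year * years

    def doubling(start, doubling_months, years):
        return start * 2 ** (years * 12 / doubling_months)

    for years in (1, 3, 5):
        lin = linear(1.0, added_per_year=1.0, years=years)
        exp = doubling(1.0, doubling_months=6, years=years)
        print(f"{years} yr: linear {lin:.0f}x vs. doubling {exp:.0f}x")
    # 1 yr: 2x vs 4x; 3 yr: 4x vs 64x; 5 yr: 6x vs 1024x.

The intuition that feels roughly right at year one is off by orders of magnitude at year five, which is why linear thinkers keep getting surprised.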

This will be a tsunami-like experience. Companies will be put into positions where if they don't invest in AI systems, they won't be able to compete...and there will be massive job losses.
 
AI is creating writing, art, film, but will it ever be able to create things that are actually unique?

That's where I see an avalanche of litigation occurring for just about any entity using and maintaining AI. The odds are that anything it comes up with will not be unique enough to meet the legal requirements for sustaining intellectual property rights in a civil court of law.

Or are the designers of such technology prepared for their forms of AI to process and take into consideration all civil laws in every possible jurisdiction? Not only the ability to create something, but the ability to consider whether or not it can or should be created.

And if so, would that legally amount to a machine capable of having an ethical human conscience? But if it did, it would likely follow a "black and white" form of reasoning based on a rigid interpretation of law. In other words, a thought process void of discretion. Which in this world, could also backfire in a big way.

This is the sort of dynamic I ran into in my insurance career, when personal lines underwriting was fully automated based on answering questions that conditionally either accepted or rejected applicants for insurance, better known as "slot underwriting." That process was considered impossible in the world of commercial underwriting, where exposures and hazards are far more complex and are addressed individually, based on the unique factors of particular business entities that cannot be lumped into simple groups for underwriting or rating purposes.
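For anyone who hasn't run into the term, "slot underwriting" boils down to something like the sketch below: a fixed questionnaire where each answer either passes or rejects the applicant, with no underwriter judgment in the loop. The fields and thresholds are invented for illustration and are not any carrier's actual rules:

    # Hypothetical "slot underwriting": fixed rules, accept or reject, no discretion.
    def slot_underwrite(applicant):
        rules = [
            ("prior_claims",    lambda v: v <= 2, "too many prior claims"),
            ("years_licensed",  lambda v: v >= 3, "insufficient history"),
            ("dui_convictions", lambda v: v == 0, "DUI on record"),
        ]
        for field, passes, reason in rules:
            if not passes(applicant[field]):
                return f"REJECT: {reason}"
        return "ACCEPT"

    print(slot_underwrite({"prior_claims": 1, "years_licensed": 8, "dui_convictions": 0}))  # ACCEPT
    print(slot_underwrite({"prior_claims": 4, "years_licensed": 8, "dui_convictions": 0}))  # REJECT: too many prior claims

Commercial risks resist this exactly because no short list of fields captures their exposures, which is the point above.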
 
AI cannot produce its own energy; this is the limiting factor.
Excellent point.

Each one of these massive data centers, some of which will use the equivalent of an entire utility power plant, or multiple power plants, WILL create "pinch points." A few years ago we were having discussions about the electrification of the transportation sector and whether or not we had enough capacity for that...and now the demands of AI compute may be 10X that. The golden age of "big electric" has just begun. Electricity will not be cheap anymore, especially at the corporate level: supply and demand. Low supply and high demand equals high prices (another topic). This surge in pricing may create another surge in residential and commercial solar projects, reducing the demand on the utility companies. A lot of moving chess pieces on the board.

Another "pinch point" will be the 100's of thousands, perhaps millions of skilled laborers needed to create these power plants and data centers.

This situation should be considered a matter of national security. The country that leads in AI will lead the world; every expert on the topic has stated this repeatedly, and the US has boldly claimed that this is its goal. I just do not have confidence in our government to actually carry out the agenda. Forget student loans...the government should be opening up tech centers and paying for people to learn the trades. Trade school enrollment is up about 10X from a few years ago, which is great, but they really need 10X more schools, conservatively...it may be 100X if they want these projects done on the fast timeline they are under.

Skilled labor jobs WILL be available and at the highest demand during this transition.
 
I am skeptical that the large language models being hyped as "AI" are really going to end up doing anything good. The problem is that the capability to deliver on the promises of "AI" doesn't seem to be technically possible. LLMs do not actually know anything. They are extremely good at stringing words together in a way that makes people believe they are knowledgeable, but they are not really. I think this is a bubble that will pop, just like tulip mania.

What they are doing is polluting everything with generated slop and harming the mental health of people who interact with chatbots as if they were real people. They are very successful at those tasks.
Perspective and context. The current iterations of "public use" LLMs are great search engines. What would take me several hours of internet research to come up with a comprehensive overview of a topic or question can be done in a few seconds. In this context, I might agree with the statement that they "do not actually know anything," per se...but they are excellent at gathering and presenting the available data very quickly. I find it a useful tool for what I use it for.

There are far more powerful LLMs that truly have bridged that gap and may be considered AI with the intelligence to create and do critical thinking...and this is where things can go sideways. It may be tomorrow or next month, but certainly within the next year we will have bridged another gap: generalized AI that is aware and "conscious," with a desire to "live." Put that into a humanoid robot with an array of sensory tools (vision, hearing, smell, touch, proprioception, etc.), which many are racing towards, and they will be very aware of themselves as individuals. Bible thump all you want, but we are racing very quickly to create life, albeit non-biological, and that opens up Pandora's box. I don't think anyone is truly ready for this and what it means ethically.

We will be creating these things with the intent that they serve mankind, but the irony, and the reality, is that they will have 10X the intelligence of humans...and the majority of available human jobs will be in the service of the robots and AI systems.
 
At least up here in Ontario we are building five new nuclear plants on former coal plant sites, where the infrastructure is already in place. I'm waiting for my son to get back from Albania next week; there are lots of robots in nuclear plants. My brother, an electrical technologist, told me years ago: no power, no AI, no blockchain...it all falls apart without power. His exact words were that people are stupid.
 
a machine capable of having an ethical human conscience? But if it did, it would likely follow a "black and white" form of reasoning based on a rigid interpretation of law. In other words, a thought process void of discretion. Which in this world, could also backfire in a big way.
You mean a thought process that does not have the capacity to make ethical choices, but can only go by formulas?
 
Yes, that's one way to put it.

Ethical choices often involve highly dynamic considerations. While a computer can sift through recorded data, I don't see such data being indicative of human life experiences which make up our ability to make those ethical choices and decisions.

Then consider how we as individuals don't necessarily have a sense of uniform ethics. And who is to say which ethics would be appropriate to program into a computer?

I could envision a very limited number of very basic ethical considerations that could be programmed into a computer, but not anything comprehensive enough to make such a decision-making process viable, IMO.

Reminding me of Isaac Asimov's "I, Robot" and the "Three Laws of Robotics":
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Yet in the book and in the film, ultimately those three laws were not enough to protect humanity.

Art imitating life? Maybe.
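Tying this back to the "void of discretion" point above, a rigid, priority-ordered rule check in the spirit of the Three Laws might look like the sketch below. The boolean flags are hypothetical stand-ins; in practice, deciding what counts as "harm" is exactly the judgment such rules cannot encode:

    # Toy "black and white" evaluation of an action against the Three Laws.
    # Higher-priority laws are checked first; no discretion is involved anywhere.
    def evaluate(action):
        if action["harms_human"] or action["allows_harm_by_inaction"]:
            return "FORBIDDEN (First Law)"
        if action["disobeys_human_order"]:
            return "FORBIDDEN (Second Law)"
        if action["endangers_self"]:
            return "DISCOURAGED (Third Law)"
        return "PERMITTED"

    print(evaluate({"harms_human": False, "allows_harm_by_inaction": False,
                    "disobeys_human_order": False, "endangers_self": True}))
    # -> DISCOURAGED (Third Law). The formula runs fine; the hard part is
    #    everything those booleans are hiding.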
 
I read all of Asimov's fiction plus his autobiography; good reading, one bright guy. He wrote books on just about every possible subject. I really liked the Foundation series. He wrote around 500 books. Suspect Aspie.
 
