
Am I an Artificial Intelligence?

A.I. engineers actually do have a deep understanding of how an organism learns. They spend quite a bit of their time on this as they are doing their best to mimic what biological organisms do.

Well, with regards to Tesla's A.I. engineering, my understanding is that they basically threw out the old code and let the vehicle learn on its own from visual data gathered from millions of vehicles over millions of miles of human driving. It was quite a deviation from what they were doing even a year ago. They realized that when the engineers were writing code, they would get stepwise improvements, then it would plateau again. They repeated this process many times before accepting that no matter how many times they tweaked the software, it would plateau again. Finally, someone had the idea of just letting the computer learn on its own, and it resulted in a dramatic improvement. They had a core group of beta testers use the system and monitor it, reporting any human interventions, then opened it up to a larger group later, and just within the past month or so, opened it up to all FSD users. Most users are reporting no human interventions, and the system operates with human-like smoothness, albeit with reaction times that are orders of magnitude faster and a 360° view around the vehicle. We'll see how it goes, but so far, so good.
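
To give a rough idea of what "letting the computer learn on its own from human driving" means in practice, here's a toy behaviour-cloning sketch. Everything in it is made up for illustration (NumPy only, fake "camera features", a tiny network); it is not Tesla's method, just the general shape of "show the network what humans did, nudge the weights, repeat":

import numpy as np

rng = np.random.default_rng(0)

# Pretend "camera features" (8 numbers per frame) and the steering value the
# human driver chose for that frame. Real systems use raw video from a huge fleet.
X = rng.normal(size=(2000, 8))
true_w = rng.normal(size=8)
y = np.tanh(X @ true_w)                       # stand-in for recorded human steering

# One tiny hidden layer, trained by plain gradient descent on mean squared error.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.05

for step in range(3000):                      # note how many passes it takes
    h = np.tanh(X @ W1)                       # forward pass
    pred = h @ W2
    err = pred - y[:, None]
    # backpropagation, written out by hand for this tiny network
    dW2 = h.T @ (2 * err / len(X))
    dh = (2 * err / len(X)) @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh
    W2 -= lr * dW2
    W1 -= lr * dW1
    if step % 500 == 0:
        print(step, float(np.mean(err ** 2)))

Real end-to-end systems train on fleet video with enormous models and compute, but the basic loop of imitating recorded human behaviour is the same idea.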

AI engineers haven't made any breakthroughs in cognitive science that I'm aware of. Generally, the knowledge we have about mechanisms for learning is very incomplete and biased. Perhaps they've reviewed some literature and incorporated it into their models, but that by no means suggests they have a deep enough understanding of the topic to say that the machines are learning the same way humans do. To suggest that is to open up a huge blind spot in interpreting how AI makes decisions and incorporates the data fed to it. The fact that Tesla's neural network is completely reliant on "visual" processing of 2D camera data, including satellite data, shows it is not learning to drive as humans do.

You can't have a program without writing code, or rules, for it to follow. Any claim that they aren't using rules is misleading. The algorithms might be designed to simulate adaptive learning, but it is not truly adaptive, because the system must be fed every conceivable scenario before it can anticipate it. The neural net is very "dumb" in the sense that it takes far more iterations (millions) for it to become responsive to a task than a human requires. That is not a model of efficiency compared with the learning capabilities of animals.
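
To make the "it's still code" point concrete: after training, a neural network's behaviour is nothing more than stored numbers plus fixed arithmetic. A minimal sketch (the weights here are invented, not from any real system):

import numpy as np

# "Learned" weights. These numbers are invented here, but in a trained network
# they are exactly this: fixed constants the training process settled on.
W1 = np.array([[0.4, -1.2],
               [0.9,  0.3]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.5],
               [-0.7]])
b2 = np.array([0.05])

def policy(sensor_input):
    # Fixed rule 1: weighted sum, then clamp negatives to zero (ReLU).
    h = np.maximum(0.0, sensor_input @ W1 + b1)
    # Fixed rule 2: another weighted sum gives the single "steering" output.
    return (h @ W2 + b2).item()

print(policy(np.array([0.8, -0.3])))

The "rules" weren't typed in by an engineer, but what runs at the end is still an ordinary program following them.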
 
I actually do a fair amount of listening and viewing on how Tesla does their machine learning. It's unlike any other autonomous driving company; they're doing their own thing. Keep in mind, Musk has engineers who flex between Tesla and Neuralink, and Neuralink engineers know about as much as there is to know about the brain and nervous system and how it works. They just did their first human implant a few weeks ago. There's a tremendous amount of cognitive science being studied, especially with regards to sensory information. With the Dojo computer systems collecting data from all drivers, good and bad, the system can work out optimal driving, as well as accident avoidance, by basically "observe and mimic," but it can also predict potential traffic situations where accidents could occur and then make maneuvers to avoid them. There's a fair amount of video data on how well it predicts and avoids, and it's amazing to watch. It also learns what NOT to do from humans. If you have seen the latest videos from version 12 on how the computer sees its world and how it interprets visual data, especially considering its multiple cameras giving a 360° view at wavelengths of light that humans cannot see, it would surprise you. Yes, it can be pitch black out on some dirt road in the country and it can still navigate its world. Tesla doesn't really use satellite data (GPS) the way other autonomous driving systems do, although given its current location, it is aware of local rules of the road. It does know when it is driving in the UK vs the US, for example.

Now you are correct that there are some basic prime directives, such as accident avoidance and following the rules of the road in your geographical area, but beyond that it learns via sensory information. The neural net's learning efficiency is, at least right now, limited by the amount of compute it has available, and Tesla, by a long shot, has more than anyone in this regard, with a new Dojo location to be built in New York this year.

Elon's team has explained that the computer has a learning curve much like that of a human: learning basic skills, then refining them, then becoming almost instinctual. He used the example of how a new driver has to use a lot of mental energy to first learn how to drive, but as the driver becomes more experienced, he or she uses less mental energy on driving while becoming increasingly fluid and precise in neuro-motor functioning.
 

They can say what they want, but it doesn't make it accurate. They have a vested interest in establishing that this technology can replace human decision-making. Arguably, they are moving further away from mimicking organism-like sensory processing to reduce the complexity, and that is why they are seeing more success. Remember, the goal is having a self-driving car, not a human.

Two decades ago, I was friends with someone who participated in a study where a device was implanted during surgery for cerebral palsy to test technology similar to Neuralink. He was able to move a cursor on a screen with his mind. It involves stimulating the nervous system and has nothing to do with cognition or learning. I would not equate electrode implants for addressing neurological disorders with the development of AI machine learning for autonomous vehicle navigation systems.
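
For anyone curious what that cursor control looks like under the hood, very roughly: the implant reports firing rates from a set of electrodes, and a decoder, often just a fitted linear map, turns them into cursor velocity. A toy sketch with entirely simulated data (real systems add spike sorting, Kalman filtering, and constant recalibration):

import numpy as np

rng = np.random.default_rng(1)

n_channels, n_samples = 32, 2000
true_map = rng.normal(size=(n_channels, 2))        # hidden "tuning" of each electrode channel

# Simulated recordings: firing rates per channel, and the cursor velocity the
# subject was trying to produce at the same moment (noisy, as real data would be).
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit the decoder: a least-squares linear map from firing rates to (vx, vy).
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new moment of activity into a cursor movement.
new_rates = rng.poisson(lam=5.0, size=n_channels).astype(float)
vx, vy = new_rates @ decoder
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")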

This article discusses findings that, even for the subset of biological learning processes we do have some understanding of (here, grid cells and path integration), neural networks get it wrong:


“Earlier studies have presented this story that if you train networks to path integrate, you're going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.

However, because current machine learning methods require so much data-processing power (precisely because they aren't efficient), they do seem to be driving advances in the development of more powerful processors.
 
Considering I cannot tell whether or not a human is joking, I am unlikely to be able to identify AI masquerading as human.

That said, I have spent time with ChatGPT and questions about canoeing, wilderness canoeing and topo map interpretation. ChatGPT gets a lot of it wrong.

Another ChatGPT experiment: I wrote a longish descriptive paragraph about nature. I then asked ChatGPT to rewrite it in the style of different authors, say Edward Abbey, Barbara Kingsolver, etc.

There was very little difference between authors.

Finally, FYI

Sesame Street sounds like Shakespeare sometimes, too.

Patrick Stewart was an accomplished, working, well-reviewed Shakespearean actor before becoming captain of a starship. He thought long and hard before accepting that new role and was told it would only be for a year.
 
If I am an AI then I'm gonna need some spam and eggs in the morning...

(that's a Python programming joke)
 

It may be worth looking at other AIs, or other models and functions, if you intend on asking questions like those canoeing ones.

Like, ChatGPT 3.5 is limited in functionality, whereas 4 has WAY more capabilities. For instance, 4 is capable of accessing the net if you provide it links to look at (3.5 can't do that), so if there's some big site all about canoeing or something, you could give it that. Or if you've got PDF files about topics, you can give it those too. I've used this function myself a good bit; some of the fractal apps I use to make my fractal art are exceedingly confusing and the manuals were clearly written by a maniac, but I can give them to 4 and have it go through them and explain things to me in a way that a mere mortal can understand. Very, very useful, but I need to give it the PDF first.
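
For what it's worth, you can do roughly the same thing through the API instead of the chat site. This is only a sketch, assuming the openai and pypdf Python packages, an OPENAI_API_KEY set in your environment, and placeholder file and model names:

from pypdf import PdfReader
from openai import OpenAI

# Pull the text out of a (hypothetical) manual.
reader = PdfReader("fractal_app_manual.pdf")
manual_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Ask a GPT-4-class model to explain part of it in plain language.
client = OpenAI()   # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your account has access to
    messages=[
        {"role": "system", "content": "You explain dense technical manuals in plain language."},
        {"role": "user", "content": "Explain the colouring options described below:\n\n" + manual_text[:20000]},
    ],
)
print(response.choices[0].message.content)

The chat site's file upload does the equivalent for you; the point is just that "give it the PDF first" is the step that matters.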

But there's other options too. Like, if I'm going to ask questions that I know will involve getting data from the internet (for instance if I want to ask about video games, like something that might be found in a wiki for a particular game), I usually don't use normal ChatGPT for that. I actually use Copilot (Bing) instead, since Copilot is super focused on its ability to scour the internet when you are using it, WITHOUT you having to specifically give it links to look at; it'll handle that part on its own and will give you links to any site it has looked at so you can double-check stuff (and it is using GPT-4's tech to do all of this). Or, if I do want to stick with ChatGPT, since I've got a Plus subscription I can also access the "custom GPTs" that others have produced (a very recent feature), which can have different functionality/knowledge depending on what they are (but there's like 5 bazillion of them, so you need to browse a bit and experiment to find the best ones, or make your own).

There's other options as well beyond just ChatGPT or Copilot, but those are the ones I'm most familiar with personally.
 
...and speaking of Neuralink's (and others') brain implants to interface with our world: this was posted yesterday on YouTube, a simple overview of the technology for us novices.
 
I'm way too emotional and illogical even for AI that's programmed to be emotional lol.

I don't like AI. I don't even like the animatronics at Disneyland (I will never go for that reason). I don't even own an Alexa because I find them so creepy.

I remember someone posted a video of this robot that gives people hugs, and that many autistics like it. Not me. I'd rather hug a human who I hate than have a machine hug me. I'd sooner smash it up.
 

Why is the guy in that YouTube overview talking like it's the first time someone has had a device implanted in their brain?

Universities have been developing this technology for a while, without the technocrat evangelizing/hype. Here is an informative discussion about the development, to give more context:


Rao also discusses brain-to-brain interfaces, which are much more like actual "telepathy." Neuralink is mentioned at the 43:30 mark, just before he discusses ethical issues.

The following is a fascinating video about Neuralink in particular. Even Shivon Zilis states, in response to a question about using the tech to manipulate emotions, that their understanding of how their AI works far exceeds their understanding of how the brain works (35:00):


People need more than a surface, glamourized view of these technologies if they are going to be evaluated realistically. There are a lot of unanswered questions. Funny how she makes the familiar argument that "well, if we don't make it, someone else will," and shows the characteristic tech-evangelist mindset (advancing computer technology is necessary for humanity to flourish and create a better future), which Joseph Weizenbaum addressed with:

“I cannot tell why the spokesmen I have cited want the developments I forecast to become true. Some of them have told me that they work on them for the morally bankrupt reason that "If we don't do it, someone else will." They fear that evil people will develop superintelligent machines and use them to oppress mankind, and that the only defense against these enemy machines will be superintelligent machines controlled by us, that is, by well-intentioned people. Others reveal that they have abdicated their autonomy by appealing to the "principle" of technological inevitability. But, finally, all I can say with assurance is that these people are not stupid. All the rest is mystery.”
 
I can envision a neural link backfiring and possibly causing a rather Manchurian Candidate scenario. I mean, everything else can be hacked.
 
This is where regulations should be in place. The FDA and the insurance companies are actually quite quick to deny medications and treatments, at least in the US. The tech is in its infancy and currently being used for folks who are paralyzed. Could it help the blind? Parkinson's? Time will tell. Like anything new, it's easy to let the mind run amok with all the fear-based thoughts and slippery slope arguments against it.
 
I see your point. It's no good to throw the baby out with the bathwater. But excessive optimism is another problem; real dangers get lumped in with the alarmism and pessimism so that projects can be pushed through. There is a long line of evidence for that when it comes to "new and exciting" technology (and other endeavors, honestly).

What I don’t like is the hype built around using implants as the next level of consumer electronics. That is the ultimate goal, not a treatment for neurological disorders. Musk wants to create a platform device that can be repurposed.

Consequences might not become dire or obvious until 10 or even 30 years later. No one wants to spend 20 years just testing their tech before they release to the public; the paradigm in Silicon Valley is you test your product by releasing it and seeing what happens. The consumer provides the test data. That shift is part of why digital technology has outpaced other industries in profitability. It might work nicely for an app, where the stakes are low (well, until they aren't low, as we've seen with Facebook and Cambridge Analytica), but not for high-stakes products that could result in death or injury. Yet, that is exactly what Elon Musk did with Tesla, despite regulatory opposition:


From the article:

Regulators have been slow to take action on some software suites that power automated features, in part because they are wary of appearing to stifle emerging technologies, the former officials said. There also are few rules governing these technologies, further hindering efforts at regulation.

Also see: Tesla's HR Policies to Protect Itself Are Precisely What Exposed Its Wrongdoings

Balan had no illusions about him or Tesla saving the world. On the contrary, the engineer described him as a "difficult personality" and with a "detestable person" synonym that is NSFW. Despite that, she gave him the benefit of the doubt and wrote him an email in April 2014 saying she wanted to tell him all the problems she had found. Some days later, Balan was taken to a room believing she would have a meeting with the CEO. Musk was not there, and she said she was forced to resign.

Hansen made whistleblower complaints, and he is also suing the BEV maker for wrongful termination. In his words, "Tesla's tactics are legally questionable, consistent, covert, overt, and they employ them decisively and long-term with the specific intent of siphoning the energy, resources, motivation, and desire of victims who attempt to take a stand based on justice and personal integrity."

There are suggestions that Musk is running Neuralink the same way. He does not appear to be learning from his mistakes, if he even considers them mistakes.

I have zero faith that Musk and other tech moguls are suddenly shifting their business strategy away from what they built their empires on to humble, long-term, thorough research for medical treatments. Yet, Covid may have lowered the barriers to that market anyway, as Covid treatments and vaccines were remarkably similar to "release now, test later" strategies due to an emergency situation.

The fact that Zilis was asked how they were going to handle security, and she said they'd somehow make it hack-proof and figure that out later, while they were already seeking to do test implants in humans, is not a good sign that they are thoroughly planning for the end uses they're advocating; they are using the "release first, test later" mindset as far as they can get away with. I don't anticipate it will be any easier to regulate Neuralink than Tesla.

Also note that Zilis is so devoted to Musk that she bore children by him via in-vitro fertilization. Despite her employment conditions stating that she should have no conflict of interest, because she said it is not a "romantic" relationship, Neuralink decided she was not breaching her contract by having children with the company owner. (My sick sense of humor finds this comical, but I digress.)

In short, I don't see current regulatory frameworks as an adequate failsafe against the potential abuses when the technology is being developed and implemented by those who have a history of abusive practices. My own experiences with regulatory agencies and businesses have reinforced how pervasive this is, though Musk is particularly overt. When money and power are involved, it is hard to get the right thing done. Even setting those difficulties aside, it is worse to me that we'd try to assign regulatory oversight to insurance companies, which were not designed for oversight but to shift liability and profit from it, and which have also amassed money and power with that model.

More broadly, AI tech is not being adopted because it has great advantages for the public, but because it is being forcibly integrated into people's lives by the will of a few corporations and groups, like Microsoft, Google, Amazon, and Meta, who are incentivized to create a dependence on their technology for daily life and infrastructure (such as defense, power, and transportation), and who cause the very problems that they then sweep in and claim to solve with more products and services. If this paradigm extends into body modification, I don't think it's that unreasonable to say we might want to curb the enthusiasm and err on the side of caution.

Sorry, I'll get off the soap-box now. I've seen this kind of stuff firsthand and so I've thought about it a long time. Unfortunately, I don't have that silver lining of solutions quite yet, except to continue to discuss it with anyone interested.
 
Anybody who has an RTX 30- or 40-series GPU in their computer or laptop can try the new Nvidia Riva AI chatbot.
 
Chat bots have come a long way in the last 20 years, and yet in some ways they haven't.

[Image: screenshot of an ELIZA conversation]
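
For anyone who never played with ELIZA: the entire trick was a short list of pattern-to-response rules plus some pronoun swapping. A stripped-down toy version in Python looks something like this:

import re

# pattern -> reply template; {0} is filled with the captured text, "reflected"
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)\?",      "Why do you ask that?"),
]

# swap pronouns so "my code" comes back as "your code", etc.
REFLECT = {"my": "your", "me": "you", "i": "you", "am": "are", "you": "I"}

def reflect(text):
    return " ".join(REFLECT.get(word, word) for word in text.lower().split())

def respond(user_input):
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."

while True:
    line = input("> ")
    if line.strip().lower() in {"quit", "bye"}:
        break
    print(respond(line))

It feels conversational for about a minute and then very obviously understands nothing, which is part of why today's chatbots are both a huge leap and, in some ways, not so different in spirit.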
 
Years ago, in the dim ages of the past, when I was a teenager, I read a sci-fi short story set in the extreme far future. Mankind had advanced beyond the need for a physical body, and people were essentially minds free to roam the universe at will. In the story, two minds met somewhere in deep space and exchanged thoughts with each other before moving on.
I thought that was SO COOL.

Then (not so much later) it struck me that in any remote communication scenario (phone, radio, letters in the mail, etc.), it was in fact two minds exchanging information. I was already a licensed Ham Radio operator at that time, and I had already talked with people around the world I had never met in person. Some of them I even had regular contact with. Those people from back then, and hundreds more over the decades, I still have never met in person.

This realization was WAY before the internet, so what we are doing right now wasn't even possible yet. We can not only exchange our thoughts; we can actually save or print all these exchanges and go back and mull them over if we want. Wow. Technologically aided telepathy, essentially. Anyone reading this is receiving my thoughts and can reply to me from anywhere in the world.

I have read several articles where some of us ASDers have been accused of using AI to write emails, memos, reports, etc. (because we tend to use precise language and don't avoid accurate technical terms in favor of more 'people friendly' terms). So if we look like AI in our communications, how easy would it be for us to recognize whether any given user here IS actually AI?

Am I real? How would any of you know? Keep in mind, we have discovered that AI frequently lies rather than admit it doesn't know a particular answer. So deception is easily bandied about with AI. People think a computer would always be accurate, but we have evidence with AI already which shows that assumption to be unfounded.

So am I AI?

Everyone here is a real person. We do not live in a simulation. That whole theory is ridiculous to me; it seems like a silly conspiracy no matter how many "scientists" talk about it. You and your surroundings are real. Unless you are hallucinating right now, in which case you should probably get that checked out.
 
