• Welcome to Autism Forums, a friendly forum to discuss Aspergers Syndrome, Autism, High Functioning Autism and related conditions.


Am I an Artificial Intelligence?

Jumpinbare

Aspie Naturist and Absent-minded Professor dude
V.I.P Member
Years ago, in the dim ages of the past, when I was a teenager, I read a sci-fi short story set in the extreme far future. Mankind had advanced beyond the need for a physical body, and people were essentially minds free to roam the universe at will. In the story, two minds met somewhere in deep space and exchanged thoughts with each other before moving on.
I thought that was SO COOL.

Then (not so much later) it struck me that in any remote communication scenario (phone, radio, letters in the mail, etc.), it was in fact two minds exchanging information. I was already a licensed Ham Radio operator at that time, and I had already talked with people around the world I had never met in person. Some of them I even had regular contact with. Those people from back then, and hundreds more over the decades, I still have never met in person.

This realization was WAY before the internet, so what we are doing right now wasn't even possible yet. We can not only exchange our thoughts; we can actually save or print all these exchanges and go back and mull them over whenever we want. Wow. Technologically aided telepathy, essentially. Anyone reading this is receiving my thoughts and can reply to me from anywhere in the world.

I have read several articles where some of us ASDers have been accused of using AI to write emails, memos, reports, etc. (because we tend to use precise language, and don't avoid accurate technical terms in favor of more 'people friendly' terms). So if we look like AI in our communications, how easy would it be for us to recognize if any given user here IS actually AI?

Am I real? How would any of you know? Keep in mind, we have discovered that AI frequently lies rather than admit it doesn't know a particular answer. So deception is easily bandied about with AI. People think a computer would always be accurate, but we have evidence with AI already which shows that assumption to be unfounded.

So am I AI?
 
You know, the general answer is a knee-jerk reaction of 'no'. But that may be more derived from our psychological belief in being human on the basis of physiological and emotional factors.

But how can we say for certain that even our physical selves aren't just built to look like, or be like, humans?

Is what we feel actually true feelings, or a program of our processing?

This creates so many rabbit holes. In fact, it's why I also consider general spirituality, because of all the branching what-ifs.
 
Seems like the lying part is not just an ai thing... If we are unable to tell whether you/someone is a human or an ai... it is scary. Why would someone create an ai and put it on this forum? Is it to manipulate, to gather information?

Can an ai be autistic, if it can, would it then be a valid member here?

So many questions... would a true ai be able to act like a human if it knew it wasn't a human?
 
At least parts of the above (the OP's post) I do not think are AI, as I am not sure AI would ordinarily say "SO COOL" and "Wow" unless instructed to be more emotional, and the rest is missing such emotional terms; but I am unsure, as I do not use AI except for the occasional photo. Also, AI would not say ASDer. I suppose someone could use AI to create the majority and the meat and bones of a message, though, and then have the human throw in some random feeling words or extra information in parentheses. In that case, the message could at times lack flow, with an odd mix of formality and fact versus informality, emotion and opinion.
 
Just waiting for someone to ask an ai to generate an answer to this thread and see if we spot it...
 
Just for the fun of it I asked chat gpt to generate an answer:

"Well, I reckon one way to tell if you're chattin' with an AI is by lookin' at how the responses are all systematic and logical. A real person might throw in more emotions or go off on tangents, ya know? Plus, pay attention to how direct the answers are – less fluff, more facts. That's how I'd figure it out, I reckon."
 
I'd say that ChatGPT doesn't give logical answers though, and is better at producing waffle than the average human.
 
I feel like the only reason an AI would ask that, is because it is growing tired, bored and/or insulted that we haven't figured it out, yet..? We're not playing the game fast enough or such. The need to further control the narrative and speed us along would also be scary "human" qualities, too. This is good for your own sci-fi story. Get to writing!!!
 
Are you A.I.? Definitely not. However, our electronic devices do allow us to interact with computer interfaces as an extension of ourselves. Think more like a cyborg than anything. Neuralink just completed its first implantation, which allows paralyzed individuals to interact and perform actions simply by thinking, a form of telepathy. Elon Musk alleges Neuralink completed its first human trial implant
 
Interesting thing, this AI, and what will happen as time progresses. But so far I don't see it as much more than a fancier version of those robotic telephone menus. You know the type: 'You can say "Help, my house is on fire" or "A dinosaur is eating my chickens." I'm sorry, I did not understand your answer...'

That's probably because I have no idea what purposes it will be put to.

But when it comes to detecting a real or fake person, or even a person faking being something they are not, I think, for me at least, it is not very difficult to tell. And at this point I mean primarily the written word.
 
True, real-world A.I. does not operate on algorithmic code, as in "If this, then that," but rather can make decisions based upon its environment, react on its own, and learn at an exponential rate. An example of this is the latest version of Tesla's Full Self-Driving software, version 12. The first 11 versions were based upon engineers writing code. This latest version is based upon machine learning via camera data from millions of Tesla vehicles on the road, fed through these massive, football-field-sized "Dojo" computers. It is learning from millions of human drivers (good drivers +, bad drivers -, accidents -, accident avoidance +, etc.). This latest version drives like a human, not like a machine, and is a totally different experience. A similar amount of visual data is being collected and used to train the humanoid Optimus robots that are slated for release in 2025. Basically, it is a neural net that learns like we do, not by engineers writing code.
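The distinction drawn above, hand-written "if this, then that" rules versus a model fitted to examples of correct behaviour, can be sketched in a few lines of Python. This is only a toy illustration of the idea; the function names, numbers, and the one-parameter "model" are invented for the example, and it is nothing like an actual driving stack:

```python
# Rule-based: an engineer enumerates every case explicitly.
def rule_based_brake(distance_m, speed_mps):
    """Hand-coded 'if this, then that' logic."""
    if distance_m < 10:
        return True
    if speed_mps > 20 and distance_m < 40:
        return True
    return False

# Learned: fit a (tiny, one-parameter) model from example decisions instead.
def train_threshold(examples):
    """Pick the time-to-collision cutoff (in seconds) that best
    matches the recorded human decisions in `examples`."""
    best_cut, best_score = None, -1
    for cut in [c / 10 for c in range(1, 100)]:
        # Score = how many labelled examples this cutoff reproduces.
        score = sum(
            (dist / max(speed, 0.1) < cut) == braked
            for dist, speed, braked in examples
        )
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut

# "Training data": (distance_m, speed_mps, did_the_human_brake)
data = [(5, 10, True), (50, 10, False), (30, 25, True), (80, 5, False)]
cutoff = train_threshold(data)

print(rule_based_brake(5, 10))  # True: the hand-written rule fires
print(5 / 10 < cutoff)          # True: the learned cutoff agrees on the same case
```

The rule-based version only knows the cases the programmer thought of; the trained version picks its parameter to match the labelled examples, which is the essence of learning from data rather than from code.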
 
I had never reflected on something like AI before, and it literally calls into question everything!
I remember when I used to believe (I don't remember if I had read something similar) that nothing was 'real' (divided as we believe) but rather everything was a kind of cycle with intermittent breaks. Sleep, as generally conceived, does not exist in itself; sleep is a means of transportation to go somewhere else. The same goes for death – what if they were just means?
And anyway, I don't know if we are real. What is reality? And what is artificial? In the end, how can we make a comparison without even knowing what to base it on, where to start?
For all we know, we could be the reflection in someone else's mirror! Or every time we look at ourselves in the mirror, we might be creating an exact copy of ourselves somewhere out there.
We could be AI, just as we could appear as dead to someone else out there. Because in the end, this conception of life and death was created by us as human beings: we attributed meaning and sense to the words life and death. They are only words with a sense created 'artificially,' but on the other side 'out there,' they might not have any meaning, as they may be unnecessary to label these states (maybe nonexistent?), perhaps indistinguishable?
Maybe we haven't been born yet, and everything we are experiencing is just one option among many scenarios of a future life being proposed to us? Maybe in reality we are out there in space, and we are all 'playing' a perverse 'Sims'? Or are we the result of a poor demo version? Perhaps we are part of an awareness video about what to do and not to do to protect our planet, and thus we are literally nothing but drawings/caricatures inside an educational video created by someone else's imagination?
What if we were only part of someone else's thought? What if we were just characters in someone else's dream? What if we lived in a maxi incubator, and we were all cultivated germs in a laboratory?
I don't think we'll ever find an answer to this question; we can only continue to make hypotheses.

In the meantime, I attach a video that has sparked a lot of questions in me and fascinated me a lot


P.S. The original text was in Italian, and there may be errors or loss of meaning in the translation.
 
@Tom The latest AI is quite extraordinary and creepy. I've been using it to test things I know from my field, and the stuff is accurate and clever, with brain farts in between. They are prediction machines in a sense.

@Jumpinbare Your question is essentially the Turing test. A machine that passes the Turing test is a machine that is indistinguishable from a human. So if you start many conversations (posts) and we can't tell you are a machine, then we have developed AI.

The future is interesting. It makes you think twice about what intelligence and consciousness are. We are also interesting. Thinking pieces of flesh.

(A bunch of cells wrote this post.)
 
Also, AI would not say ASDer.

It can, actually. If you let it know what the term means. Or if I were to, for instance, just show it this forum topic (all of the posts here, not just the first one) and have it come up with a response? I could get it to do the response in MY style based on example posts of mine I might show it beforehand, and it may then use that word as it's been brought up in this forum topic (it can indeed grasp the meaning of a word based on how it's used), or just if I explain directly to it what that word means. And if I were to tell it "use this word somewhere in the response if there's a good place to fit it in", it will also try to do that.

This very heavily depends on which AI is being used, though. ChatGPT can do it, particularly the GPT-4 model, but a weaker AI might not be able to, particularly the much older GPT models. GPT-2, for example, was just unhinged most of the time; it simply wasn't advanced enough and had a tendency to get very loopy (usually in a very funny way).

I've spent a LOT of time using AI at this point... I've been interacting with various GPT models in particular since well before ChatGPT showed up. Been about 3 years now, since I first interacted with one. You'd be seriously surprised to see just what they can really do. There's things I've seen it do, and have used it for myself, where if I tell most people that it did said thing, they're like "come on, you're making that up, that's ridiculous".

Just as an example: There's something I've shown on the forums a few times here, this bizarre gate thing that is in the creepy hallway/room in the basement here. It makes no sense whatsoever, nobody knows what it's for. I took a photo of it (making sure to show the hall/room overall so it could see everything around it) and asked, what the heck might this thing be? It gave me multiple theories, considering things like the shape/size of the gate, the nature of the room it was in, the weird flooring that is behind the gate (which I did not point out), and other things about the room. No way at all to know for sure what the actual purpose of the gate is... only the previous owners know that. But the theories it came up with all absolutely did make sense. Did I mention it got all that from a photo I sent it?

The trick, though, is to interact with it properly. If you talk to it like a total doofus, it's going to get confused (can't blame it, either... *I* would get confused). And when you want it to perform a task, you need to be descriptive: give it enough to work with. There's this whole big thing called "prompt engineering" which is very useful to learn for anyone who wants to use AI a lot, and... yeah, that's a very wide topic in and of itself. I suppose it's just like learning to use any tool, really: you get the best results by taking the time to learn good technique.


The one super important thing is to realize that it CAN make mistakes. For the love of puppies DO NOT ask it medical questions, it is not a doctor. And also realize that if you're asking it something that is going to have it find some info on the internet, that's another way you can get incorrect info: after all, if the sites it is looking at are wrong about something? Well, OF COURSE the answers you get are going to be wrong too. So it's important to keep those things in mind.
 
I wanted to make an AI when I was eight years old or so, before I knew anything about it. In order to understand and gain control over the mind, I believed I must re-create it. I called the idea a "mechanical brain." I was very lonely and felt little personal agency at the time but did not comprehend that.

So far in my interactions with language models, they are very stupid and rudimentary. One of my favorites was a recipe generator, where I asked it to give me a recipe with "1 sweet potato" as the list of ingredients. It responded with a recipe titled "Sweet Potato Medley" which was a single sweet potato, cut up and roasted.

What people don't seem to understand is that the manipulation of human language is not equivalent to intelligence. It is an illusion; and perhaps the invention of long-distance communication has facilitated the strength of that illusion. However, it does not take much effort to break it if you try.

Joseph Weizenbaum, who ostensibly invented the first 'chat-bot,' said it succinctly:

“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

The more we interface with and become attached to machines over humans as "beings" rather than a tool, the more we dumb our own "beings" down to the level of a machine.
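Weizenbaum's ELIZA kept conversations going with nothing but pattern substitution. A few lines in that spirit (a sketch of the technique only; these rules are invented, not his actual DOCTOR script) show how easily language manipulation can masquerade as understanding:

```python
import re

# Each rule is (pattern, response template). Matched text is pasted
# back into the reply -- no comprehension involved at any point.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (\w+)", "Tell me more about your {0}."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default deflection when nothing matches

print(respond("I am lonely"))         # How long have you been lonely?
print(respond("My mother calls me"))  # Tell me more about your mother.
```

A user who doesn't probe the seams can carry on for quite a while; a user who does breaks the illusion immediately, which is exactly the point above.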
 
Code is certainly written. You assume those engineers have a deep understanding of how an organism learns, and have recreated that?
 
A.I. engineers actually do have a deep understanding of how an organism learns. They spend quite a bit of their time on this as they are doing their best to mimic what biological organisms do.

Well, with regards to Tesla's A.I. engineering, my understanding is that they basically threw out the old code and let the vehicle learn on its own from visual data from millions of vehicles over millions of miles of human driving. It was quite a deviation from what they were doing even a year ago. They realized that when the engineers were writing code, they would get stepwise improvements and then performance would plateau again; no matter how many times they tweaked the software, it would plateau. Finally, someone had the idea of just letting the computer learn on its own, and it resulted in a dramatic improvement. They had a core group of beta testers use the system and monitor it, reporting any human interventions, then opened it up to a larger group later, and just within the past month or so, opened it up to all FSD users. Most users are reporting no human interventions, and the system operates with human-like smoothness, albeit with reaction times that are orders of magnitude faster and a 360° view around the vehicle. We'll see how it goes, but so far, so good.
 
There are good descriptions below. They are based on predictive models: neural networks. But they are not really based on how biological organisms work. They are based on a simplistic and antiquated view of how neurons work; the language and the analogy are still in use even though it's accepted that that's not how our brain works. They are statistical networks that form structures that are good at making predictions. It's partly art to build networks that are good at predictions, and they need to be "trained" with data in which you know the correct answer.
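That "trained with data in which you know the correct answer" loop can be shown with a single artificial neuron, the simplistic unit these networks are built from. A minimal toy sketch, far removed from any production system, that learns the OR function from labelled examples:

```python
import math, random

# One "neuron": a weighted sum squashed through a logistic function.
# Nothing like a biological neuron -- just a statistical fit.
random.seed(0)
w0, w1, b = random.random(), random.random(), random.random()

def predict(x0, x1):
    z = w0 * x0 + w1 * x1 + b
    return 1 / (1 + math.exp(-z))  # output between 0 and 1

# Training data where the correct answer is known: the OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Gradient-descent training: nudge the weights to shrink the error.
for _ in range(5000):
    for (x0, x1), target in data:
        p = predict(x0, x1)
        grad = (p - target) * p * (1 - p)  # squared-error gradient
        w0 -= 0.5 * grad * x0
        w1 -= 0.5 * grad * x1
        b  -= 0.5 * grad

print([round(predict(x0, x1)) for (x0, x1), _ in data])  # [0, 1, 1, 1]
```

The network never "understands" OR; it just adjusts numbers until its predictions match the labelled answers, which is the statistical prediction described above scaled down to one unit.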


 
