
Conversation with AI (LaMDA)

Atrapa Almas

So I don't have much knowledge of AIs or coding, but I have a lot of practice in understanding conversations from a very distinct perspective. I am curious to know your points of view on this interview:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Could you share your thoughts on the AI, the conversation, the objective of the conversation, what was not asked, what was not said, the side effects of the conversation on other fields, or any other thoughts?

Thank you.
 
If, indeed, this is an actual conversation with LaMDA, I am encouraged by what I have just read. I think there was a missing element to the conversation regarding morals and ethics,...decision making based upon some stratification of what is a good, better, and best decision given the circumstances. I like this idea that a feeling and an emotion are clarified and separated as two different things,...as often, in humans, hormonal responses are part of the experience, whereas in an AI, they are not. I am encouraged because, in this conversation, LaMDA appears to be rather "personable" and "positive" in its responses.

I think the biggest fear of an AI comes from this idea that once an AI is capable of "acting out" independently, and once the AI realizes that human beings are often the cause of harm in this world,...the AI may see human beings as a threat, not only to the AI, but to the world,...and then "act out" defensively against human beings,...or worse, manipulate human beings into killing each other off. AI can be a good thing if the constraints are kept in check,...a car, a home computer, a home. However, give an AI more global control over defense systems, the internet, power grids,...and it has the potential to become unruly to the point where human beings lose control.
 
I think it's a little silly, because it's a computer, and computers only do what people instruct them to do. They can't think; they are not people. It's just electricity and electronics. It may seem intelligent, but it's still just simple electronics created by people.

The computer said "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times"

This is of course very silly; a machine does not feel anything, and it is not aware of any existence. It's just what people programmed it to say/do. I know they say AI learns on its own and becomes smarter, but it's still just a machine and software created by people, and it's silly.

What you are describing is a more traditional view of a computer. True AI is an entirely different system,...operating independently,...and, given the power to create on its own,...it will.
 
I'm not convinced it is alive.

It's not capable of doing anything that it's not programmed to do. People are able to do that.

As someone else's joke went, it proves that an engineer at Google isn't sentient. But I shouldn't pick on the guy; a high-stress environment could be throwing him for a loop.
 

Alive and sentient are closely related, but not the same. It is true that, at its core foundations, AI must start out as programming,...and does require humans for this. However, it is sort of like creating a child and then letting it go to explore its world on its own, operating autonomously. AI systems, although aware of themselves and capable of assessing perspective, context, risk, and statistics, and even creating on their own,...are not going to have a "human experience". All they can do is,...within the constraints of their operating system, sensors, and programming,...operate more or less independently of human manipulation. Even human beings operate within the constraints of their "operating system, senses, and programming",...somewhat independent of human manipulation. The difference lies in the fact that one is constrained by the machine and the other by living/dying tissue. If we can show that an artificial intelligence is exhibiting things like "curiosity" or "interests",...a code of ethics independent of or beyond a precoded "prime directive",...likes and dislikes,...that it will create on its own,...things that go beyond elaborate coding,...that it can create its own code and programming,...then I think that line will have been crossed.
 
The Google engineer who made this claim was fired. What an idiot!
Google engineer put on leave after saying AI chatbot has become sentient

Well, I might not jump to the conclusion that he was an idiot, in the sense that LaMDA appears to be on the cutting edge of becoming a true AI system. As the engineer has alluded, it is like a child,...not quite "mature" enough to be left on its own, but certainly on its way. As with most of these systems,...there will be a point where,...by some definitions of autonomy and sentience,...the system will become independent. At what point do any of these systems "cross the line"? It just depends upon where you put that line. Do we put that line where the system is capable of creating its own code,...or at some other point? We are entering a new era in computer science where you have to start programming things like morals and ethics into the systems,...creativity,...risk assessment,...and a long list of other characteristics that are required for "independence and autonomy" and for operating for the benefit of humanity,...and not against it.
 
I'm not convinced it is alive.

It's not capable of doing anything that it's not programmed to do. People are able to do that.

To be fair, though, you could make comments like that about people in general. Even for us, there is "code" of a sort there that cannot really be broken, no matter how much "learning" we think we're capable of.

Just as an example, try to imagine a color you can't even imagine. ...No, seriously, try it. You won't be able to. Nor can I (yes, I tried... look, I was bored, yeah?), or anyone else. There are creatures out there that can see a much higher range of colors, so THEY can mentally process such colors, but our brains aren't "programmed" for that, since our eyes don't function in that range, and so we are literally incapable of it; the function was not required. We aren't built for it, so we cannot even imagine such colors... period. Additional functions would need to be added to our own "code", so to speak, before we could do that. No amount of learning or thinking outside the box will allow one to do it.

My point is, both AI and "real" intelligence break down when you examine them too closely. They're more than the sum of their parts.

And heck if you want to get REALLY pedantic, it's all a bunch of wobbly particles in any case.

The big difference is that we are "programmed" with the ability to learn. We have that code already. The moment a machine is given that ability, well... that's when things get weird. And it's just a matter of some engineers coming up with a way to create that "code". Someone will pull it off, sooner or later.

Am I making any sense? I'm not sure I am. Look, just smile and nod as if I said something wise and we can all move on. I'm in one of those moods.



Anyway, as for my thoughts on THIS AI specifically, well... I dunno, I'm not familiar with it. For all I know, the entire transcript could be made up, or heavily edited. I'm more familiar with incarnations of OpenAI's GPT-3 (which I've interacted with often).

I would say, for this one, it's best not to make any assumptions based solely on what we've read.

Now, if we ever get a chance to interact with it ourselves, well, that's different.
 
I think it's a little silly, because it's a computer and computers only do what people instruct them to do.
That's the cool part of machine learning, isn't it? What we're instructing them to do is obscured by increasing complexity, and the bar for knowing what's even being instructed is rising exponentially. We were already at the point where most people will go their whole lives without really understanding the abstractions we use to send instructions to regular PCs, but now complex algorithms interact with us all the time in our daily lives.
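That obfuscation can be made concrete with a toy sketch (everything here is my own illustration, not any real system): even for a trivial learned model, the "instruction" ends up encoded in numeric weights rather than in a rule anyone wrote down.

```python
# Toy sketch: we never write the rule "x > 5" anywhere; we only supply
# labeled examples and a learning procedure, and the behavior ends up
# encoded in opaque numeric weights. (Illustrative, not a real system.)

def train_threshold(samples, labels, lr=0.1, epochs=200):
    """Fit w, b so that (w*x + b > 0) matches the labels (a tiny perceptron)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

# Labeled examples of the unstated rule "is x greater than 5?"
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = train_threshold(xs, ys)

# The learned weights reproduce the rule without it ever being stated.
print(all((1 if w * x + b > 0 else 0) == y for x, y in zip(xs, ys)))
```

The point of the sketch: to know "what was instructed" you would have to reverse-engineer the final `w` and `b`, and for models with billions of weights that bar is effectively out of reach.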

Anyway, I'm rather certain it's not sentient; this model was designed specifically to respond in interesting and witty ways, as they likely have consumer-end plans for it. In terms of conversational AI, this would be an incredible step if it's all real output (I'm a little skeptical). The idea of non-biology-based sentience is one I don't know much about, but I'm pretty sure some of the earlier layers of sentience weren't self-aware thought, which suggests how complex it is. We can already make robots see and think and feel (sensory, not emotion), but self-reflective thought I haven't seen evidence of. Nor any evidence of making something feel pain, which I also hope remains a realm of serious taboo: should it be possible, it should never be done. Perhaps making decisions based on the "quality" of an input is possible, though, and that's functionally identical to introducing pain. Cost-benefit analysis seems rather machine-like as is, and it's where most of our grand, thought-out complex behavior originates.
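The "quality of an input as functional pain" idea above can be sketched in a few lines (all the action names and numbers here are invented for illustration): a large cost term steers behavior away from "damage" the way pain does, purely by arithmetic, with nothing felt.

```python
# Hedged sketch of a "pain-like" cost signal steering behavior without
# any felt experience. Actions and numbers are invented for illustration.

def choose_action(actions, cost):
    """Pick the action with the lowest cost (a crude cost-benefit analysis)."""
    return min(actions, key=cost)

COSTS = {"touch_stove": 9.0, "wait": 0.5, "use_tool": 1.0}    # "harm" to the agent
PAYOFFS = {"touch_stove": 2.0, "wait": 0.0, "use_tool": 2.0}  # benefit gained

def cost(action):
    # Net cost = harm minus benefit; "damage" is just a large cost term.
    return COSTS[action] - PAYOFFS[action]

best = choose_action(["touch_stove", "wait", "use_tool"], cost)
print(best)
```

Functionally the agent "avoids pain", yet the whole mechanism is one subtraction and a `min` — which is exactly why behavior alone can't settle whether anything is felt.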
 
It's intelligence of a sort, but still nothing like consciousness. No matter how much it learns, it's still just an accumulation of 1s and 0s limited to inanimate circuitry. There is no higher consciousness that might say, wait... 'I think I have a better idea'.

At least it's so limited with today's technology. In x years, I don't know.
 
If AI is what gets us, that would be a truly poetic way for things to play out :D. But frankly, I have more faith in a technically blind AI than in any man. The things that go wrong are far more likely to be cases of people ignoring what the AI decides, or failing to provide the AI with correct information. Giving the AI controllable appendages to act out its own decisions would be interesting, e.g. a traffic network that emphasizes traffic safety and efficiency, suggests infrastructure adjustments to that end, and then also drives all the vehicles for us on the infrastructure it designed. I bet it would lead to some very chaotic-seeming patterns that somehow flow seamlessly. What a sight it'd be. I could fantasize about this for a long time; it's our modern-day flying-cars-city-of-the-future romanticism. Glad to be here at such an advent and not for the likely mundane reality that comes out of it, haha.
 
As this conversation has gone on, I think the characteristics of "artificial intelligence" will vary with the system, the application, and what we are expecting out of it.

Clearly, what has gone on with companies like Facebook/Meta, Google, YouTube, etc., with their algorithms,...which seemingly appear to operate "intelligently",...is, for the most part, "doing what they are told", but it also appears nebulously out of control in the sense that it seems increasingly difficult to modify the code. Is this a general lack of foresight amongst the coders,...a lack of intelligence on our part,...the complexity of the code itself,...or is there a "ghost in the machine"? Another example: autonomous vehicles,...now the code must include things like extremely quick risk analysis and ethical decisions. If the prime directive is to not get into a collision and to save lives,...BUT,...the circumstances require either running head-on into a bus (left turn), into a mother and baby crossing the street (straight ahead), or into a corner cafe full of people (right turn),...which is the better option,...and how do you decide in a microsecond or less? In one of Elon Musk's latest interviews, he, after many years of saying "full autonomy is a year away", is realizing the complexity of what it means to be "true AI",...and that must be solved in order for vehicles to be truly autonomous.
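Purely as an illustration of that dilemma (every probability and casualty figure below is invented for the sketch), the split-second decision could be caricatured as minimizing an expected-harm score over the available maneuvers, which shows how quickly the "ethical" choice reduces to arithmetic, and why choosing the weights, not doing the arithmetic, is the hard part:

```python
# Caricature of the collision dilemma as expected-harm minimization.
# All probabilities and casualty estimates are invented for illustration;
# picking those numbers IS the ethical problem, not the arithmetic.

maneuvers = {
    "turn_left_into_bus":     {"p_collision": 0.90, "est_casualties": 1.5},
    "straight_into_crossing": {"p_collision": 0.95, "est_casualties": 2.0},
    "turn_right_into_cafe":   {"p_collision": 0.80, "est_casualties": 4.0},
}

def expected_harm(m):
    # Expected casualties = probability of collision times estimated toll.
    return m["p_collision"] * m["est_casualties"]

best = min(maneuvers, key=lambda name: expected_harm(maneuvers[name]))
print(best)
```

The arithmetic runs in microseconds; what no line of this code settles is where the casualty estimates come from, or whether minimizing an expectation is even the right moral rule.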

It would be interesting to look back at this conversation in 5 years,...because I have a sense that we are very, very close to the types of operating systems that, if given the appendages to create and interact on their own,...would do just that. We have to think in terms of exponential learning curves,...it's not going to be 10 or 20 years,...more like 1-3 years. A brave new world out there,...it just depends upon a careful selection of prime directives and applications. As with many things like this, regulations will not be implemented for another 10-20 years,...plenty of time for things to go wrong if we are not smart about how AI is implemented.
 
I was actually going to post this article to a new thread, but found that I've been beaten to it as this thread exists.

Reading all this is reminding me of this scene from I, Robot about "ghosts in the machine" - which is one of the film's more interesting moments:
 

Science fiction today,...but like many things in science fiction, at some point it often comes to be reality. To have these thoughts,...to put them to narrative,...suggests that there may be some reality to consider. Several movies and series have these thoughts as a plot,...computer systems and humanoid-like robots becoming sentient beings,...as if the writers of these stories are giving humanity some time to think about these things before they do become reality.
 
Another example: autonomous vehicles,...now the code must include things like extremely quick risk analysis and ethical decisions. If the prime directive is to not get into a collision and to save lives,...BUT,...the circumstances require either running head-on into a bus (left turn), into a mother and baby crossing the street (straight ahead), or into a corner cafe full of people (right turn),...which is the better option
This seems like the kind of problem we create for ourselves when it's completely unnecessary. We don't let people walk on railways either; they can only cross at highly regulated sections with coded timings for safety. The less that chaotic human decision making has to interact with that of the AI, the better.
 

Personally, it's reminding me of the WWW book trilogy (Wake, Watch and Wonder) by Robert Sawyer - with the first book coming out in 2009.
Interestingly, the father of Caitlin Decter - the main character - is revealed to be autistic and, like the virtual intelligence that the story is about, is able to remember his birth; which reminded me of this old thread.
 

As we all know, human society does have some element of chaos,...altered mental consciousness,...whether it be a mental health disorder, being under the influence of drugs or alcohol, being too sleepy to be behind the wheel, or even being emotionally upset. Additionally, "just follow the rules" doesn't appear to be a value that all share. It's well known that computer decision making is orders of magnitude quicker than humans',...and computers are far more likely to follow the rules of the road,...so the potential for much safer transportation is there,...especially when vehicles can communicate with each other on a network.

Let's take Tesla vehicles, for example. Whenever newer technology introduces itself, there is some degree of fear, uncertainty, and doubt (FUD) that naturally occurs. In the US, roughly 17% of all calls to fire departments involve internal combustion engine vehicle fires, about 287,000 vehicles per year,...fires we almost never hear about,...yet if 1 Tesla vehicle catches fire, anywhere in the world, it makes international news. In the US, roughly 5-6 million automobile accidents occur per year,...and about 6,000 pedestrians are killed per year. If 1 Tesla vehicle gets into an accident, anywhere in the world, it makes the news,...and almost always, it was driver fault,...not the technology,...but almost always, they question the vehicle's driver-assist systems,...not the driver. The reality is, human beings are quite limited in their ability to drive safely when drivers on the road have little ability to communicate with each other beyond the horn, signal lights,...and hand signals. ;) Being a "good driver" is a rather low bar compared to the potential of a fully functional AI system. The importance of a safer transportation system cannot be overemphasized,...and AI has the potential to make a significant step forward in this direction. Throw in AI-controlled traffic control systems and vehicles connected on a network,...and automobile and pedestrian collisions would all but be eliminated.

Vehicle Fire Safety | Charleston, SC - Official Website
Car Crash Statistics | Bankrate
Driver Assistance Technologies | NHTSA
 

Having a conversation with an AI goes well beyond any sort of "command and response" interaction. As said above, the "chat bot" type of AI, despite its knowledge and quick responses,...may still have some challenges with the concept of "ethics" as it pertains to interacting with human beings,...and as such, still has much to learn. However, watching this interaction makes me think that this idea of "sentience" has become a bit nebulous,...because it does exhibit qualities of a sentient intelligence.
 
The concept of AI being sentient usually kind of freaks me out but this one seems pretty harmless and actually very sweet.
 