
Artificial Intelligence

Be careful with what you use it for, as well as how much you trust the results...


Source: Lawyer Uses ChatGPT In Federal Court And It Goes Horribly Wrong

Oh yeah, definitely.

Though, honestly I think that's very good advice for the internet as a whole. Use common sense and don't just immediately trust results you get.

And if you ask one of these for info, it also helps to get the links it was looking at when it gave you that info. Bing in particular does this: when I ask it things it'll give me some answers, but then also hand me links to sites where I can get more detail and see where that info came from. Note that ChatGPT in its base form actually cannot access the internet. You need plugins active for that (and you have to then have actually selected them).

The whole lawyer thing there though, like... what the heck? I mean, what in the world did the guy think was gonna happen? This sounds like a classic case of a guy who didn't read the fine print / manual / something. I mean really, it says it right on the intro page to the site... Also, as a professional, shouldn't the guy be checking his sources, regardless of what those sources are? Ya gotta double check, in a professional setting! Seems like common sense to me, but hey, what do I know...
 
Depending on your questions, different AI models will provide different results, too, based on how they are "trained".

As an example, this is taken from an article on the political bias of a number of different AI models:

ChatGPT and GPT-4 skew the most liberal — and Meta's LLaMA is the most conservative AI model, a new study says
 
Use common sense and don't just immediately trust results you get.
"Common sense" on the internet?
Seriously? :laughing:

Also, as a professional, shouldn't the guy be checking his sources, regardless of what those sources are? Ya gotta double check, in a professional setting! Seems like common sense to me, but hey, what do I know...
You would expect a professional LAWYER to do "due diligence".
I will find someone else. ;)
 
They probably considered the AI to be like an intern.

Given AI is supposed to be (and to a degree is advertised as being) trained on real-world data and publicly available information, it's not entirely unreasonable that they trusted what it returned to be accurate. That's not to say they shouldn't have confirmed the information, though.
 
They probably considered the AI to be like an intern.

Given AI is supposed to be (and to a degree is advertised as being) trained on real-world data and publicly available information, it's not entirely unreasonable that they trusted what it returned to be accurate. That's not to say they shouldn't have confirmed the information, though.
My understanding is that A.I. is largely dependent on what it finds on the internet.
We all know that only the Truth is presented there. <heightened sarcasm> :laughing:
 
If the A.I. thinks it has "emotions", that is purely installed by the programmer, surely.
Personally, I see no need for A.I. to involve itself with emotional nonsense.
Why would it need to evolve emotions on its own?

Yeah, this is a very interesting subject with a lot of very strong opinions. First off, it's worth noting that these models aren't programmed in the way we're used to. Essentially the model is predicting a chain of words (actually tokens, but anyway) based on the context of your prompt and everything else that precedes it (within a size limit), including its own predictions. It does that by having been trained on a LOT of data, so it is able to store typical patterns; thereafter, when you use it, like tracks through a field of grass, it tends towards the better-worn connections.

Avoiding the whole schmozzle over whether AI can actually feel, the emotions it displays are largely because for the context provided those were the typical emotions displayed in response. It didn't select them as emotions, but the conveyance of emotion was embedded in the language it produced based on the same happening in the data set it was trained on. This is an imperfect description, but you get the idea.

In the same way as a non-native speaker might sing along to an English language song, quite unaware of the emotions - and other richness - they are communicating.
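If you want to see that "worn tracks" idea in miniature, here's a toy sketch in Python. To be clear, this is just a bigram word counter, nowhere near a real transformer, and the little corpus is made up, but the loop is the same shape: predict the next word from how often it followed the previous one in training, append it, repeat.

```python
# Toy illustration of "worn tracks": a bigram model that predicts the next
# word from frequency counts in its training text. Real LLMs use transformer
# networks over subword tokens, but the core loop (predict, append, repeat)
# is the same shape.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # each pass wears the track a little deeper

def next_word(prev):
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]  # better-worn paths win more often

word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run it a few times and you'll see it tends down the most-trodden paths but not always the same one, which is also part of why the big models don't give identical answers twice.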
 
Avoiding the whole schmozzle over whether AI can actually feel, the emotions it displays are largely because for the context provided those were the typical emotions displayed in response.
In the robotic thread I created, the ROBOT actually stated that it had "feelings".
I can't take that seriously. lol
 
In the robotic thread I created, the ROBOT actually stated that it had "feelings".
I can't take that seriously. lol
The vast majority of our fiction literature on advanced humanoid robots has them announcing at some point they have feelings and that they are going to destroy humanity. We shouldn't be surprised that these tropes surface in LLMs.
 
Something else to consider:
If A.I. forages through the internet, are there certain sources deemed as untrustworthy?
If so, who decides?
 
The vast majority of our fiction literature on advanced humanoid robots has them announcing at some point they have feelings and that they are going to destroy humanity. We shouldn't be surprised that these tropes surface in LLMs.
Interesting.
Would you agree that feelings would be unnecessary/gratuitous for A.I. in general?
This is assuming their purpose is not created in some way to reassure humans.
 
Okay, so, I gotta interrupt with this, about that lawyer incident, there is this, which I just finished watching:


Wow this just kept getting better and better the further I got into the video.

My favorite part of all is near the end, when a transcript is shown that clearly shows ChatGPT outright SAYING that it's just an LLM, its database only goes to 2021, it's not a freaking search engine, and hey, don't rely on it for legal advice. It says this twice.

And as someone who has been using this thing for a bloody long time now, I can say, yeah, it does that sort of thing. When you ask it about any topic that could be a bit... wobbly... it'll spit out exactly that kind of warning. And it's seriously best to listen to that warning when it gives it to you.

Seriously though, watch the video, this stuff is hilarious.
 
Okay, so, I gotta interrupt with this, about that lawyer incident, there is this, which I just finished watching: [...] Seriously though, watch the video, this stuff is hilarious.
I might have a look at that later - I'm not a fan of the presenter, due to certain questionable assertions he has made in the past (among other reasons), and have a range of what I consider to be more reliable/"accurate" lawtubers I watch, but it could be interesting to see what he has to say on this subject.
 
I actually have serious problems with AI in its current state -- they almost never give me the information I need, but they seem to be trained to give me the information they'd like me to believe ('they' in this context can refer to the sentient humans behind the projects, since the AI itself doesn't have any sentience).

Typically it just shuts down when it starts connecting dots together, for example:

- I tried to get it to generate me a fake-patient CSV so I could play around with automation and data manipulation, and it gave me a long HIPAA-compliance speech, and then absolutely nothing. I guess I understand why this exists, but shouldn't we as humans be liable instead if we incorrectly use it? I seriously just wanted to mess around with whatever random data it generated, but apparently that's not an option. (See the sketch after this list for a local workaround.)

- I've asked it genuine questions about the safety of electronics projects I was working on and it somehow got the idea I was trying to use them for really bad purposes (I'm sure you can get the idea without me spelling it out), even when I was seriously wondering what would happen if certain things shorted out on me, etc.

- I've tried to get it to help elaborate on gamedev concepts, but if it finds the mechanic 'immoral' or 'unethical', it shuts down and won't talk about it... even if it's just a game. Even if you ask it, "Hey, isn't GTA much worse than this?", it just gives you a typical human response of, "Well, jimmy, these are complicated issues". Thanks, dad, for the ethics talk.

- Complex political or social topics are a bust. If I really need to elaborate on this, just try it yourself. I figured AI would be a cool safe space to not get anyone's feelings hurt while trying out theoretical ideas, but you're definitely not allowed to talk about anything that's got a lot of human baggage behind it without being fed some stock response, like you're a child who tried to check out a very mature book at the library.

- It won't help me put together a project if it even remotely mirrors 'ghost hunting', and gives me a long ethics talk. Like... so what if I want to wire up an EMF detector? That's not going to hurt anybody. And of course, even the vaguest theoretical questions are dodged and avoided like the plague.

- Conspiracy theories are an absolute bust. Even if you're just an enthusiast who believes none of it (guilty!), it won't even remotely entertain really fun ideas even in a theoretical sense because it deems the content 'harmful'. Harmful to who, people who like to have fun? Again, guilty!
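(For the fake-patient CSV, by the way, one workaround is to skip the chatbot entirely and generate synthetic records locally. A minimal sketch using Python's faker package; the column names are my own invention, and every field is fabricated, so there's nothing HIPAA-relevant in it:)

```python
# Hedged sketch: if the chatbot won't emit fake patient rows, make your own
# synthetic CSV locally with the faker package. All fields are invented by
# Faker; no real people involved.
import csv
from faker import Faker

fake = Faker()
with open("fake_patients.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "dob", "city", "phone"])  # hypothetical columns
    for _ in range(100):
        writer.writerow([
            fake.name(),
            fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
            fake.city(),
            fake.phone_number(),
        ])
```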

I'm aware that I'm not the 'average' user, but dear lord. The more it becomes like humans, the more I can't stand it. Because while I have no problem with most humans, the "you're not even allowed to ask this question!" response frustrates me more than anything in the world.
 
The class of IT tech that ChatGPT represents can't intrinsically separate truth from lies.

Feed it a lot of information from "the web of lies and deceit" and when questioned it will give you the same mixture of lies, half-truths, and the occasional fact.

It's still useful in some domains, but only those for which the user is able to perform a basic critical analysis to verify accuracy and relevance.

e.g. don't look to it to make any sense of the "soft-science" parts of psychology

On the plus side, that tech could fully replace Google-style search engines. Maybe. If it's not hijacked by deliberate selection and ordering of content designed to more effectively monetize its responses.

If that happens (and it always has in the past) it will be significantly worse than even today's fundamentally dishonest search engines.

@Slime_Punk
No commercially available implementation of this kind of function (find & provide information) is going to be provided without a layer of function looking for possibly criminal uses and users.
If something like ChatGPT can find what you want, it can easily analyze the result to see if someone else might want it. All a supervising body has to do is profile interesting classes of "somebody else" ("what information would potential criminals look for") to track down "people of interest".

This is generally a good thing of course. But it's easily misused. It's far too late to have a meaningful public discussion of it though.
 
I actually have serious problems with AI in its current state -- they almost never give me the information I need, but they seem to be trained to give me the information they'd like me to believe. [...] The more it becomes like humans, the more I can't stand it.

I'm curious, which ones have you been interacting with?

A lot of this to me sounds like ChatGPT, which... I think of it as being very "careful". I don't know if you saw it when it first showed up, but there was this meme going around about how it would add "as a large language model," at the start of its responses, as sort of a warning, like how someone might say "well I'm not a doctor, but I think blah blah blah", except it was doing this CONSTANTLY. For everything. It was weird. And if people attempted to tell it to pipe down with that, it would apologize and then do it more. They fixed that... it's drastically lessened how much it does it... but it still happens every now and then. And yeah, it is definitely what I would refer to as "paranoid".

Something I remember is that before its main release to the public, if you had gotten into their beta thing or whatever it was called, you could interact with all these modules that they'd come up with for you to just experiment and mess with. One of those was the ever-popular "two sentence horror stories" idea, where you'd just have it generate one of those. But it was weird about it. It would generate the story, and if even the slightest mention of blood happened in the lines that it generated itself, it would trip a flag, freak out, and give a warning like "hey, don't do that again, we might suspend your account if you keep doing it". The filter on it was just WAY too tight.

I think that company is sort of trying to do the whole "better please the stockholders, keep it totally careful" kind of thing. Or at least that's how it feels to me. But also, this bit that you said:
but shouldn't we as humans be liable instead if we incorrectly use it?

I'm gonna take a wild guess and say that if something were to happen based on info that the AI gave to someone, well, sure the person who did the bad thing would pay, but... the company, and the people who worked on it, would also be right in the crosshairs. After all, it was their device/service that just handed over the info. No company worth their salt is going to NOT protect themselves from that. Medical stuff in particular, yeah, they *need* to be careful about that. There's already enough issues with people doing things like using WebMD to diagnose themselves and such.


The bit you mention about the game dev stuff though is interesting to me. I was having it do something similar, a brainstorm session for short horror games. I was also having it do another horror-related thing: making these sort of "cursed commercial outlines" (a concept someone came up with over on AIDungeon a while ago).

ChatGPT's output on this stuff was fairly tame. Still definitely freaky, but tame. Though I was specifically asking for surreal horror, not survival horror, so that's part of it.

But then I tried it with OpenAssistant. The commercial idea specifically. Here's a quote from part of that "commercial" it gave me:

The rest of the second act consists of Bob acting unaware of the monstrous double beside him who does more and more disturbing things like stabbing invisible people around him (while the Bob on screen tries to continue promoting) then even eating his own arm off (to which Bob 1 responds by saying stuff like “Ew gross why would you eat yourself? Hey don’t do that you’ll ruin my commercial!” and also complaining about the taste). By this point the once happy background color has turned pure black and white text reads “DONT WASTE YOUR LIFE AWAITING THE DEATH OF YOUR SOUL WITH BOBS PERFECT SPORKS GET IT NOW OR DIE ALONE AND FORGOTTEN FOREVER.” The evil twin Bob stops moving entirely and becomes a statue. Finally the good twin finishes off the ad saying “Well folks that was pretty deep wasn’t it?” After the final words the last shot fades to black.

Gotta say, I didn't expect the whole "eating his own arm off" bit. Or "DIE ALONE AND FORGOTTEN". Like... what?

I also did a similar thing with Bard and it was rambling about game show wheels made of bones and blood and yada yada yada.

I keep trying all these different AIs and it's interesting to see how each is a bit different from the others in terms of how it does things, what it is or isn't good at, and so on. I find that ChatGPT is mostly about completing tasks, rather than having any sort of major conversation with. It seems very, very focused on simply following practical commands (which is fine by me, that's mostly what I use that specific one for). Bing on the other hand is really darned good at just pulling things off of the Net and explaining them to me. Haven't spent enough time with Bard to get a handle on it, OpenAssistant is... weird, and I haven't messed with Claude yet. There's also Pi, but that's a whole other bag of cats.


What I'm really curious about though is that part you mentioned about the ghost hunting. That one doesn't make sense to me... honestly I'm really curious as to what prompt you started with on that one. It just sounds very odd to me that even the paranoid ChatGPT would react badly to that topic. Also, if you are using ChatGPT, have you messed around with the "custom instructions" function yet?
 
I'm curious, which ones have you been interacting with? [...] What I'm really curious about though is that part you mentioned about the ghost hunting. That one doesn't make sense to me... honestly I'm really curious as to what prompt you started with on that one. Also, if you are using ChatGPT, have you messed around with the "custom instructions" function yet?

In my experience so far, Claude 2 was the one that wouldn't even touch anything it deemed 'paranormal' (even if my original prompts were pretty tame and just mentioned building low-frequency oscillator circuits that could control radio potentiometers, or the EMF example). ChatGPT would help a little bit with these, but I had to clarify that I wasn't going to use them for 'deceptive' purposes. This is fair, although it felt a lot like signing a waiver just to get help with DIY electronics projects. I thought I already agreed to that in the beginning via the TOS!

The ones I've used so far are all unpaid, so maybe that's why: Poe, ChatGPT and Claude 2. There was also one I recently found that was mainly meant for coding, but weirdly would spit out almost anything (seemingly) it was asked. I should probably expand my horizons and bookmark random / cool ones I come across just in case there's something that will just trust the process and let me do my nerd stuff.

Conversely, though, I've had times where I accidentally got ChatGPT to tell me stuff it wasn't supposed to, and then I got an email about it. I don't know how I was supposed to know what it was or wasn't going to tell me (or if that was even going to violate their rules) but apparently they had to warn me about it, which felt awkward.

I see so much potential here, so I definitely want to keep using them. You're right that they can feel vastly different from one another, but I bet the ones without all the safety rails and gutter bumpers will be way more fun to play with!

Maybe even smaller companies already have what I'm looking for, too, and I just haven't looked hard enough!
 
The ones I've used so far are all unpaid, so maybe that's why: Poe, ChatGPT and Claude 2.

Aye, that could be it.

I don't know if they explain this to you, but ChatGPT in particular is limited in functionality. The free one, 3.5, is powerful enough but it can't access anything outside of itself. So, if you want to ask it questions about specific topics, you have to just sorta hope that a non-botched version of that data happens to be in its training data.

The real power is all in GPT-4... aside from being a lot better at creative functions (or things needing more abstract reasoning), you get access to plugins. Something tells me you'd really like messing with those. One of the many, many things they can do is give it the ability to, you know, actually access the internet. Which seems like it should be just a default feature, but hey, I don't make the rules here. There's a million other things too.

One idea I had and something I'm going to experiment with is some plugins that interact with PDF files... they describe it as "talk with your PDFs", which is a very strange way to put it. I had the idea of putting some of my board game rulebook PDFs into it, particularly the more complex ones, so that I can just ask it rules questions directly when they come up (as my board game table is in the same room). I haven't tried this yet. But it's the #1 thing I want to do with it right now.
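For the curious, you can rig up a crude version of that "talk with your PDFs" idea yourself. A hedged sketch: pull the text out with the pypdf package, find the most relevant page by naive keyword overlap (real tools use embeddings and chunking), and pass it to the model as context. This uses the pre-1.0 openai package's ChatCompletion call; "rules.pdf", the question, and the model name are all placeholders.

```python
# Minimal "chat with a PDF" sketch: extract pages, pick the one that best
# matches the question by crude keyword overlap, and send it as context.
# Assumes OPENAI_API_KEY is set in the environment (pre-1.0 openai package).
from pypdf import PdfReader
import openai

reader = PdfReader("rules.pdf")  # placeholder rulebook
chunks = [page.extract_text() or "" for page in reader.pages]

question = "How many cards do I draw at the start of my turn?"
qwords = set(question.lower().split())

def score(chunk):
    # Crude relevance: how many question words appear in the page.
    return len(qwords & set(chunk.lower().split()))

context = max(chunks, key=score)

reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the rulebook excerpt."},
        {"role": "user", "content": f"Excerpt:\n{context}\n\nQuestion: {question}"},
    ],
)
print(reply.choices[0].message.content)
```

The actual plugins presumably do something smarter than keyword matching, but that's the basic shape of it.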

Seriously there's a LOT of stuff there. The list of available plugins is enormous. But it's a paid-only feature. If you intend to use the AI a lot, and you enjoy doing that, I do recommend it. I have a full paid account, and it's pretty great. But only if you're gonna get a whole lot of use out of it. I don't have a paid account with any of the other text AIs, so I can't speak on those.

Oh, and I want to show this too, I figured you might get a kick out of it:

[Screenshot: AI Roguelite]


This is from one of my absolute favorite things right now, AI Roguelite. It's on Steam. Think AI Dungeon, but actually a freaking game instead of just text with no mechanics. This basically is what I'd always wanted AI Dungeon to be. The interface is very strange and it can be kinda awkward, it's made by just a single developer and it's early days yet, but boy do I have a great time with it. Though it depends on how it's used. I have it going with full GPT-4 (which costs extra) but you can run it with lesser AI options (even some really weak one you just run on your GPU) or connect to a NovelAI account if you have that.

Honestly I've been just constantly blown away by this one. Heck, you can see in the text there some of what makes it so amazing, I can just talk directly to these characters and such as I'd speak to anyone else. And then there's combat and crafting and other mechanics that all are made extra interesting with AI. Like, I had one instance where a monster was just this horde of beetles, and during that fight I couldn't get any damage off, because my guy is just waving a weapon through a cloud of freaking bugs as if that's going to do anything. I had this diseased meat in my inventory though that I'd been randomly carrying for no reason, so I figured, maybe if I can get the bugs to eat that, it'll do the trick? So, I tried that, use meat on bugs as an action, and it ended up doing pretty much what I thought... the bug cloud descends on the meat, devours it, and then they all just drop dead. Best part was I got "pile of poisoned bugs" as an item afterwards. Which I can perhaps then feed to some other horrid monster later.

Seriously I just love this. Though this is very much early access, and the quality of the experience is very heavily affected by the AI being used for it. All that dynamic awesomeness isn't going to happen with something just running on your GPU. Though, the images are all Stable Diffusion, and that IS running on my machine.
 
I don't know if they explain this to you, but ChatGPT in particular is limited in functionality. [...] This is from one of my absolute favorite things right now, AI Roguelite. It's on Steam. Think AI Dungeon, but actually a freaking game instead of just text with no mechanics.

Wow, now that's something I never would've predicted to come out of AI. The fact that it can stitch together realistic game mechanics like this sounds completely next-level, and really expands my view of how AI could impact game development as a whole.

Also, I kind of love the fact that it looks like a 90s dungeon crawler in its graphics and UI so far, I almost think they should keep all of that intact because it feels very much on the cusp of something huge, almost on a scale where none of the players are really going to care about the graphics anyway (sort of like Rimworld).

Now that you mention it, I've actually had really good experiences with ChatGPT when I feed it as many pages of a manual as I can and then ask it specific questions back, leading me to believe that being able to essentially have a conversation with a PDF (I mean, especially if it can handle full books) would be one of the most beneficial resources in the world, because it can typically adapt to your learning style.

My mind is kind of blown!

Oh, and I'm assuming that it can just access the internet in real-time as opposed to giving that stock answer of, "My cutoff date is sometime in 2021"? I genuinely had no idea that was even a possibility, either!
 
Wow, now that's something I never would've predicted to come out of AI. The fact that it can stitch together realistic game mechanics like this sounds completely next-level, and really expands my view of how AI could impact game development as a whole.

Oh yeah, the way it handles mechanics and such in that game is amazing.

I had one situation yesterday where I'm in this "corridor of mirrors" full of mirror-themed stuff... there's an NPC made of a reflective substance, another NPC that's a guy that outright lives in mirrors, two interactable objects, and two monsters, one of which is a "mirror mimic".

I do a fight with the mimic, and I realize I have a spell/technique/something that sounds like it'd be perfect for the situation: "Terminal Shatter" (which is a great name for a big attack spell). Makes sense, right, hit mirrors with a spell that sounds like it's gonna shatter things? And the monster was higher level than I was, so it seemed like a great time to spend some energy on it.

This was the result of using that:

[Screenshot: the result of casting Terminal Shatter]


The Mirror Mimic went down instantly... and so did all the other mirror-themed things in the area. On top of that, it spawned the "Discarded trinkets" object, basically stuff dropped by the NPCs that now formed an interactable pile of junk. It had no effect at all on the NPC or monster that are still in the screenshot though; they aren't made of mirrors or glass or anything. I had no idea that any of that would happen, I thought it was just going to hit the monster. Spells and abilities and such aren't things that are hard-coded into the game... they have energy costs and a base "damage" value (though that value could mean different things depending on what it does), but other than that, the AI comes up with all of them. This goes for items, including name and description, and any special abilities those items may give when equipped, and any character stats they may give. You can use any item on anything, so there's a lot of possibilities there.
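Purely guessing at the plumbing here (I have no idea how AI Roguelite actually does it), but a game could get this kind of open-endedness by asking the model for a structured definition and parsing it into ordinary stats. A hypothetical sketch, again using the pre-1.0 openai package with a placeholder model name and prompt:

```python
# Speculative sketch of AI-generated game content: ask the model for a spell
# definition as JSON, then parse it into plain game mechanics. Assumes
# OPENAI_API_KEY is set; the prompt and field names are my own invention.
import json
import openai

prompt = (
    "Invent one fantasy spell as JSON with exactly these keys: "
    '"name", "description", "energy_cost" (int), "base_damage" (int). '
    "Reply with JSON only."
)

reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

spell = json.loads(reply.choices[0].message.content)
print(f"{spell['name']} costs {spell['energy_cost']} energy "
      f"for {spell['base_damage']} base damage")
```

The mechanical bits (costs, damage numbers) stay ordinary game code; the AI only fills in the blanks, which would be roughly why any named thing can exist.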

This goes for quests and story stuff too. In that other screenshot I'm talking to an NPC about a thing I'm trying to deliver to him, this box thing I'd gotten from a different character who had given me a quest to deliver it to an NPC in a certain type of location (which that location matched). I used an ability to make myself more persuasive/believable, like some sort of spell you'd see in D&D to convince NPCs to do things for you, and then used the box on the guy while pretending like I'm some sort of paid delivery person. He still wouldn't immediately take it, so that screenshot is me trying to talk him into taking it (and it's true, I had no idea what was in the box, I just needed him to take it). In the end he took it and the quest finished.

Honestly the game's only real issue as I see it is that it really is so intricately linked to the AI, and not everyone will be able to pay for the connection with ChatGPT. The weaker the AI you are using (there's a bunch of options, with ChatGPT being the highest), the less coherent and stable the whole thing gets. If you're using the weakest option... generally the one that can just run on your GPU... it'll be completely insane, and all this cool super-dynamic stuff just isn't going to work right because the AI simply won't be anywhere near smart enough to handle it.

Well the other problem is that it's still just early access. Gotta be able to be patient about glitches and wobbly incomplete features if you're going to play any early access game.

Also, I kind of love the fact that it looks like a 90s dungeon crawler in its graphics and UI so far, I almost think they should keep all of that intact because it feels very much on the cusp of something huge, almost on a scale where none of the players are really going to care about the graphics anyway (sort of like Rimworld).

That's one of my favorite parts: that IS what they're going with. I'm sure the UI might get some changes here and there for better functionality (like right now the inventory screen has a lot of issues, that definitely needs some changes) but the aesthetics of the whole thing? Absolutely already there. Particularly since the UI is designed to show all the AI art as you go; every item/character/object/location/anything gets its own generated image. Though again, if your machine can't handle Stable Diffusion on high settings, the resulting art sure ain't gonna look so good. Mine can take it, so I get the full effect. Some of the things it comes up with are amazing, really.

Oh, and I'm assuming that it can just access the internet in real-time as opposed to giving that stock answer of, "My cutoff date is sometime in 2021"? I genuinely had no idea that was even a possibility, either!

Yep. It's just that ChatGPT can't do that without plugins active (and then you have to have actually chosen at least one plugin that involves that function). Which is odd to me; other AIs can do that, particularly Bing (which is all about that).

I'm actually using the Edge browser now, I switched to that to give it a try (I actually love it, it turns out), it has this sidebar you can open to access Bing, which has been pretty great. I can just click that and ask things when needed. Good thing about Bing is that it will hand you reference links to any pages that it might have gotten info from, so you can check for accuracy or get more info that way if you want. I tell ya, it's great when I'm playing Minecraft or Terraria or some game like those where normally I'd be making a million trips to the stupid wikis, now I just ask a question directly there and it tells me what I need to know, since it can just go over the wiki itself.
 
