• Welcome to Autism Forums, a friendly forum to discuss Aspergers Syndrome, Autism, High Functioning Autism and related conditions.

Artificial Intelligence

Slime_Punk

Lord of the slimes
V.I.P Member
That's one of my favorite parts: that IS what they're going with. I'm sure the UI will get some changes here and there for better functionality (right now the inventory screen has a lot of issues and definitely needs work), but the aesthetics of the whole thing? Absolutely already there. Particularly since the UI is designed to show all the AI art as you go: every item/character/object/location/anything gets its own generated image. Though again, if your machine can't handle Stable Diffusion on high settings, the resulting art sure ain't gonna look so good. Mine can take it, so I get the full effect. Some of the things it comes up with are amazing, really.

Oh, that makes sense! I remember trying SD with just a basic frontend that somebody made, and 500x500 images were taking like >5 minutes to generate! That made me realize pretty much immediately why most services aren't free, and why free tiers of things like Leonardo AI are so limited!

Honestly the game's only real issue as I see it is that it really is so intricately linked to the AI, and not everyone will be able to pay for the connection with ChatGPT. The weaker the AI you're using (there's a bunch of options, with ChatGPT being the highest), the less coherent and stable the whole thing gets. If you're using the weakest option... generally the one that can just run on your GPU... it'll be completely insane, and all this cool super-dynamic stuff just isn't going to work right because the AI simply won't be anywhere near smart enough to handle it.

Well the other problem is that it's still just early access. Gotta be able to be patient about glitches and wobbly incomplete features if you're going to play any early access game.

In that case I might have to watch some videos of it for now, but that's good to know! I probably would've preemptively pulled the trigger on it at some point and gotten super bummed out that things were either lackluster for me and/or much better for others, without really putting it all together!

I tell ya, it's great when I'm playing Minecraft or Terraria or some game like those where normally I'd be making a million trips to the stupid wikis, now I just ask a question directly there and it tells me what I need to know, since it can just go over the wiki itself.

I've always had issues with this, too! In one of them (Junk Jack, a Terraria knockoff) you actually get penalized for using quick crafting, and get a few extra inventory slots if you just look up the recipes... but browsing through pages gets annoying and it would be so much easier to just have it all on command like that!

You know what would be a potential solution to slow SD loading times? Something that looks a little more like Caves of Qud / Dwarf Fortress, but with the guts of AI Roguelite. I feel like they could maybe scale it down even further for those of us who are working with burnt potatoes :D
 

Forest Cat

Well-Known Member
V.I.P Member
I received an interesting mail/newsletter from Protonmail that said this about AI:

"Is there any way to use ChatGPT and other large language models (LLMs) in a way that respects privacy?

Strictly speaking, the answer is no. LLMs are trained on hundreds of gigabytes of text despite never obtaining permission to use that data for that purpose. Therefore privacy violations are baked into the system. On top of that, everything you say to the LLM is just adding to the pile of data it learns from and recombines later. This is the reason multiple companies, from Apple to Bank of America, have restricted their employees’ use of ChatGPT.

If you still want to use ChatGPT, the best advice is to avoid sharing any personal data whatsoever, including when creating your account. Ultimately, privacy and AI don’t mix, and that’s according to ChatGPT itself. Here’s what ChatGPT said when we asked how to stay private on the platform: “Using large language models privately can be challenging due to the computational resources required and the centralized nature of the models”."
 

Misery

Amalga Heart
V.I.P Member

Huh. The SD generation time thing is interesting. They take a few seconds on mine. All of those images generate at 512x512 with 40 iterations, and every single item, NPC, monster, object, location, ability, and basically anything that is conceivably interactable gets one (so jumping into a new area means it's going to generate at least 6-8 new things at once, and doing something like interacting with a vendor might generate like 15 of them). But they take a few seconds each for me. And yeah, I know they don't all display at that size, but if you hover over anything you can see the image in question fully zoomed in, already generated. There are different presets you can use in the config menu, but that's how mine is. That's all local generation, mind you; it's not pulling them from elsewhere.

The good thing is that even if it ain't generating at light speed like that, the different objects can be fully interacted with even if they don't have an image yet, and you can see which one is currently being built up. So, no need to wait. But I don't have to wait on mine anyway. This machine was set up mainly to render fractals, so it bloody well better be able to handle quick SD generations.

Though, waiting FIVE FREAKING MINUTES for one of these SD images... honestly I've gotten so used to the hyper-quick generation that I'd forgotten how slow it COULD be for others.
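The gap between "a few seconds" and "five minutes" mostly comes down to sampler steps divided by the GPU's throughput in iterations per second. A toy back-of-envelope sketch (the it/s figures below are made-up illustrations, not benchmarks of any real card):

```python
# Rough estimate of Stable Diffusion generation time from sampler steps
# and GPU throughput. The it/s numbers used below are illustrative
# guesses, not benchmarks of any real hardware.

def estimate_seconds(steps: int, iters_per_second: float) -> float:
    """Approximate time for one image: sampler steps / throughput."""
    return steps / iters_per_second

# A strong GPU might manage ~10 it/s at 512x512; a weak one well under 1.
fast = estimate_seconds(40, 10.0)   # 4.0 seconds
slow = estimate_seconds(40, 0.12)   # roughly 5.5 minutes
print(f"fast GPU: {fast:.0f}s per image, slow GPU: {slow / 60:.1f} min per image")
```

Same 40 steps either way; only the throughput differs, which is why the identical game can feel instant on one machine and unplayable on another.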

Also you're right, going with something that looks like Qud or DF would absolutely fit.

Actually, have you ever seen a roguelike called Cogmind? That sort of look, that's what comes to mind when I think of a game that's all about doing loopy things with AI.
 

Misery

Amalga Heart
V.I.P Member

Oh yeah, definitely.

The way I always think about it is this: if I wouldn't do/say it on a public forum like this one (or Reddit or wherever), I shouldn't do/say it when dealing with any of the AIs.

Always be careful on the internet, folks. AI, forum, or otherwise.
 

MNAus

Well-Known Member
That newsletter advice is a good rule of thumb, but not strictly true with respect to your own data. IIRC OpenAI doesn't use your prompts for training when you use the API, and there are options to restrict this in the ChatGPT front-end. That said, posting confidential data is likely to be a breach of privacy agreements anyway, since you are, technically, placing the data on a third-party server even if it's never actually viewed by anyone else. We're also seeing most vendors come up with products that preserve privacy, though again those are aimed at corporate customers. Finally, you could always run your own LLM on your own machine. Not as ridiculous as it sounds.
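Part of the "avoid sharing personal data" advice can even be automated before a prompt leaves your machine. A toy sketch of a pre-send scrubber; the two regexes are deliberately simplistic illustrations, nowhere near a complete PII filter:

```python
import re

# Toy pre-filter: redact obvious personal identifiers from a prompt
# before it is sent to any hosted LLM. The patterns are deliberately
# simple illustrations -- real PII scrubbing needs far more care.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # loose phone-number-ish digit runs

def scrub(prompt: str) -> str:
    """Replace email addresses and phone-like digit runs with placeholders."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(scrub("Contact jane.doe@example.com or +1 (555) 123-4567 about the report."))
# -> Contact [EMAIL] or [PHONE] about the report.
```

A filter like this only catches the obvious stuff, of course; names, addresses, and context clues sail right through, which is why "don't type it in the first place" remains the better rule.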

On the subject of using other people's data to train: this is a very tricky area, with lots of angles. I guess a starting point is the old adage that if you're not paying, you're the product; that's been the case for a LONG time with social media. On the broader topic, I think this is happening at the wrong level of discussion. Though there are instances of an individual's work being aped, which raises real questions, that is better handled on the output side: e.g. if it's producing something with a Mickey Mouse logo, it is likely infringing on Disney's copyright. But inputs being used to learn from? That's something that's ALWAYS happened; bands even list their influences. So I think the question should be "what's different here?", and the resulting discussions are much more profound than just who gets to use which data for what payment. There's a lot here around what happens when the baton is handed to AI: some of it around HOW that happens (as in, is it OK for a handful of individuals to profit from the summation of humanity's history?) and some around the event itself (how do we feel about the idea that the work you influence is produced by a machine?).

It's a fascinating topic.
 
