
AIEEEE! OK sometimes we are a debate club. Ye olde AI discussion thread

AI is

  • Going to be the death of humanity: 1 vote (16.7%)
  • Or at least the death of our current economic system: 3 votes (50.0%)
  • The dawn of the age of Superabundance: 0 votes (0.0%)
  • Stop The World, I Want to Get Off: 2 votes (33.3%)

Total voters: 6
This dovetails nicely with making DRAM and NVMe drives so expensive that people are priced out of desktops and laptops and have to go with cloud-connected devices. Not that I'm conspiracy-theory minded or anything...

 
Not to hijack Nyghtfall's thread again, regarding how models talk to/about each other.

A while ago I saw a thread on Reddit where people asked various models to visualise other popular models as characters; the results were quite hilarious. Models are quite aware of how people think about them.

(Not my images)

[Attached images: AI models visualised as characters]
 
Yep, it's good to have a catch all thread for this. I find it interesting the different AI models have different personalities. It's almost as if the companies were building the databases from different sets of data to enforce certain types of thinking. Nah, that couldn't be happening! :rolleyes:
 
I just watched that video yesterday, too funny you posted it.
I just got 32 GB of DDR5 for $400, so I'm not too worried yet, but the GPU issues have me more worried.
 

Here is the AI video I was talking about in the other thread. He also has other videos with AIs discussing other topics. Scary stuff.

 
This is more about post-training tuning. All the big models have access to pretty much all the human data available anywhere, but it's humans who then go in and shape how the model should behave. Smaller changes can come just from a system prompt. It's funny to watch how much a model's personality can change based on a few instructions. You can have the same model talk like a paranoid caveman, a schoolboy, or a dominatrix, and have totally different priorities based on that. And even those are based just on human literature.

This is the actual issue with AI running things. It has to be programmed/trained by someone. If AI denies you insurance coverage, it's just following its instructions. The current generation of AIs is strictly deterministic at its core, with some randomness tacked on, but essentially it's an algorithm. You feed it inputs and instructions, it gives you outputs. It's the humans that make the underlying decisions about goals.

When people say AI is effective, what they actually mean is that it's accurate at doing what they want. But if it's something unpleasant, it gives them enough deniability and distance. That's the issue. We could use AI to help create a utopian world; the question is whether that's in the interest of the people running the show.
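Side note on the system-prompt point: here's a minimal sketch of how a chat request is typically put together, assuming the common OpenAI-style message schema. The persona strings and the `build_messages` helper are made up for illustration; the point is that only one string changes between the two requests.

```python
def build_messages(system_prompt, user_text):
    """Same model, same question -- only the system instruction differs."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

question = "Should I buy more RAM?"

# Two hypothetical personas for the *same* underlying model.
personas = {
    "caveman": "You are a paranoid caveman. Distrust all technology.",
    "schoolboy": "You are an eager schoolboy. Answer cheerfully.",
}

requests = {name: build_messages(p, question) for name, p in personas.items()}

# The user turn is identical in both requests; only messages[0] differs,
# yet that one string is enough to flip the model's whole personality.
assert requests["caveman"][1] == requests["schoolboy"][1]
```

Everything downstream (tone, priorities, refusals) can hinge on that single system string, which is why a few instructions go such a long way.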
 
Agreed. Just look at the Google Gemini issue from a year ago, where it was programmed to give certain results based on Google's WOKE agenda and not user input.
 
Speaking of which, it's really funny how, for LLMs, language is code, instruction format, and data all at once. That's why you could give the old models nonsensical riddles like 'if bzbz is grgr, and zkzk is flfl, what is rtrt?' and they would try to answer. You can train an AI to have certain preferences or avoidances, but at base, an LLM doesn't really care what it's saying or generating or doing; it's just following its programming and learned patterns. It's mostly from the added randomness, which training also counts on, that we get something useful at the end. I mean, diffusion image generators literally start from noise.

Whether it's the same or different with humans is up for debate, of course. But the baseline is that it's a program that needs instructions to function, so what it does depends on how it's programmed, trained, or instructed.

You can take the most advanced LLM today, plug it into a suicide drone, tell it to behave like an effective suicide drone, and that's what it will do. Again, whether it's the same with humans is an open question.
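The "deterministic core plus added randomness" idea can be sketched with toy next-token sampling. The logits below are made up, not from any real model; greedy decoding is fully deterministic, and the "randomness" is just dice-rolling layered on top of the same fixed scores.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Higher temperature flattens it; lower temperature sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model output": fixed scores for four candidate next tokens.
logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5, "banana": -1.0}

# Greedy decoding: no randomness, same input -> same output, every time.
greedy = max(logits, key=logits.get)  # always "yes" for these scores

# Sampled decoding: same deterministic scores, plus a weighted coin flip.
tokens = list(logits)
probs = softmax(list(logits.values()), temperature=0.8)
rng = random.Random(42)  # seeding even makes the "randomness" repeatable
sampled = rng.choices(tokens, weights=probs, k=1)[0]
```

Run it twice with the same seed and you get the same "random" pick both times, which is the sense in which the whole pipeline is an algorithm with some noise bolted on.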
 
LMAO, wouldn't surprise me in the least. I view AI as the psycho ex girlfriend that was batshit crazy and made no sense whatsoever, but was hot and let me use her!
 