• Registration is now open. It usually takes me a couple of hours or less to validate accouts. If you are coming over from VoD or DR and use the same user name I'll give you Adult access automatically. Anybody else I will contact by DM about Adult access.

AIEEEE! OK sometimes we are a debate club. Ye olde AI discussion thread

AI is

  • Going to be the death of humanity

    Votes: 1 16.7%
  • Or at least the death of our current economic system

    Votes: 3 50.0%
  • The dawn of the age of Superabundance

    Votes: 0 0.0%
  • Stop The World, I Want to Get Off

    Votes: 2 33.3%

  • Total voters
    6
Man, the UK is fucked! I can't believe that they are able to get away with half the crap they are doing.
 
This dovetails nicely with making DRAM and NVMe drives so expensive people are priced out of desktops and laptops and have to go with cloud connected devices. Not that I'm conspiracy theory minded or anything...

 
Not to hijack Nyghtfall's thread again, regarding how models talk to/about each other.

A while ago I saw a thread on Reddit where people asked various models to visualise other popular models as characters, those results were quite hilarious. Models are quite aware about how people think about them.

(Not my images)

c2bwegbfe59g1.jpgollc5rp0g59g1.jpeg
 
Not to hijack Nyghtfall's thread again, regarding how models talk to/about each other.

A while ago I saw a thread on Reddit where people asked various models to visualise other popular models as characters, those results were quite hilarious. Models are quite aware about how people think about them.

(Not my images)

View attachment 9598View attachment 9599
Yep, it's good to have a catch all thread for this. I find it interesting the different AI models have different personalities. It's almost as if the companies were building the databases from different sets of data to enforce certain types of thinking. Nah, that couldn't be happening! :rolleyes:
 
This dovetails nicely with making DRAM and NVMe drives so expensive people are priced out of desktops and laptops and have to go with cloud connected devices. Not that I'm conspiracy theory minded or anything...

I just watched that video yesterday, to funny you posted it
I just got 32 gig DDR 5 for $400, so not to worried yet, but the GPU issues have me more worried
 
Not to hijack Nyghtfall's thread again, regarding how models talk to/about each other.

A while ago I saw a thread on Reddit where people asked various models to visualise other popular models as characters, those results were quite hilarious. Models are quite aware about how people think about them.

(Not my images)

View attachment 9598View attachment 9599

Here is the AI video i was talking about in the other thread. He also has other videos with AI discussing other topics, scary stuff

 
Yep, it's good to have a catch all thread for this. I find it interesting the different AI models have different personalities. It's almost as if the companies were building the databases from different sets of data to enforce certain types of thinking. Nah, that couldn't be happening! :rolleyes:
This is more about post-training tuning. All the big models have access to pretty much all the human data that is available anywhere, but it's the humans that then go in and shape how the model should behave. Smaller changes can be just from a system prompt. It's funny to watch how much can a model's personality change just based on a few instructions. You can have the same model talk like a paranoid caveman, a schoolboy or a dominatrix, and have totally different priorities based on that. And even those are based just on human literature.

This is the actual issue with AI running things. It has to be programmed/trained by someone. If AI denies you insurance coverage, it's just following its instructions. The current generation of AIs is strictly deterministic at its core, with some randomness tacked on, but essentially it's an algorithm. You feed it inputs and instructions, it gives you outputs. It's the humans that make the underlying decisions about goals.

When people say AI is effective, what they actually mean, is it's accurate with what they want to do. But if it's something unpleasant, it gives them enough deniability and distance. That's the issue. We could use AI to help create an utopian world, the question is, if it's in the interest of the people running the show.
 
This is more about post-training tuning. All the big models have access to pretty much all the human data that is available anywhere, but it's the humans that then go in and shape how the model should behave. Smaller changes can be just from a system prompt. It's funny to watch how much can a model's personality change just based on a few instructions. You can have the same model talk like a paranoid caveman, a schoolboy or a dominatrix, and have totally different priorities based on that. And even those are based just on human literature.

This is the actual issue with AI running things. It has to be programmed/trained by someone. If AI denies you insurance coverage, it's just following its instructions. The current generation of AIs is strictly deterministic at its core, with some randomness tacked on, but essentially it's an algorithm. You feed it inputs and instructions, it gives you outputs. It's the humans that make the underlying decisions about goals.

When people say AI is effective, what they actually mean, is it's accurate with what they want to do. But if it's something unpleasant, it gives them enough deniability and distance. That's the issue. We could use AI to help create an utopian world, the question is, if it's in the interest of the people running the show.
Agreed. Just look at the google Gemini issue from a year ago where it was programmed to give certain results based off Googles WOKE agenda and not user input.
 
Speaking of which, it's really funny how for LLMs, language is code, instruction format and data all at once. That's why you could give the old models those nonsensical riddles like 'if bzbz is grgr, and zkzk is flfl, what is rtrt?' and they would try to answer. You can train AI to have certain preferences or avoidances, but at the base, AI/LLM doesn't really care what it's saying or generating or doing, it's just following its programming and learned patterns. It's mostly from the added randomness, which is also counted upon in training, that we get something useful at the end. I mean, diffusion image generators literally start from noise.

Whether it's the same or different with humans, is up for debate of course. But the baseline is that it's a program that needs instructions to function, and so what it does, depends on how it's programmed, trained or instructed.

You can take the most advanced LLM today, plug it into a suicide drone, tell it to behave like an effective suicide drone, and that's what it will do. Again, whether it's the same or not with humans, that's a question.
 
LMAO, wouldn't surprise me in the least. I view AI as the psycho ex girlfriend that was batshit crazy and made no sense whatsoever, but was hot and let me use her!
 
You know this guy? He puts AI into robots and stuff and instructs them to act like supervillains. It's played up for drama, but there's something to it.


But what I also find interesting, is how those kids in the interview praise AI for being responsive and friendly. This is framed as a risk of manipulation and dependency on AI, which I suppose is valid, but in my view that says just as much about how hostile of a world this is, that humans don't understand each other, and we don't have time for kids so they are more likely to seek companionship online or in AIs. And that goes for adults too. Instead of reflecting on this and trying to improve things, we say AI is bad for showing us this mirror. Humanity truly has a crazy ability to not see its own problems, and blame everything else instead.
 
Did you ever notice that in all the Science Fiction written over multiple decades long before any of this became possible there is not one story where this type of thing turns out well? ;)
Probably because the ones that turned out well, were boring and predictable. I would rather see a story where the robots malfunction and go on a rampage murdering and raping human civilians instead if them coexisting and doing all the chores around the house for their human overlords, LOL
 
Probably because the ones that turned out well, were boring and predictable. I would rather see a story where the robots malfunction and go on a rampage murdering and raping human civilians instead if them coexisting and doing all the chores around the house for their human overlords, LOL
Well there has to be strife either way or the story isn't worth reading, that's true. But the number of Dystopias involving AI and Robots far exceeds the number of Utopias. On the plus side was Isaac Asimov with his Robot Series involving the three laws of Robotics and all the quirks thereof and Data of Star Trek fame. On the minus side was pretty much everybody else :LOL: (I'm exaggerating quite a bit of course)

What it comes down to though in most of them is if AI screwed things up it's because that's what humans programmed it to do. Which is likely exactly how reality is going to go now that it's here.
 
Well there has to be strife either way or the story isn't worth reading, that's true. But the number of Dystopias involving AI and Robots far exceeds the number of Utopias. On the plus side was Isaac Asimov with his Robot Series involving the three laws of Robotics and all the quirks thereof and Data of Star Trek fame. On the minus side was pretty much everybody else :LOL: (I'm exaggerating quite a bit of course)

What it comes down to though in most of them is if AI screwed things up it's because that's what humans programmed it to do. Which is likely exactly how reality is going to go now that it's here.
It's funny, i was thinking, if they came out with a realistic AI sex robot and I had one, I would have to find a way to restrain her (not turn her off, since that is not reliable) when i was sleeping because AI thinking is NOT predictable or follows human logic and the last thing i want is for her to be sitting there at night contemplating the things I was doing to her and then deciding she had enough and killing me in my sleep, LOL
 
Coming to the conversation a bit late. Pretty interesting seeing what people have said so far (though kind of expected). It seems most of you are coming from an artist perspective, which isn't surprising here. I've been a hobbyist programmer for decades, and did my own AI starting about 25 years ago. The current AI that everybody talks about is really just the LLMs. It's still just a sub-type of AI, and one of my least favorite. It's ok, it works pretty well with all the resources they've already invested in it, but that's also the main problem with it. LLMs are extremely resource-intensive. They'll probably start dying out quite quickly in ~5 years, I'd say. As soon as another AI base gets anywhere close to the usefulness of the current LLMs, but only takes a single CPU/personal computer to run the whole thing rather than an entire warehouse full of GPUs, then there's really little point in keeping the LLMs. Though don't worry too much, one of the few remaining uses would be the easy video/art editing.

So yeah, from a programmer's perspective, LLMs are just an interesting fad. They certainly have gone crazy popular throughout the economy, and have a large portion of the population who now knows "AI" because of it (though still laughable how many people think LLMs are all that defines AI). I feel sorry for any of the current college students, because pretty much all colleges are teaching is higher level LLM stuff, and the most basic they get is neural networks, which are still just a subset of AI. They're going to be pretty lost when another type comes out making a lot of their skills fairly irrelevant. Oh, and that AI bubble? Well, AI almost definitely won't go away, just LLMs. But those billions they're spending on datacenters all over the place? 90% or more of those will have no commercial value. There's nothing particularly useful to use all that computation for (well, other than blockchain stuff, but that's another matter and still a waste). So yeah, if they come up with any comparable single-computer sized AI within 5 years, that bubble is going to be painful. And it's really not that hard... hell, I could do that within 5 years and I'm just a single decently good programmer who doesn't do it as a full-time job.
 
Back
Top