• Registration is now open. It usually takes me a couple of hours or less to validate accounts. If you are coming over from VoD or DR and use the same user name, I'll give you Adult access automatically. Anybody else I will contact by DM about Adult access. NOTE: I do have spam-account screening on, but some spam accounts do get through, and I check all of them manually before giving them access. If you create an account where the user name is a series of random letters, the email address is another series of random letters and numbers at gmail, and the IP you are creating the account from is a VPN address known for spam, it is going to be rejected without apology.

AI Datacenters Prompting Me to Reconsider Their Value to Me

Teacher: "MC, why have you drawn the model's breasts larger than they really are?"
MC: "Uh... I'm blending caricature with realism... Yeah..."
At 13 I had no idea what a fake breast, or even any breast size, should be, LOL. Thinking back on it, she was around a C cup. At 15 I discovered my uncle's porn mags, and that is where I discovered fake titties, LOL
 
They stuck a 13 year old male in an art class with a nude female model???

Bet that got a rise out of mister happy, did you prop your drawing pad on your lap for the duration? :ROFLMAO:
My Mom signed off on it, LOL. I bet she had no idea there would be nude models. To be fair, the class was once a week, and we did have a couple of guys come in, as well as other subjects besides human anatomy
 
They stuck a 13 year old male in an art class with a nude female model???
What's wrong with that? Clothes don't grow naturally on people like hair.

Anyway, the topic has drifted, but I want to add something about the original theme. Local AI is getting stupidly good. People use large Chinese open-source models as pretty much 1:1 replacements for the heavy online stuff, though that still requires beefy hardware.

Meanwhile, tiny models around 3-4B parameters are just ridiculous now. I've been using Gemma 3n E4B quite a lot lately, and in terms of intelligence, coherence, and understanding, it's pretty much on par with the top online stuff from maybe a year and a half ago. And it runs on pretty much anything. The lightweight E2B (half the size) can run on a decent phone. And it looks like we're on the verge of 1-bit models, which should be about 3-5x more efficient.
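To put those size claims in perspective, here's a rough back-of-the-envelope sketch (my own arithmetic, not from the thread) of how much memory just the weights of a ~4B-parameter model need at different quantization levels. Activations and the KV cache for context come on top of this:

```python
# Approximate weight-only memory footprint of a ~4B-parameter model
# at different quantization levels (context/KV cache needs extra RAM).

def weight_memory_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory for the raw weights, in GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4), ("ternary ~1.58-bit", 1.58)]:
    print(f"{label:>18}: {weight_memory_gib(4, bits):.1f} GiB")
```

At 4-bit the weights drop under 2 GiB, which is why a 4B model fits on ordinary hardware, and ternary/1-bit schemes would roughly halve that again, putting a model this size comfortably in phone territory.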

Image generators aren't progressing quite as fast, but stuff like Z-Image Turbo and SDXL can run on 6GB VRAM, or apparently even on 4GB with some optimising. Maybe not exactly Gemini, but not bad.
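For a sense of why ~6GB can work, here's another back-of-the-envelope sketch (my own approximate numbers, not from the thread): SDXL's UNet is roughly 2.6B parameters, with maybe another ~0.9B across the text encoders and VAE. Keeping everything resident in fp16 overshoots 6GB, but offloading the idle components leaves only the UNet on the GPU during sampling, which fits:

```python
# Approximate fp16 weight footprint of SDXL's components, showing why
# ~6GB VRAM works when idle components are offloaded to CPU.
# Parameter counts are approximate.

BYTES_PER_PARAM_FP16 = 2

components_billion = {
    "unet": 2.6,           # denoising network, resident during sampling
    "text_encoders": 0.8,  # only needed once per prompt
    "vae": 0.08,           # only needed to decode the final latent
}

def gib(params_billion: float) -> float:
    return params_billion * 1e9 * BYTES_PER_PARAM_FP16 / 2**30

total = sum(gib(p) for p in components_billion.values())
unet_only = gib(components_billion["unet"])
print(f"all components resident:        {total:.1f} GiB")
print(f"UNet alone (others offloaded):  {unet_only:.1f} GiB")
```

The exact headroom depends on activations, the sampler, and resolution, but the weight math alone explains why offloading (and further quantization, for the 4GB case) is what makes these cards viable.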

I hope someone will figure out how to squish diffusion in similar ways as the LLMs. Currently it's still too computation-heavy.

The future is dedicated hardware anyway. It's already been demoed on Llama models, which you can pretty much burn into a dedicated chip for a given model architecture, without the need for a multi-purpose GPU or whatever. There's also no real reason why it couldn't be flashable with newer versions, as long as the arch is compatible. I can imagine, say, a 120B model running directly from something like an SD card, costing not much more than the card itself plus some extra RAM for the context.

Once someone leans into this, the current monstrous GPUs and datacenters will look as archaic as a stone-age sailboat compared to a nuclear submarine. It's just that traditional chipmakers aren't super motivated to do it at this time, since they're showering in gold.

Btw Google is now starting to provide Gemini for offline use for large corpos. Yea, it's a crazily expensive mainframe, but it's Google. When have they done anything that's not cloud-based? Shows that cloud AI is really rather temporary, and most regular use will eventually go local.

Sorry for the interruption, now go back to... What were you talking about... *scrolls up* oh yea, fake boobs, of course. Carry on.
 