I'm not anti-AI, and I think that's the way systems are going to go simply because of cost. But I still think there's value in knowing what underlying data it based its decisions on, and there's none of that. That's why books always had fucking footnotes and bibliographies, so you could see where they were getting their data from. The guy is consistent in his beliefs; I just thought this video was a good example because I generally see it the same way, at least as the overall trend. I'm just not a fan of cloud stuff, I'd rather have everything local and independent, with cloud as backup, though that keeps getting riskier.
But I've definitely shifted more toward talking to AI about stuff than reading up on it, especially for science topics. At least for the initial reference. If it's important enough for me to check, I can always follow the links to the original sources.
That's true of everything. You can read a book that has wrong information, you may remember something incorrectly, or you may misunderstand something and end up believing something false.
Actually, AI has an advantage here. Because it siphons up so much information, it can either a) iron out or dismiss anomalies, or b) spot discrepancies and tell you about them.
I do like that when I'm talking to an LLM, it often offers me multiple perspectives, not just a single one as humans tend to do.
But training an AI is essentially akin to lossy information compression. It's inherently not guaranteed to be accurate. I don't hold that against it, though. If I need to store information accurately, that's what old-fashioned storage systems are for.
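To make the "lossy compression" point concrete, here's a toy sketch (entirely my own illustration, nothing to do with how any actual model stores data): once detail has been thrown away, no amount of "decompression" brings it back.

```python
# Toy lossy "compression" by rounding: keep the gist, discard the precision.
data = [3.14159, 2.71828, 1.41421]

# "Compress": store only one decimal place (information is destroyed here).
compressed = [round(x, 1) for x in data]

# "Decompress": the rounded value is all we can ever recover.
recovered = compressed

print(recovered)          # the gist survives: [3.1, 2.7, 1.4]
print(recovered == data)  # False: the original precision is gone for good
```

That's the whole point: an LLM can usually give you back something in the right ballpark, but not the exact bytes you put in.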
Although I wonder how that's going to develop. Will we even need to store pictures in image formats if AI can recreate them from memory? And is that much different in principle from using all the image tools and filters to make photos nicer?
How about video? That takes a ton of data. But if you have locally stored visual information about the actors and all the props, maybe all we need is some simple descriptions of how things move about, and we stream/store that.
Well, so are humans. I'm not really sure why people expect AI to be 'perfect'; I think it's pretty cool if it can decrease the human error rate, and that keeps improving. See self-driving cars. They'll never be perfect, but nothing is.
Yea, taking output from an AI at face value is always risky. But that's always been the case. In recent years and decades, it seems we've been more and more conditioned to just take anything in without thinking about it: news, religion, science, political speech; everything is presented as fact not to be questioned. Then drop the authoritative-sounding AI into the mix. No wonder, then, that people don't want to think about its outputs.
And AI has offered none of that to date, just a summary. Fuck that.