Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.
Other people have some really good responses in here.
I’m going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.
Nobody talks about how it highlights the successes of capitalism either.
I live in SEA, and AI is incredibly powerful here, giving anyone the opportunity to learn. The net positive of this is incredible even if you think that copyright is good and intellectual property needs government protection. It’s just that lopsided of an argument.
I think Western social media is spoiled and angry at the wrong thing, but fighting these people is entirely pointless because you can’t reason someone out of a position they didn’t reason themselves into. Big tech == bad, blah blah blah.
You don’t need AI for people to learn. I’m not sure what’s left of your point without that assertion.
You’re showing your ignorance if you think the whole world has access to a fitting education. And I say fitting because there’s a huge difference between learning from books made for Americans and AI-tailored experiences made just for you. The difference is insane, and anyone who doesn’t understand that should really go out more. I’ll leave it at that.
Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I’m not even kidding: as an educator, the LLM is the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.
You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine. It is not credible. Maybe if you’re just using it for translation into your native language? I’m not sure if it’s good at that.
If you have access to the internet, there are many resources available that are more credible. Many of them free.
> You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine
You trust tons of other uncertain, probability-based systems, though. Like the weather forecast: we all trust that, even though it ‘guesses’ the future weather with some other math.
That’s really not the same thing at all.
For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don’t need a guessing machine to tell us that.
For things that require a definite, correct answer, an LLM just isn’t the best tool. However, if the task is something with many correct answers, or no correct answer, like writing computer code (if it’s rigorously checked, it’s actually not that bad; see the sketch below) or analyzing vast amounts of text quickly, then you could make the argument that it’s the right tool for the job.
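A toy illustration of that “rigorously checked” caveat, in Python. The slugify function here is a hypothetical stand-in for whatever code the model produced; the point is that you pin its behavior down with your own tests instead of taking its word for it.

```python
import unittest

# Hypothetical stand-in for LLM-suggested code; the tests are the point.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```

If the tests fail, the model guessed wrong and you caught it; if they pass, you trusted the tests, not the machine.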
Our current ‘AI’ is not AI. It is not.
It is a corporate tool for shirking labor costs and lying to the public.
It is an algorithm designed to lie and the shills who made it are soulless liars, too.
It only exists for corporations and people to cut corners and think they did it right because of the lies.
And again, it is NOT artificial intelligence by the standard I hold it to.
And it pisses me off to no fucking end.
I personally would love an AI personal assistant that wasn’t tied to a corporation listening to every fkin thing I say or do. I would absolutely love it.
I’m a huge sci-fi fan, so sure, I fear it to a degree. But if I’m being honest, AI would be amazing if it could analyze how I learned math wrong as a kid and provide ways to fix it. It would be amazing if it could help me routinely create schedules for exercise and food, and grocery lists with steps to cook, and show how all of those combine to affect my body. It would be fantastic if it could point me to novels and have a critical debate about their inner workings, with a setting for whether to play the contrarian or not, so I can seek to deeply understand them.
That sounds like what the current state of AI offers, right? No. The current state is a lying machine. It cannot have critical thought. Sure, it can give me a food/exercise schedule, but it might tell me I need to lift 400 lbs and eat a thousand turkeys to meet a goal of being 0.02 grams heavy. It might tell me 5+7 equals 547,032.
It doesn’t know what the fuck it’s talking about!
Like, ultimately, I want a machine friend who pushes me to better myself and helps me understand my own shortcomings.
I don’t want a lying brick bullshit machine that gives me all the answers, all of them wrong, because it’s just a guesswork framework built on ‘what’s the next best word?’
Edit: and don’t even get me fucking started on the shady practices of stealing art. Those bastards trained it on people’s hard work and are selling it as their own. And it can’t even do it right, yet people are still buying it and using it at every turn. I don’t want to see another shitty doodle with 8 fingers and overly contrasted bullshit in an ad or in a video game. I don’t want to ever hear that fucking computer voice on YouTube again. I stopped using shortform videos because of how fucking annoying that voice is. It’s low effort nonsense and infuriates the hell out of me.
Destroy capitalism. That’s the issue here. All AI fears stem from that.
(Ignoring all the stolen work to train the models for a minute)
It’s got its uses and potential: things like translation, writing prompts, or research.
But so many products force it into places that clearly do not need it, to solve problems that could be handled by two or three steps of logic.
The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.
Lastly, the AI relationships are not good.
The most popular models used online need to include citations for everything. They can be used to automate some white-collar/knowledge work, but need to be scrutinized heavily by independent thinkers when used to try to predict trends and future events.
As always, schools need to get better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do; AI shows that rote subject matter is a dated way to learn.
When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)
Overall I see AI as the next step in search engine synthesis, info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.
Most importantly, I wish countries would start giving a damn about the extreme power consumption caused by AI and regulate the hell out of it. Why do we need to lower our monitors’ refresh rates while a ton of energy is burned by useless AI agents we should be getting rid of?
Ban it until the hard problem of consciousness is solved.
Shutting these “AI”s down. The ones out for the public don’t help anyone. They do more damage than they are worth.
Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.
Unfortunately, right now the public world itself is serving as AI’s biggest research lab.
I feel like the only thing that the world universally bans is nuclear weapons. AI would have to become so dangerous that the world decides to leave it in the lab, but you can easily make an LLM at home. You can’t just make nuclear power in your room.
How do you get your wish?
If I knew how to grant my wish, it’d be less of a wish and more of a quest. Sadly, I don’t think there’s a way to give patience to the world.
Yeah I don’t think our society is in a position mentally to have patience. We’ve trained our brains to demand a fast-paced variety of gratification at all costs.
We were already wired for it, but we didn’t have access to the things we have now. It takes a lot of wealth to ride the hedonic treadmill, but our societies have reached a baseline wealth where it has become much more achievable to ride it almost all the time.
I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That’s what I want.
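For anyone who hasn’t met the term, here’s a toy sketch of what a literal Markov chain text generator does. (Illustrative only; real LLMs are neural networks conditioning on far more context, but the ‘pick the next word from what came before’ framing is the jab being made.)

```python
import random
from collections import defaultdict

# Build a first-order Markov chain: for each word, record which words
# follow it in a (tiny, made-up) corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Repeatedly guess 'the next best word' by sampling the chain."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug" (varies per run)
```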
People haven’t ”thought for themselves” since the printing press was invented. You gotta be more specific than that.
Ah, yes, the 15th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.
Independent thought? All relevant thought is highly dependent on other people and their thoughts.
That’s exactly why I bring this up. Having systems that teach people to think in a similar way enable us to build complex stuff and have a modern society.
That’s why it’s really weird to hear this ”people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you ”should do your own research”.
Surely there are better reasons to oppose AI?
The usage of “independent thought” has never meant “independent of all outside influence”; it has simply meant going through the process of reasoning, thinking through a chain of logic, instead of accepting and regurgitating the conclusions of others without any of one’s own reasoning. It’s similar to the lay meaning of being an independent adult: we all rely on others in some way, but an independent adult can usually accomplish activities of daily living through their own actions.
Yeah but that’s not what we are expecting people to do.
In our extremely complicated world, most thinking relies on trusting sources. You can’t independently study and derive most things.
Otherwise everybody should do their own research about vaccines. But the reasonable thing is to trust a lot of other, more knowledgeable people.
My comment doesn’t suggest people have to run their own research study or develop their own treatise on every topic. It suggests people make a conscious choice, preferably with reasonable judgment, about which sources to trust, and develop a lay understanding of the argument or conclusion they’re repeating. Otherwise you end up with people on the left and right reflexively saying “communism bad” or “capitalism bad” because their social media environment repeats it a lot, but they’d be hard pressed to give even a loosely representative definition of either.
This has very little to do with the criticism given by the first commenter. And you can use AI and still do this; they are not in any way exclusive.