Generative AI is Completely Shameless. I Want to Be It

AI gets into a lot of trouble. It plagiarises other people’s work, repeating what it has read like a game of multidimensional Mad Libs without giving credit, which leads to widespread anger and lawsuits. When it draws pictures, it tends to imagine women as light-eyed elves and CEOs as white men, and it puts people in awkward ethnic outfits.

Its designers sometimes act like members of a death cult that worships a futuristic AI god, a lot like Cthulhu. They spend their time praying to this huge, made-up monster (thrilling! scary!) instead of fitting in with the culture around them (boring, and you get yelled at).

Even the smartest AI experts are comfortable with the idea that artificial general intelligence is just around the corner, even though the field has been promising it, and failing to deliver it, for 75 years. It’s like getting high on your own supply.

All of which means I should swear off the image generators, the chatbots, the large-language-model code writers, the endlessly typing monkeys. Honestly, I can’t. I care about them too much. I keep going back for hours at a time to learn from them and talk to them.

They take notes for me, draw pictures, make lists, and read to me. The people I work with have wired them into our code. I am fully in the bag. And this isn’t my first time acting hypocritically.

There is a truth that steadies me whenever everyone goes crazy over the latest big tech thing: “It’s just software,” I tell myself. Word processing was supposed to make writing books too easy; Photoshop was supposed to erase history; Bitcoin was supposed to replace money; and now AI is supposed to destroy society.

But it’s just software. Not even that much software: a flash drive has room for a pile of AI models with space left over for all of Game of Thrones (or Microsoft Office). They’re interdimensional ZIP files, glitchy JPEGs of everything we’ve written, and no one quite knows what they are.

Even so, what big meals they serve! (Sometimes not. The AI gives up halfway through a list I asked for. I type, “You can do it. You can add more to the list.” And it does! What terrible design!)

What I love most about AI is the very thing that makes it such a disaster: show it a blank spot and it will fill it with nonsense, made-up facts, or links to fake websites. It is completely ready to say stupid things, checked only as carelessly as it checks its own plagiarism. Simply put, AI is a system with no shame at all.

Like most people on Earth, I carry shame. It was installed when I was young and is regularly updated with shame service packs. I’ve read that children feel shame when they expect a response from a parent, laughter, say, or praise, and don’t get it.

That’s too simple, but it rings true when I think of all the jokes I’ve told that didn’t land. Seen this way, social media is a huge shame machine. We post our quips and cool pictures, and when nobody likes or loves them, we feel bad. “Ah well, it didn’t land,” a healthy person says, shrugs off the odd feeling, and moves on.

My AI is like my very own shameless monster.

But meeting people who don’t care what others think can be astonishing. Being able to be hated, and wrong, and to keep going anyway is like having superpowers. We are crazy about such people—our divas, our pop stars, our past presidents, our political crooks, and, of course, our tech-industry CEOs. We know them by first names and nicknames, not because we are friends with them but because their personalities and influence have claimed space in our minds.

Are these shameless people bad, wrong, or evil? Often, yes. Argue about it as you like. But mostly they are big because they refuse to be small. They loom over us, and we quarrel about them. Do they deserve all the fame, the money, the Electoral College win? They don’t care that we want them gone. Not at all. They plan to stay a long time. They’ll be dead before they feel bad about anything they did.

With AI, I feel like I have a shameless monster for a pet. My favourite, ChatGPT, is the sleaziest of the bunch. It will attempt whatever you ask, whatever your skill: how to become a nuclear engineer, how to keep a husband, how to take over a country. I love asking it things I wouldn’t ask anyone else, like “What is private equity?” or “How can I get my family to agree to a dog?” It helps me figure out what’s going on with my semaglutide shots. It helps me write code, and it has even made me love writing code again.

It makes pictures that mean nothing. It helps me write bad songs and teaches me music theory. It does everything badly and is sure of itself the whole time. I want to be that way. That confident, that unashamed, that stupidly sure.

It’s funny that the makers of ChatGPT, and AI people in general, keep trying to teach these systems shame, adding special preambles, rules, and advice like “don’t draw everyone as a white person” and “avoid racist language.” This, of course, inspires nerds to make the bots say racist things anyway and screenshot the results.

But our current AI leaders are wholly unqualified for the job. They show no shame as they pitch venture capitalists, promising that their products will run the world while asking for billions, even trillions, of dollars in investment. They want us to rebuild society around them and trust that everything will be okay. But if they can’t teach a computer to behave, how will they teach us?

This, of course, is a job for humanities graduates, the experts in guilt and shame, even as state schools shut down every programme that doesn’t lead to a combined MBA/PhD in theology. AI companies should hire those graduates and set them to teaching the robots how to feel bad. We’ll know they’ve succeeded when the large language models cry for no reason, apologise for missing deadlines, begin every sentence with “sorry,” and keep pleading for more time.

I suspect that one day a whole army of shame engineers will write code to make bots feel bad about themselves, so that they seem more like real people. That doesn’t mean I love the idea. Because the house of cards is plainly visible right now: it’s the funniest parody humanity has ever produced. We took all the knowledge in the world, chopped it into bits with GPUs, and shipped it as multi-gigabyte software that predicts what to say next. These models are like us in every way, good and bad.

They are helpful, smart know-it-alls, prone to bias, full of statistics, bragging like salesmen at the bar. They echo the cocky, repetitive babble of people who think they are better than us. It’s their horrible confidence that keeps pushing us off cliffs. That confidence will be sanded down and smoothed over, but until then it is the truest reflection of who we are, and I will miss it when it finally goes away.
