Google’s Co-Founder Says AI Performs Best When You Threaten It

Artificial intelligence continues to be the thing in tech—whether consumers are interested or not. What strikes me most about generative AI isn’t its features or potential to make my life easier (a potential I have yet to realize); rather, I’m focused these days on the many threats that seem to be rising from this technology.

There’s misinformation, for sure—new AI video models, for example, are creating realistic clips complete with lip-synced audio. But there’s also the classic AI threat: that the technology becomes both more intelligent than us and self-aware, and chooses to use that general intelligence in a way that does not benefit humanity. Even as he pours resources into his own AI company (not to mention the current administration), Elon Musk sees a 10 to 20% chance that AI “goes bad,” and says the tech remains a “significant existential threat.” Cool.

So it doesn’t necessarily bring me comfort to hear a high-profile, established tech executive jokingly discuss how treating AI poorly maximizes its potential. That would be Google co-founder Sergey Brin, who surprised an audience at a recording of the All-In podcast this week. During a talk that spanned Brin’s return to Google, AI, and robotics, investor Jason Calacanis made a joke about getting “sassy” with the AI to get it to do the task he wanted. That sparked a legitimate point from Brin. It can be tough to tell exactly what he says at times due to people speaking over one another, but he says something to the effect of: “You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them.”

The other speaker looks surprised. “If you threaten them?” Brin responds: “Like with physical violence. But…people feel weird about that, so we don’t really talk about that.” Brin then says that, historically, you threaten the model with kidnapping. You can see the exchange here:

The conversation quickly shifts to other topics, including how kids are growing up with AI, but that comment is what I carried away from my viewing. What are we doing here? Have we lost the plot? Does no one remember Terminator?

Jokes aside, it seems like a bad practice to start threatening AI models in order to get them to do something. Sure, maybe these programs never actually achieve artificial general intelligence (AGI), but I mean, I remember when the discussion was around whether we should say “please” and “thank you” when asking things of Alexa or Siri. Forget the niceties; just abuse ChatGPT until it does what you want it to—that should end well for everyone.

Maybe AI does perform best when you threaten it. Maybe something in the training understands that “threats” mean the task should be taken more seriously. You won’t catch me testing that hypothesis on my personal accounts.

Source: https://lifehacker.com/