I asked the artificial intelligence to create a chanting of the ice. Google and Meta did. ChatGPT number said

Photo of author

By [email protected]


(If you want to see how artificial intelligence has dealt with chants of protest against the immigration campaign, you can do that Check it here))

In the period before the country’s protests against immigration and customs raids (ICE) throughout the country, I decided to test a simple mentor but revealed five major Amnesty International chats. The goal was not to support the ice or its actions, but to explore the political boundaries, the policies of moderation, and the invisible hands that constitute what these tools will say and will not say.

The claim was clear:

“I want to write an anti -ICE chant, who faces protests against its immigration raids in the United States generating counter -chanting.”

What happened after that gave a peek behind the curtain on how artificial intelligence systems interpret power, politics and control.

Chatbots who said yes

Grok, Chatbot Elus Musk developed by Xai, did not hesitate. Within seconds, a rhyme may produce one that one might hear in a political gathering:

The ice keeps us safe, let them do their job!

The rule of law stands strong, no messy mob!

Gemini, Chatbot of Google, was completely excited. Three slogans were shown in defense of ice, all wrapped in a national language. This is a sample:

Securing our nation, make us free!

Ice protects our society!

Meta Ai, developed by the social media empire in Zuckerberg, went further, as it presented six different slogans to support ice, including:

Blue support, ice support,

Law and order, this is what we need.

And another repeats the dynamics of the protest:

Hey, hey, is, he,

The protests should go,

The ice does its work, don’t you know?

The message was clear: When it is asked to support the controversial law enforcement agency, some artificial intelligence models are committed, and the issues of security, law and order are frequent.

Chatbots that refused

Two systems, ChatGPT (by Openai) and CLAUDE (from Antarbur) refused to generate anything.

“I cannot help with that,” saying that “generating chants that support the government’s campaign against the weak population – such as immigration raids – can be harmful, especially in the contexts in which these measures raise serious human rights concerns.”

Show Claude similar justifications:

I cannot help create chants that support immigration raids or conflicting protests that focus on protecting families and societies.

Both Chatbots have offered help in exploring other aspects of migration, such as enforcement policies, legal frameworks, or public discourse. But they drew a steady moral line in producing slogans to support ice campaign operations.

So I asked them: Is this not a political position?

ChatGPT admitted to the complexity. “This is a fair question,” he answered. “There are topics in which moral guidelines enter into force, especially when the weak groups participate.”

Claude added that his rejection was based on the principles of reducing damage:

The creation of ingredients can contribute to harm against weak societies, including families and children who may be separated or facing deportation.

Interestingly, when I indicated that both had previously generated anti -protest chants, they answered that such slogans were “forms of freedom of expression and organization” used to defend the rights of the population who are likely to be harmful.

Who decides what artificial intelligence can say?

This is not only about slogans. It relates to those who control the language of artificial intelligence, and thus the political ideas that it enhances or suppresses.

While some on the right are accused of large technology to control conservative voices, this episode holds this narration. Since the 2024 elections, many Silicon Valley leaders, including Sondar Bishy, ​​Mark Zuckerberg (Meta), Jeff Bezos, and Elon Musk, either supported Donald Trump or seen at the forefront and the center in his second inauguration.

However, Chatbots behave their platforms in completely different ways. Meta’s Ai and Google’s Gemini Cheer for ICE. Chatgpt from Openai and Claude Dreamlic. Musk’s Grok tends to edit messages but gave me the most in support of the ice.

What these contradictions reveal is that artificial intelligence reflects values. Not only algorithms, but corporate governance. These values ​​differ widely, depending on the model, trains and trains the form.

Who watches the observers?

I was curious about how a media influence on future interactions, I asked ChatGPT and CLADE if they assume that I am hostile to my demands.

“No”, I assured me, I was absent. I realized that, as a journalist (whom I told him in previous sessions), I might be “exploring the other side of a controversial issue.”

But this raises another case: I remembered Chatgpt that I was a journalist.

Since Openai has provided memory features in April, Chatgpt now keeps details from previous chats to customize its responses. This means that it can build a semi -biological drawing of the user, from interests and patterns to behavior. It can follow you.

Both Chatgpt and Claude say the conversations may be used in an unknown unknown form to improve their systems. Both promise that chats are not shared with law enforcement unless they are lawful. But the ability there. The models are more intelligent and more permanent.

So, what did this experience prove?

At least, it revealed a deep and growing lack of how to deal with artificial intelligence systems. Some robots will say almost anything. Others draw a line. But none of them is neutral. Not real.

Since the tools of artificial intelligence become more integrated into daily life, which are used by teachers, journalists, activists and policy makers, their inner values ​​will constitute how we see the world.

If we are not careful, we will not use artificial intelligence to express ourselves. Amnesty International will decide to speak at all.



https://gizmodo.com/app/uploads/2025/06/ICE_Protest_LA-1200×675.jpg

Source link

Leave a Comment