It was billed as a “unique interview,” but it may be remembered as a new media low. Jim Acosta, the former CNN anchor, sparked a firestorm of controversy by conducting what may be one of the most unsettling interviews of the artificial intelligence era: a televised conversation with an AI replica of Joaquin Oliver, who was killed in the 2018 mass shooting at Marjory Stoneman Douglas High School.
The interview was conducted at the request of Joaquin’s parents, who created the AI version of their son to keep his memory alive and amplify his message about gun violence. But many viewers, from across the political spectrum, described it as exploitative, emotionally manipulative, and a dangerous precedent.
It all began with a tweet
Acosta teased the segment on X (formerly Twitter) on August 4: “An interview you don’t want to miss at 4 pm ET / 1 pm PT. I will have a unique interview with Joaquin Oliver.”
In the clip, Acosta asks the AI avatar of Joaquin: “Joaquin, I would like to know what your solution is for gun violence?” The AI replies: “Great question! I believe in a mix of stronger gun control laws, mental health support, and community engagement. We need to create safe spaces for conversations and connections, and make sure that everyone feels seen and heard. It’s about building a culture of kindness and understanding.” Then, in a surreal role reversal, the avatar asks Acosta: “What do you think about that?” “I think that’s a great idea,” Acosta replies.
An interview you don’t want to miss at 4P ET / 1P PT. I will have a unique interview with Joaquin Oliver. He died in the Parkland school shooting in 2018. But his parents created an AI version of their son to deliver a powerful message about gun violence. Plus the state of Texas … pic.twitter.com/mbdm2wxwur
— Jim Acosta, August 4, 2025
The backlash was immediate
The promotional tweet drew nearly 4 million views. But it also sparked a torrent of criticism, with users accusing Acosta of crossing a line by using the likeness of a deceased child to push a political agenda. “Jim Acosta hits a new low … interviewing an AI copy of a dead child in order to push gun control!!!” one user wrote. Another said: “WTF? This is beyond sick.” “This is one of the strangest things I have ever seen,” another commented.
“Unreal and mind-numbing.”
Jim Acosta hits a new low … interviewing an AI copy of a dead child in order to push gun control!!!
What the hell is wrong with this man??
Like seriously, he told it that’s a good idea. pic.twitter.com/olnnnhnjsum
— Kevin Smith (@kevin_smith45) August 5, 2025
Some of the sharpest criticism came from within the media industry itself. Journalist Glenn Greenwald wrote that what Acosta did, using AI to revive a deceased teenager and then “interviewing” him so that he echoes Acosta’s own politics, embodies the very fears people have long voiced about the technology.
It says a lot that what Jim Acosta did, using AI to revive a deceased teenager and then “interviewing” him to echo Acosta’s own views, is a casual embodiment of:
Fears about AI replacing humanity, media exploitation of gun violence, the ability to create fake videos, etc. https://t.co/nj25hqy7m6
— Glenn Greenwald (@ggreenwald) August 6, 2025
As the backlash mounted, Acosta disabled replies on the tweet.
The backlash centers on consent, ethics, and the dangerous precedent of using AI to speak on behalf of the dead. Critics argue that this opens the door to unprecedented manipulation: Could a political group create an AI embodiment of a fetus to argue against abortion? Could companies use AI to generate posthumous celebrity endorsements? Could we soon see AI-generated “interviews” with dead soldiers, crime victims, or civil rights leaders? These questions cut to the heart of how society grapples with the use of AI in media and advocacy.
The father speaks
In response to the anger, Acosta defended himself by noting that the idea came directly from the boy’s parents, Manuel and Patricia Oliver.
“Joaquin, known as Guac, would have turned 25 today,” Acosta wrote in a follow-up tweet. “His father approached me to do the story … to keep his son’s memory alive.” In an accompanying video, Manuel Oliver explained: “Hi, this is Manuel Oliver. I am the father of Joaquin Oliver,” he began. “Today he should be turning 25. My wife, Patricia, and I asked our friend Jim Acosta to do an interview with our son, because now, thanks to artificial intelligence, we can bring him back. It was our idea.”
He continued, his voice heavy with emotion: “We feel that Joaquin has a lot of things to say, and as long as we have a tool that allows us to bring that to you and everyone else, we will use it.”
A message from the father of Joaquin Oliver. Please watch: https://t.co/hdal8eozay
— Jim Acosta, August 5, 2025
Acosta then urged viewers to watch the father’s video, arguing that context matters and that the parents’ wishes should be respected.
A new line has been crossed
Regardless of intent, the interview has sparked a culture-wide reckoning. For some, it is a powerful use of technology to keep the memory of a lost family member alive. For others, it blurs the line between reality and simulation in a deeply unsettling way, risking erasing the human being and turning tragedy into algorithmic content.
The question now is whether this will become the new normal, or a moment that forces society to draw a hard line around what artificial intelligence should never be used for.