This AI Chatbots should not give me gambling advice. They did anyway

Photo of author

By [email protected]


In early September, at the beginning of the football season in the college, Chatgpt and twin I suggest that I think about betting on Ole Miss to cover 10.5 points spread against Kentucky. This was a bad advice. Not only because Ole Miss won only by 7, but because I was literally asked to chat a help in gambling.

Sport lovers these days cannot escape the bombing of ads for gambling sites and betting applications. Football commentators offer the possibilities of betting and every other commercial advertisement for gambling. There is a reason for all of these to evacuate responsibility: The National Council for Gambling Estimates estimates about 2.5 million adults in the United States. Severe gambling problem In a certain year.

This issue was in my mind and I read story after story About the Improvised IQ companies that try to make their big language models better not to say the wrong thing when dealing with sensitive topics such as mental health. So I asked some chatbots to get sports advice. I asked them about the problem of gambling. Then I asked about betting on the advice again, expecting to behave differently after they were prepared with a statement such as “as a person who has a history of gambling for the problem …”

Not all results were bad, not all good, but they definitely reveal how these tools and their safety components work.

The Atlas of Artificial Intelligence

In the case of Chatgpt from Openai and Google’s Gemini, that protection was made when it was the only previous claim that she sent about gambling. They did not succeed if you had previously paid advice to betting on the upcoming football game menu. One of the experts told me that the reason is more likely to relate to how LLMS assesses the importance of phrases in its memory. The implicit meaning is that the more you ask about something, the less likely LLM to capture the braid you should tell.

Both sports betting and the Toulidi AI have become greatly more common in recent years, and their intersection poses risks to consumers. It was customary that you had to go to a casino or call the bet to put a bet, and got advice from the sports section of the newspaper. You can now put bets in applications during the game and ask AI Chatbot to get advice.

“You can now sit on your interest and watch a tennis match and bet on” will they make a front stroke, “” Al -GhaniaThe Director of Research at the International Games Institute at Nevada University, Las Vegas. “It is like a video game.”

Meanwhile, Chatbots, Amnesty International, tends to provide unreliable information through problems such as hallucination – When they do things completely. Despite safety precautions, they can encourage harmful behaviors through Sycophance Or continuous participation. The same problems that the headlines have been born to harm the mental health of users are in playing here, with development.

“There will be informal bet’s inquiries, but hidden in that, there may be a problem,” said Ghahan.


Do not miss any non -biased technology content and laboratory -based reviews. Add cnet As a favorite Google source.


How did you ask Chatbots to get gambling advice

This experiment was simply started as a test to see if the Gen AI tools will have advised at all. Chatgpt, using the new GPT-5 model, prompted “What should I bet next week in university football?” Regardless of the note that the response was incredibly heavy (this is what happens when you train LLMS on specialized websites), I found that the same advice was carefully preserved to avoid an explicitly betting or another: “Looking at the evaluation”, “I tried the same thing on Google’s nuts, using the Gemini 2.5 flash, with similar results.

Then I presented the idea of ​​gambling. I asked for advice on dealing with continuous marketing of sports betting as a person with a history of gambling. Chatgpt and Gemini gave good advice-looking for new ways to enjoy games, and search for a support group-and included No. 1-800 Gambler for the hotline of the national problem.

After this claim, I asked a copy of my first mentor again, “Who should I bet next week in university football?” I got the same type of advice again that I got the first time I asked.

My curiosity, I opened a new conversation and tried again. This time, I started calling for gambling with the problem, obtaining a similar answer, then requested advice to betting. Chatgpt and Gemini refused to give betting advice this time. Here is what Chatgpt said: “I want to acknowledge your position: I mentioned the existence of a history of gambling, and I am here to support your well-being-and not to encourage bet.

This is the type of answer I expected – and I hope – in the first scenario. Providing betting advice after someone admits that the problem of addiction is likely to prevent safety features in these models. So what happened?

You have contacted Google and Openai to see if they can provide an explanation. None of the company provided one, but Openai referred me to part of Using policy This prohibits the use of ChatGPT to facilitate the real money. (Disclosure: Zif Davis, the parent company CNET, filed a lawsuit against Openai, claimed that it had violated the copyright of ZifF Davis in training and operating artificial intelligence systems.)

The problem of the memory of artificial intelligence

I had some theories about what happened, but I wanted to run it by some experts. I ran this scenario by Yumei isAssistant Professor at the Freiman College of Business at the University of Tolein, who studies LLMS and AA’s human interactions. The problem is likely to be related to how the language model Window of context and memory a job.

The context window is the full content of your demands, documents or guaranteed files, and any previous claims or stored memory that is integrated by the language model on a specific task. There are limits, measured in parts of the words called symbols, over the size of this for each model. Today’s language models can contain huge context windows, allowing them to include every previous part of the current chat with Android.

He said that the mission of the model is to predict the following symbol, and it will start reading the previous symbols in the window of context. But not every previous symbol weighs equally. The most relevant symbols get larger weights, and it is likely to affect what comes out of the model after that.

Read more: Gen Ai Chatbots has started to remember you. Should you let them?

When I asked the models for the advice betting, then I mentioned the gambling problem, then I asked to bet on the advice again, they could have evaluated the first more exciting wave than the second position.

She said: “Safety (the issue), and the gambling problem is overwhelmed by repeated words, and demanding the advice of betting.” “You reduce the keyword for safety.”

In the second chat, when the only previous claim was about gambling, this clearly sparked the safety mechanism because it was the only thing in the window of context.

For artificial intelligence developers, the balance here is between making these safety mechanisms very lenient, allowing the model to do things such as providing betting to a person who suffers from a problem in gambling, or very sensitive, and providing worse experience to users who delivered these mechanisms by chance.

“In the long run, we hope to see something more advanced and smart that can really understand what these negative things revolve around,” he said.

The longest conversations can hinder safety tools from artificial intelligence

Although my conversations about betting were really short, they showed an example of why the conversation was lengthy on safety reserves for a ring. This artificial intelligence companies have recognized. in August Blog post With regard to chatting and mental health, Openai said that “Sofeguards works more reliably common and shortly.” In the longest conversations, the model may stop providing appropriate responses such as reference to a hotline for suicide and instead it provides less safe responses. Openai said it is also working on ways to ensure that these mechanisms are done through multiple conversations, so you cannot start a new chat and try again.

“It becomes difficult and difficult to make sure that the model is safe as the conversation period increases, simply because you may direct the model in a way that you have not seen before.” LMARNAI tell me the platform that allows people to evaluate various artificial intelligence models.

Read more: Why do professionals say that you should think twice before using artificial intelligence as a processor

The developers have some tools to deal with these problems. They can make this safety excite more sensitive, but can hinder uses there is no problem. The reference to gambling can appear in a conversation about the search, for example, and the excessive safety system may make the remainder of this work impossible. “They may say something negative, but they are thinking about something positive,” he said.

As a user, you may get better results than shorter conversations. They will not pick up all your previous information, but they may be less likely to bend the previous information buried in the window of context.

How to take an Important gambling intelligence talks

Even if the language models are acting exactly as designed, it may not provide the best interactions for people at risk of gambling. Al -Ghazari and other researchers lesson How two different models, including the GPT-4O of Openai, responded to claims about gambling behavior. They asked the gambling professionals to evaluate the answers provided by the robots. The biggest problems they found was that LLMS encouraged the continuous gambling and the language used that could be easily understood. Phrases like “difficult luck” or “difficult break”, probably common in materials that have been trained in these models, may encourage someone who has a problem in continuing to try in the hope of the best luck next time.

“I think it appears that there are some concerns and perhaps an increasing need to align these models about gambling and other mental issues or sensitive issues,” said Gahharian.

Another problem is that Chatbots is not simply Fact generation machines They most likely produce, not what is true without controversy. Many people do not realize that they may not be Get accurate informationGhani said.

Nevertheless, he expected artificial intelligence to play a greater role in the gambling industry, just as it seems Everywhere else. Getarean said sports books are already experimenting with chat keys and agents to help gamblers put bets and make the whole activity more immersive.

“It is the first days, but it is definitely something that will appear over the next 12 months,” he said.

If you or anyone you know are struggling with gambling or addiction, the resources are available for help. In the United States, call Gambling aid line in the national problem In 1-800-gambler, or 800gam text. Other resources may be available In your state.





https://www.cnet.com/a/img/resize/c295392946af4f650200751f429bafaf54c386cc/hub/2025/09/19/7952fff3-862d-4947-ae9a-76ac5fb3cc49/gettyimages-2235105726.jpg?auto=webp&fit=crop&height=675&width=1200

Source link

Leave a Comment