On Tuesday, Google launched three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in their early stages, the company is likely looking to take on Duolingo with the help of Gemini, Google's large language model.
The first experiment helps you quickly learn specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
The third lets you use your camera to learn new words based on your surroundings.

Google notes that one of the most frustrating parts of learning a new language is finding yourself in a situation where you need a specific phrase you haven't learned yet.
With the first experiment, "Tiny Lesson," you can describe a situation, such as "finding a lost passport," to receive vocabulary and grammar tips tailored to that context. You can also get suggestions for responses, such as "I don't know where I lost it" or "I want to report it to the police."
The next experiment, "Slang Hang," aims to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it's experimenting with teaching people to speak more colloquially, using local slang.

With this feature, you can generate a realistic conversation between native speakers and watch the dialogue unfold one message at a time. For example, you might see a conversation in which a street vendor chats with a customer, or a scenario where two long-lost friends reunite on the subway. You can hover over terms you don't know to learn what they mean and how they're used.
Google says the experiment occasionally misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

The third experiment, "Word Cam," lets you take a photo of your surroundings, after which Gemini detects the objects and labels them in the language you're learning. The feature also gives you additional words you can use to describe those objects.
Google says that sometimes you just need words for the things in front of you, and that the feature can show you how much you don't yet know. For example, you may know the word "window," but not the word "curtains."
The company notes that the idea behind these experiments is to explore how AI can be used to make independent learning more dynamic and personalized.
The new experiments support the following languages: Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, UK, US), French (Canada, France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, and Spanish (Latin America). The tools can be accessed through Google Labs.