Google is releasing three new AI experiments on Tuesday that aim to help people learn a new language in a more personalized way. While the experiments are still in the early stages, they are powered by Gemini, Google's multimodal large language model.
The first experiment helps you quickly learn specific phrases you need in the moment, while the second helps you sound less formal and more like a local.
The third experiment allows you to use your camera to learn new words based on your surroundings.

Google notes that one of the most frustrating parts of learning a new language is finding yourself in a situation where you need a specific phrase you haven't learned yet.
With the new “Tiny Lesson” experiment, you can describe a situation, such as “finding a lost passport,” to receive vocabulary and grammar tips tailored to it. You can also get suggestions for responses like “I don’t know where I lost it” or “I want to report it to the police.”
The next experiment, “Slang Hang,” aims to help people sound less like a textbook when speaking a new language. Google says that when you learn a new language, you often learn to speak formally, which is why it's experimenting with teaching people to speak more colloquially, with local slang.

With this feature, you can generate a realistic conversation between native speakers and watch the dialogue unfold one message at a time. For example, you can learn through an exchange where a street vendor is chatting with a customer, or a situation where two long-lost friends reunite on the subway. You can hover over terms you're not familiar with to learn what they mean and how they're used.
Google says the experiment sometimes misuses certain slang and sometimes makes up words, so users need to cross-reference them with reliable sources.

The third experiment, “Word Cam,” lets you snap a photo of your surroundings, after which Gemini will detect objects and label them in the language you're learning. The feature also gives you additional words you can use to describe those objects.
Google says that sometimes you just need the words for the things in front of you, because it can show you how much you don't know yet. For example, you might know the word for “window,” but not the word for “blinds.”
The company notes that the idea behind these experiments is to see how AI can be used to make independent learning more dynamic and personal.
The new experiments support the following languages: Arabic, Chinese (China, Hong Kong, Taiwan), English (Australia, US), French (Canada, France), German, Greek, Hebrew, Hindi, Italian, Japanese, Korean, Portuguese (Brazil, Portugal), Russian, and Spanish (Latin America, Spain). They can be accessed through Google Labs.