Google’s AI Mode gets visual upgrade to boost search and shopping

Google has added image-based results to its AI Mode search experience in the US, expanding what was a text-only tool into something more practical for users seeking visual inspiration.

The update, released Tuesday, is a direct response to changes in how people use search tools, particularly since the rise of OpenAI’s ChatGPT in late 2022.

AI Mode first rolled out in May as a way to answer questions in plain language, handling summaries, definitions, explanations, and more. However, the text-only format fell short for interior design prompts and fashion searches.

Now, when people enter a prompt like “show me bedroom design inspo,” Google’s AI Mode returns generated images, giving the search process a visual edge.

Google AI Mode displays shoppable images from prompts

The new feature is not just for room designs and style boards. Users can enter something like “barrel jeans that aren’t too baggy” and immediately see shoppable product images. Each image links directly to the retailer’s site, letting users purchase quickly without scrolling through general search results.

According to Robby Stein, vice president of product management at Google Search, the shift is about serving users who “can’t explain what they want in text.” He noted that when people shop for shoes, for example, they often don’t want to describe the shoes in words; they want visual inspiration and the ability to see what a style might look like.

Stein also said users can narrow the image results with follow-up prompts such as “show more with bold prints and dark tones.” The update positions Google’s AI Mode in a new category of search, one where visuals drive decisions more than explanations do.

The image-based search is powered by a combination of technologies. The company said it brings together Gemini 2.5, Google Search, Lens, and Image Search, with all of these components working behind the scenes to generate and link image results based on user prompts.

Stein called image generation a breakthrough in what is possible, pointing to how it enables discovery beyond plain keywords.

Meanwhile, Chinese rival DeepSeek released a new experimental model on Monday called DeepSeek-V3.2-Exp. As reported by Cryptopolitan, it aims to deliver better performance with fewer resources, building on the previous model, DeepSeek-V3.1. The company already attracted attention last year when it dropped its R1 model out of nowhere; that version showed it could train large language models faster on weaker chips and still hold up.

DeepSeek argues that the new model will let AI process large amounts of information more efficiently. However, despite the hype, open questions remain about how safe and effective the architecture is. The announcement was made in a post on Hugging Face, the AI platform.

