Google Cloud Surfaces Gemini 3.2 Flash Lite Model with Inference Costs 95% Lower Than GPT-5.5

According to Beating.AI monitoring, a new model option named gemini-3.2-flash-lite-live-preview appeared in Google Cloud’s model selection list as of May 17. The “lite” and “live” suffixes suggest Google is preparing a specialized version optimized for ultra-low-latency, real-time interactions.
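As a rough way to check whether a preview model ID like this has surfaced in a given environment, one could filter the list of available models. The sketch below is a minimal illustration: in a live setup the IDs would come from the `google-genai` Python SDK's `client.models.list()` call, but here a hard-coded, purely illustrative list stands in for real API output.

```python
# Sketch: check whether a preview model ID is visible in a model list.
# In a live setup, model_ids would come from the google-genai SDK, e.g.:
#   from google import genai
#   client = genai.Client()  # requires API credentials
#   model_ids = [m.name for m in client.models.list()]
# The hard-coded list below is illustrative, not real API output.

TARGET = "gemini-3.2-flash-lite-live-preview"

def preview_available(model_ids: list[str], target: str = TARGET) -> bool:
    """Return True if the target preview model appears anywhere in the list."""
    return any(target in model_id for model_id in model_ids)

# Illustrative IDs only (not actual Google Cloud output):
sample_ids = [
    "models/gemini-3.2-flash",
    "models/gemini-3.2-flash-lite-live-preview",
]
print(preview_available(sample_ids))
```

This is only a sketch of the lookup logic; the actual model catalog and naming in Google Cloud may differ.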

Abacus.AI CEO Bindu Reddy previously disclosed that Gemini 3.2 Flash achieves 92% of GPT-5.5’s coding and reasoning capability at roughly 1/20th of its inference cost, with most queries answered in under 200 milliseconds. Industry observers expect this cost-optimized lightweight model to be formally unveiled at Google I/O on May 20.
