Back End News

Google introduces Gemini 3 AI model with expanded reasoning features

Google Gemini 3

Google introduced Gemini 3, its latest artificial intelligence (AI) model designed to improve how people learn, plan, and build using AI across Google products. 

Gemini 3 Pro, now in preview, is being rolled out across select Google tools so users can test its wider capabilities in everyday tasks. Google also unveiled Gemini 3 Deep Think, an optional mode that provides more detailed reasoning for complex questions. Deep Think is initially available only to safety testers before the company releases it to Google AI Ultra subscribers.

According to Google, Gemini 3 Pro delivers stronger performance across multiple AI benchmarks compared with the previous Gemini 2.5 Pro model. The company highlighted results from public tests in language understanding, problem solving, mathematics, and multimodal tasks that involve text, images, and video. According to these scores, Gemini 3 Pro can process more types of information and handle a broader set of questions, including technical topics.

Gemini 3 Pro is built to handle tasks that involve long or detailed reasoning, such as analyzing documents, reviewing complex instructions, or working through multi-step problems. Google said this version can understand scientific and mathematical concepts and produce clearer explanations. It can also convert information into charts or visualizations when needed.

The model is designed to give more direct answers and avoid overly positive or flattering responses. Google said this change is meant to help users get clearer guidance, especially when working on research, planning, or creative work. The company noted that Gemini 3 Pro is intended to act as a support tool that can help people think through ideas rather than simply generate long or decorative responses.

Google also emphasized safety improvements in Gemini 3. The model went through expanded internal testing to reduce risks such as prompt injection, inaccurate answers, and outputs that could be misused for cyberattacks. Google worked with outside reviewers, including academic experts and independent evaluators, to test the model’s behavior in sensitive areas. The company said these steps are aligned with its Frontier Safety Framework, which outlines how advanced models should be reviewed before wider release.
