Paul Krill
Editor at Large

Google opens access to 2-million-token context window of Gemini 1.5 Pro

news
Jun 28, 2024 | 2 mins
APIs | Artificial Intelligence | Generative AI

The company also has enabled code execution for Gemini 1.5 Pro and Gemini 1.5 Flash, allowing the models to generate and run Python code and learn from the results.


Google has opened access to the 2-million-token context window of the Gemini 1.5 Pro AI model. The company also is giving developers access to code execution capabilities in the Gemini API and making the Gemma 2 model available in Google AI Studio.

These announcements were made on June 27. The 2-million-token context window of Gemini 1.5 Pro, previously available via a waitlist, now is available for all developers.
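For orientation, here is a minimal sketch of what sending a very large input to Gemini 1.5 Pro might look like with Google's google-generativeai Python SDK; the API key placeholder, file path, and prompt are illustrative assumptions, not details from Google's announcement.

```python
# Sketch: feeding a large document to Gemini 1.5 Pro's long context window.
# The file path, prompt, and API key placeholder are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-1.5-pro")

# A 2-million-token window can hold very large documents or code dumps in one prompt.
with open("large_codebase_dump.txt", "r", encoding="utf-8") as f:
    big_document = f.read()

response = model.generate_content(
    [big_document, "Summarize the main modules and how they interact."]
)
print(response.text)
```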

Google noted that as the context window grows, so does the potential input cost. To help developers cut costs for tasks that reuse the same tokens across multiple prompts, Google has introduced context caching in the Gemini API for Gemini 1.5 Pro and Gemini 1.5 Flash.
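A rough sketch of the caching flow with the Python SDK follows; the model version string, TTL, document, and questions are assumptions chosen for illustration.

```python
# Sketch: context caching so repeated prompts over the same tokens are billed
# against a cache rather than resent each time. Model version, TTL, file path,
# and questions are illustrative assumptions.
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

with open("api_reference.txt", "r", encoding="utf-8") as f:
    shared_context = f.read()

# Create the cache once; every prompt that references it reuses these tokens.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="api-reference-cache",
    contents=[shared_context],
    ttl=datetime.timedelta(hours=1),
)

# Bind a model to the cached content, then ask multiple questions against it.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
for question in ("How do I authenticate?", "What are the rate limits?"):
    print(model.generate_content(question).text)
```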

To unlock math and data reasoning capabilities for developers, Google also has enabled code execution in Gemini 1.5 Pro and Gemini 1.5 Flash. The code execution feature allows the model to run Python code and learn iteratively from the results until it arrives at a final desired output. Google considers this a first step in offering code execution as a model capability. It is available via the Gemini API and in Google AI Studio under “advanced settings.”
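In the Python SDK, enabling the feature amounts to passing the code execution tool when constructing the model, as in the sketch below; the model name and prompt are assumptions.

```python
# Sketch: enabling the code execution tool so the model can write and run Python
# on Google's side and fold the results back into its answer. Prompt is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Passing the code_execution tool lets the model generate Python, execute it,
# and iterate on the program's output before producing a final response.
model = genai.GenerativeModel("gemini-1.5-flash", tools="code_execution")

response = model.generate_content(
    "Write and run code to find the sum of the first 50 prime numbers, "
    "then return the final sum."
)
print(response.text)
```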

Google also said it has made the Gemma 2 model available in Google AI Studio for experimentation, and it is working to offer tuning for Gemini 1.5 Flash to all developers. Text tuning in Gemini 1.5 Flash now is ready for red teaming and will be rolled out gradually; all developers will have access to Gemini 1.5 Flash tuning through the Gemini API and in Google AI Studio by mid-July, the company said.
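Since Flash tuning was still rolling out at the time of the announcement, the sketch below simply mirrors the SDK's existing tuning flow (genai.create_tuned_model); the source model identifier, toy training data, tuned-model ID, and hyperparameters are all assumptions, not values confirmed by Google.

```python
# Sketch of the SDK's existing tuning flow, which the article says will extend to
# Gemini 1.5 Flash by mid-July. The source model name, toy dataset, tuned-model ID,
# and hyperparameters are assumptions for illustration only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

operation = genai.create_tuned_model(
    source_model="models/gemini-1.5-flash-001",  # assumed identifier once tuning ships
    training_data=[
        {"text_input": "1", "output": "2"},
        {"text_input": "2", "output": "3"},
        {"text_input": "3", "output": "4"},
    ],
    id="increment-tuned-flash",
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)

# Tuning runs as a long-running operation; block until the tuned model is ready.
tuned_model = operation.result()
print(tuned_model.name)
```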