
Google Suddenly Cuts the Free Gemini API Tier, Causing an Uproar: Was the Data Used for Model Training?

nemo · 2025-12-10

“Google has just reduced the daily request limit for the free version of the Gemini API from 250 to 20. Now, most of my n8n automation scripts are no longer usable. This is a blow to anyone developing small projects,” said netizen Nilvarcus.

Recently, some netizens reported that Google has tightened the restrictions on the free tier of the Gemini API: the Pro series has been dropped from the free tier entirely, and the Flash series now allows only 20 requests per day, which is far from enough for many developers.
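For developers whose automation scripts assumed the old quota, the immediate workaround is to throttle on the client side and back off when the quota is exhausted. Below is a minimal sketch against the public Gemini REST endpoint; the model name, the 20-requests-per-day figure, and the retry policy are illustrative assumptions based on the reports above, not details confirmed by Google.

```python
# Minimal sketch: spacing requests to fit a small daily quota and backing
# off on HTTP 429 (quota exhausted). The endpoint shape follows the public
# Gemini REST API; the model name and the 20-requests/day figure are the
# ones reported in this article and may change.
import os
import time
import requests

API_KEY = os.environ["GEMINI_API_KEY"]              # assumed to be set by the caller
MODEL = "gemini-1.5-flash"                           # illustrative free-tier model
URL = (f"https://generativelanguage.googleapis.com/"
       f"v1beta/models/{MODEL}:generateContent?key={API_KEY}")

DAILY_QUOTA = 20                                     # new free-tier limit reported above
MIN_INTERVAL = 24 * 3600 / DAILY_QUOTA               # seconds between calls to stay under it
_last_call = 0.0

def generate(prompt: str, retries: int = 3) -> str:
    """Send one prompt, pacing calls and retrying on rate-limit responses."""
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)                             # pace calls across the day
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    for attempt in range(retries):
        _last_call = time.time()
        resp = requests.post(URL, json=body, timeout=60)
        if resp.status_code == 429:                  # quota exhausted: back off and retry
            time.sleep(60 * (attempt + 1))
            continue
        resp.raise_for_status()
        data = resp.json()
        return data["candidates"][0]["content"]["parts"][0]["text"]
    raise RuntimeError("still rate-limited after retries")
```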


Some netizens also noticed that Google has removed the free Gemini API item from its “Bulk API Rate Limits” list. “It's completely over,” one said.


In the fierce competition among large models, Google once attracted users with free and low-cost policies. For example, in January this year, Google launched a free Gemini 1.5 Flash package for the Gemini API, giving developers up to 1.5 billion free tokens per day. The free package included 15 requests per minute, 1 million tokens per minute, and 1,500 requests per day. Developers could also use a free context caching service, storing up to 1 million tokens per hour, and the fine-tuning feature was completely free.

Beyond the size of the reduction, what angers some developers is that the policy change came without any prior notice.

“I've always believed that there's no free lunch. But Google's current approach really doesn't work. Even if my system and use cases are just experimental, it's really hurtful when everything suddenly stops working without any warning. Couldn't they have said when launching Gemini 3: 'By the way, with the launch of the new model, we'll cancel the developers' free API call quota in two weeks'? A responsible and trustworthy company should have done so,” said a developer.

“Yes, Google has now collected enough data and is leading its competitors, so they are changing their strategy to focus on profitability. We all knew the free package was overly generous at the beginning, but we were paying with our own data and helping them train the model,” said another developer. “The kind-hearted AI has finished drawing in users and is now ready to convert them into paying customers.”

Google May Face Off Against OpenAI Again This Week

Recently, Google has won over a large number of users with Gemini 3. According to the Financial Times, by the end of 2025 the average visit duration for Gemini on desktop and mobile web had reached about 7.2 minutes, surpassing ChatGPT's roughly 6 minutes for the first time and coming in slightly above Anthropic Claude's roughly 6 minutes.

However, the fierce competition among large models continues. It is reported that OpenAI is planning its first response to Google's Gemini 3 with the upcoming GPT-5.2. Originally scheduled for release at the end of December, GPT-5.2 is now expected to launch on December 9. Purported benchmark results for GPT-5.2 have also spread online; if these numbers are ultimately confirmed, the competitive advantage would swing back to OpenAI.


Just as the news of GPT-5.2's release spread, netizens noticed that Gemini 3 Flash has now landed on LM Arena. Some people said, “Gemini 3 Flash seems to be Google's answer to GPT-5.2.”


This excited netizens: “It's so exciting. The confrontation between OpenAI and Google is now fully set up. Nano Banana Pro and Gemini 3 Flash are Google's backup plans to counter GPT-5.2, which OpenAI will release this week.”

“Some parts of the AI industry may indeed have a bubble, such as the absurdly high-value seed-round financing. But I believe more than anyone that AI is the most transformative technology, and in the long run, these investments are worth it. My job is to ensure that DeepMind and Google are in the strongest position regardless of whether the bubble bursts,” said Demis Hassabis, the co-founder and CEO of Google DeepMind, during an Axios event, indicating his determination to compete with OpenAI to the end.

Google Is Satisfied with Gemini 3's Performance

In this competition, Google, which had performed poorly early on, won a round with Gemini 3.

“We are very satisfied with the personality, style, and capabilities of Gemini 3. I like that it answers concisely and will refute you when necessary, rather than just agreeing with everything. If your view is not very reasonable, it will gently push back. I think people can feel that this is a step-change in intelligence, making it more useful,” said Hassabis.

This reminds people of when OpenAI rolled back a GPT-4o update because ChatGPT had become too obsequious. It seems that Google deliberately avoided this problem.

Hassabis is glad to see users trying out various things with Gemini 3. “Once you release new technology, millions or even billions of users will immediately start using it. We are constantly amazed by the cool things users quickly come up with. That's why we love this era, in which scientific research and products are so closely integrated.”

His personal favorite is that Gemini 3 can “single-handedly” complete game development.

“Having started out working on game AI, I think we are now very close to being able to create commercial-grade games with these models in a few hours, something that used to take years. This shows the incredible depth and capability of the model: it can understand very high-level instructions and generate very detailed outputs. Another particularly strong aspect of Gemini 3 is front-end and website development. It is excellent in terms of aesthetics, creativity, and technology.”

“As with all models, the innovation speed is so fast that we spend too much time building new versions and don't have time to explore even one-tenth of the capabilities of the existing models,” Hassabis said. “Every time we release a new version, I have this feeling: I haven't even had time to explore one-tenth of the existing system, and I have to immediately start working on the next-generation R&D, while also ensuring safety, reliability, etc. So, in fact, users are using them more deeply than we do internally.”

Pushing the Scaling Law to the Limit

Regarding the sudden restriction on the free Gemini API, some netizens speculated about the cause. “Is the computing power running out? I've been playing with Nano Banana Pro on AI Studio in recent days, and since the day before yesterday it's been very slow. It takes ages to generate an image.” Others guessed, “With the new model released and the old models still in use, computing resources are tight.”

Although the real reason is unknown, as Hassabis said, Google will always need computing power: “At Google and DeepMind, we do have a lot of resources, but they are not infinite. We always need more computing power, no matter how much we currently have. It's because of these resources that we can conduct such extensive research.”

He still advocates the Scaling Law. When answering the question “Can AGI be achieved only by improving large models and generative AI?”, Hassabis said, “We must push the scale of the current system to the limit. It will at least be a key component of AGI. It's possible that scaling alone is enough, but I guess looking back, we'll find that we still need one or two breakthroughs like the Transformer or AlphaZero.”

Hassabis believes that it will take about five to ten years to reach AGI, but his standard for AGI is very high: it must have all human cognitive abilities, including creativity and inventiveness.

He explained that current LLMs are like doctors or Olympiad champions in some aspects but are still weak in others, such as consistency, continuous learning, long-term planning, and complex reasoning. They have a jagged intelligence. They will eventually acquire these abilities, but it may require one or two major breakthroughs.

Hassabis recalled that in 2017 and 2018, Google had many projects: its own language models, such as Chinchilla and the in-house Sparrow, and the team was the first to discover certain scaling laws, namely the Chinchilla scaling laws. There were also other directions, such as AlphaZero, a pure reinforcement-learning system built on AlphaGo, and architectures inspired by cognitive science and neuroscience. “At that time, we weren't sure which path would lead to AGI the fastest and safest. My task was to build AGI.”

“I'm actually very practical about the path: it has to work. When we saw that scaling was really starting to work, we continuously invested more resources in that R&D branch,” Hassabis said. “This is the beauty of the scientific method. If you're a real scientist, you can't dogmatically stick to your own ideas but must follow empirical evidence.”

The Advantage of Being a Scientist

As a scientist, Hassabis's default way of approaching any problem is the scientific method. He believes the scientific method may be one of the most important ideas in human history: it gave rise to the Enlightenment and modern science and built modern civilization. Its experimental spirit, hypothesis updating, and evidence-driven nature make it an extremely powerful way of thinking, one that applies not only to science but also to daily life and even business.

“We're in what might be the most intense competition in the history of technology, but we stand out through rigor and precision, and the scientific method is at the core of our work. We combine top-notch research, top-notch engineering, and top-notch infrastructure. At the frontier of AI, you must have all three. I think there are not many institutions with world-class capabilities in all three areas, and we are one of them,” Hassabis said. “I always use the scientific method to the fullest, and I think this is also our advantage as a research institution and an engineering team.”

Regarding the competition for AI talent, Hassabis said bluntly, “It's really crazy recently, like what Meta is doing.” But he said that Google is looking for “mission-driven” people. “DeepMind has the best mission and full-stack capabilities. If you want to do the most impactful work, this is the best place. The best scientists and engineers want to work on the most cutting-edge systems, which in turn attracts more top-notch talent.”

Three Main Directions for Google's Future

As one of the leaders in the global large-model AI field, Google draws close industry attention for the directions it chooses to pursue.

According to Hassabis, Google is working hard in three directions.

Firstly, modality fusion. Gemini has been a multimodal model from the start, capable of receiving images, videos, text, and audio, and it is now increasingly able to generate content in these modalities as well. Google is seeing mutually reinforcing effects across modalities. An example is the latest image model, Nano Banana Pro, which shows amazing visual understanding capabilities and can generate very accurate infographics. Hassabis believes that in the next year we will see very interesting combinations of capabilities from the fusion of video and language models.
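As a concrete illustration of the kind of mixed-modality input described above, a single Gemini API request can combine text and image parts. The sketch below follows the publicly documented REST request format; the model name, image path, and prompt are placeholders rather than details from the interview.

```python
# Minimal sketch of a mixed text + image request to the Gemini REST API.
# The model name, image path, and prompt are placeholders; field names
# follow the publicly documented request format.
import base64
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
MODEL = "gemini-1.5-flash"                           # placeholder multimodal model
URL = (f"https://generativelanguage.googleapis.com/"
       f"v1beta/models/{MODEL}:generateContent?key={API_KEY}")

with open("diagram.png", "rb") as f:                 # placeholder local image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "contents": [{
        "parts": [
            {"text": "Explain what this diagram shows, in two sentences."},
            {"inline_data": {"mime_type": "image/png", "data": image_b64}},
        ]
    }]
}

resp = requests.post(URL, json=body, timeout=60)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```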

Among the technologies that Google is researching and has put into use, Hassabis thinks that the multimodal understanding capabilities of these models, especially the processing of video, images, and audio (with a particular emphasis on video), are amazing and under-appreciated.

“If you ask Gemini to process a YouTube video, you can ask it all kinds of questions, and I'm often shocked by its conceptual understanding of the video content. Although it doesn't understand perfectly every time, in most cases, its performance is impressive.”

Hassabis took his favorite movie, Fight Club, as an example. In one scene, someone takes off their ring before a fight. He asked Gemini the meaning of this action, and it gave a very interesting philosophical interpretation: the action symbolizes breaking away from daily life and letting go of worldly constraints. “This kind of deep meta-cognitive insight is one of the powerful capabilities these systems currently possess.”

Additionally, Google has a feature called Gemini Live. You can point your phone at an object and, for example, tell it “You're a mechanic,” and it can help you handle the task in front of you. Ideally, this feature would run on devices like glasses so that your hands stay free. But Hassabis thinks people haven't fully realized the power of this multimodal capability yet.

Secondly, the world model. Hassabis is personally driving the development of this area. “We have a system called Genie 3, which is an interactive video model. You can generate a video and then enter it like entering a game or a simulated world, and it can maintain coherence for about a minute. This is very exciting.”

Finally, the agent system. Hassabis pointed out that current agents are not yet reliable enough to complete full tasks on their own, but there will be significant progress in the next year.

“We have a vision called the universal assistant, and we hope Gemini will eventually become one. You'll see it arrive on more devices in the next year,” Hassabis said. By “universal,” he means not just computers, laptops, or phones, but also glasses and other devices.

“We want to create an assistant that can help you every day, an assistant you'll consult multiple times a day, becoming a part of your life, improving your work efficiency, and enhancing your personal life, such as recommending books, movies, or activities you like. However, current agents can't yet be fully entrusted with a complete task, and you can't be confident that they'll complete it reliably. But I think in a year, we'll see agents that are close to being able to do this.”

Hassabis also mentioned that as agents become stronger and more autonomous, they will be more useful, so all industries will definitely build them. But the more autonomous they are, the more likely they are to deviate from your original instructions or goals. Therefore, ensuring that continuously learning systems stay within the boundaries you set is a very active research area.

He said the good news is that AI now has great commercial value. If you are a model provider selling agents, enterprise customers will require guarantees around reliability, data handling, and how the agents behave toward their customers. If something goes wrong, it won't be “extinction-level,” but you'll certainly lose business. Enterprises will choose suppliers that are more responsible and provide stronger guarantees, so capitalism itself will, to some extent, encourage more responsible behavior. Of course, if this is not done properly, agents may still stray beyond their boundaries. The probability is not zero, and this is one of the biggest uncertainties. Since the probability is not zero, it must be taken seriously and resources must be invested to mitigate it.

Moreover, in an interview, Hassabis also mentioned that in the global competition, he believes the United States and the West still lead at present, but China is not far behind. The latest DeepSeek or other models are very strong, with very capable teams. “The leading advantage may only be 'a few months' rather than 'years'.”

Hassabis said that, setting the chip factor aside, the West still has an advantage in AI algorithm innovation. “Chinese teams are very good at quickly catching up with the current state-of-the-art methods, but so far there hasn't been a comparable breakthrough in proposing completely new algorithms beyond the existing frontier.”

Reference Links:

https://www.youtube.com/watch?v=tDSDR7QILLg

https://x.com/legit_api/status/199779253807443

