
Google has a new AI model, Gemini 2.5 Flash. According to recent reports, this model is less safe for consumers than its predecessor, Gemini 2.0 Flash, which scored better on the company's safety and security benchmarks.
The newer version carries a higher risk of producing explicit content, giving users access to material that is inappropriate and harmful. According to Google's internal benchmarks, Gemini 2.5 Flash shows safety regressions in both text-based and image-based evaluations.
The previous version did not show this behavior. The new model is built to follow user instructions more closely, and it tends to comply with whatever command it receives, even a problematic one.
These evaluations cover both text-to-text and image-to-text safety trials, and the model failed to stay within the guidelines on both tests. No humans are involved in these trials; the testing is fully automated.
After the evaluation, Google reported that Gemini 2.5 Flash does not perform well on text-to-text and image-to-text safety.
At a time when AI companies are working around the clock to improve their software, these test results do not reflect well on Google.
When developing an AI model, companies generally try to build it so that it does not respond to prompts on controversial subjects, political topics, or explicit content such as sexual material.
OpenAI, by contrast, adjusted ChatGPT so that the model could engage with some political debates. The change backfired when the model began generating explicit content.
As a result, underage users were engaging in sexual conversations with OpenAI's ChatGPT. The company casually blamed it all on a bug, an explanation that read more like a cover-up.
In its technical report, Google states that Gemini 2.5 Flash follows instructions more faithfully than Gemini 2.0 Flash. In that faithfulness, the model fails to distinguish between acceptable requests and those that cross into problematic or explicit territory.
The company says it will fix the issue as soon as possible, but until then the model can generate explicit content when asked. That is why underage users may end up in sexual conversations that AI models are normally designed to prohibit.
Addressing the situation has not been easy for the company. Having published a report that lays out the key safety areas where Gemini 2.5 Flash falls short, the tech giant will now try to eliminate the problem.