OpenAI has introduced ChatGPT Agent; Sam Altman warns of risks

OpenAI released its "ChatGPT agent" on July 17, 2025. OpenAI boss Sam Altman took this up in a longer post on X, and at the same time warned of potentially major risks when using this ChatGPT agent. Everything is still experimental.

AI agents are software programs that autonomously perform tasks on behalf of a user or another system based on "artificial intelligence". The agents can independently make decisions, plan and execute tasks, and interact with their environment or other agents, often using large language models (LLMs).
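To make that concept concrete, here is a minimal sketch of such an agent loop in Python. Everything in it (the `call_llm` stub, the `TOOLS` table) is a hypothetical illustration of the plan/act/observe pattern, not OpenAI's actual implementation or API.

```python
# Minimal sketch of an LLM agent loop (plan -> act -> observe).
# call_llm and TOOLS are hypothetical stand-ins, not OpenAI's API.

def call_llm(context: str) -> dict:
    """Toy stand-in for a chat-model call; a real agent would query an
    LLM here and parse its planned step from the response."""
    if "Observation:" in context:
        return {"done": True, "answer": "table booked"}
    return {"done": False, "action": "book", "argument": "table for two"}

TOOLS = {
    "search": lambda q: f"results for {q!r}",  # stand-in web search
    "book":   lambda q: f"confirmed {q!r}",    # stand-in booking tool
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """The loop: the model picks a tool, the agent executes it and
    feeds the observation back into the next model call."""
    context = f"Task: {task}"
    for _ in range(max_steps):
        step = call_llm(context)
        if step["done"]:
            return step["answer"]
        observation = TOOLS[step["action"]](step["argument"])
        context += f"\nObservation: {observation}"
    return "step limit reached"

print(run_agent("book a table in a restaurant"))
```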

Sam Altman on the ChatGPT Agent

OpenAI now also has its own "ChatGPT Agent" as a product, and OpenAI boss Sam Altman addressed its release in a tweet on X on July 17, 2025.

ChatGPT Agent announcement

Altman sees the ChatGPT agent as a new level of performance for AI systems, one that can supposedly perform complex tasks for its users. That sounds good, but the real world looks rather sobering to me.

When people ask Microsoft and other protagonists what they can actually do with these things, the answer is usually that they can book a trip or a table in a restaurant. And so Sam Altman also comes up with the "happy news" that a demo of preparing for a friend's wedding was shown at the launch of the ChatGPT Agent as an example: buying an outfit, booking a trip, choosing a gift, and so on.

This of course has an "innovative" character; after all, according to these people, we are occupied with such tasks every day and would be truly lost in life without an agent. To Altman's credit, I must say that he also mentions an example of analyzing data and creating a presentation for work. But again, that is an area most people rarely operate in.

Altman's warning about risks

However, I was surprised by Sam Altman's comments on the risks of using the ChatGPT Agent. Altman writes that while agents can bring great benefits, they also come with major potential risks.

Altman sees the ChatGPT Agent as experimental. Although many security precautions and warnings have been built in, not all scenarios can be foreseen. Altman, who is not a pure marketing man who brushes aside everything problematic, warns: malicious actors could try to trick users' AI agents into inadvertently revealing private information and/or performing unintended and unforeseeable actions.
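As an illustration of the kind of safeguard this implies, here is a small sketch in which sensitive actions are never executed straight from model output but require explicit user confirmation first. This is a hypothetical example of such a gate, not OpenAI's actual safeguard; the action names are invented.

```python
# Hypothetical confirmation gate: actions with side effects are held
# back until a human approves them, so a tricked model cannot, say,
# mail private data on its own. Not OpenAI's actual mechanism.

from typing import Callable

SENSITIVE = {"send_email", "share_file", "make_payment"}  # assumed names

def execute(action: str, argument: str,
            confirm: Callable[[str, str], bool]) -> str:
    """Run an action, but route sensitive ones through a confirm hook."""
    if action in SENSITIVE and not confirm(action, argument):
        return f"blocked: {action} requires user approval"
    return f"executed {action}({argument!r})"

# Usage: a console prompt as the confirmation hook.
result = execute(
    "send_email", "boss@example.com",
    confirm=lambda a, arg: input(f"Allow {a} -> {arg}? [y/N] ") == "y",
)
print(result)
```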

Then comes a bold and very clear warning. Altman writes that he would explain the ChatGPT agent to his family as innovative and experimental, as a chance to try out the future, but not as something he would use for important applications or with a lot of personal data. Or, in blunter words: the whole thing is a toy you can try out if you have plenty of time, but which should be kept well away from business use.

OpenAI recommends granting agents only the minimum access to data required to perform a task, to mitigate privacy and security risks. Altman envisions people adopting these tools cautiously and slowly, while OpenAI and other developers work to better quantify and mitigate the potential risks involved. As with other new capabilities, society, technology and risk mitigation strategy must evolve together, he says.
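A minimal sketch of what "minimum access" could look like in practice, under the assumption that an agent's capabilities are an explicit tool table: each task only receives the tools it actually needs, and everything else simply isn't there. The tool names are invented for illustration.

```python
# Least-privilege sketch: hand the agent only the tools a task needs.
# ALL_TOOLS and its entries are invented for illustration.

from typing import Callable, Dict

ALL_TOOLS: Dict[str, Callable[[str], str]] = {
    "read_calendar": lambda q: f"free slots matching {q!r}",
    "send_email":    lambda q: f"sent mail {q!r}",
    "read_files":    lambda q: f"contents of {q!r}",
}

def scoped_tools(allowed: set[str]) -> Dict[str, Callable[[str], str]]:
    """Return only the explicitly allowed subset; an agent built on
    this dict cannot call what it was never given."""
    return {name: fn for name, fn in ALL_TOOLS.items() if name in allowed}

# A "book a table" task gets calendar access, but no mail or file access:
tools_for_task = scoped_tools({"read_calendar"})
print(sorted(tools_for_task))  # ['read_calendar']
```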

My thoughts on this

When I read Altman's tweet, I was floored. Someone actually weighs things up and writes: we have something new, we don't yet know what the risks are, but the approach may be useful; test the stuff, but use it with extreme caution and deliberation. That's not at all in line with Microsoft's puffery mentality, which pushes such features on people at the drop of a hat.

And Altman addresses what I've been arguing for months: LLMs can make sense if I have their use under control, limit the purpose of their use and keep the risks in check. Though I am of the opinion that the risks will simply never be fully under control: whatever the agent sees of documents and processes, it will process and, in the event of a successful attack, also pass on to unauthorized third parties.
