We need to be honest: the "Chatbot" interface is getting old.
For the last two years, we’ve been stuck in a loop. You type a prompt, wait for the cursor to blink, get a wall of text, realize it missed a requirement, copy-paste the error back in, and repeat. It’s not "Artificial Intelligence"—it’s Artificial Babysitting.
But the leaks coming out of Anthropic this week suggest that the era of the "Chatbot" is officially ending.
Anthropic is quietly testing a new "Tasks" Mode for Claude, and it fundamentally changes how we interact with LLMs. It’s no longer about talking to the machine. It’s about assigning work to it.
If you’ve been waiting for the "Agentic Future" we were promised, this is the UI update that actually delivers it.
According to reports from TestingCatalog, Anthropic is testing a dedicated "Agent Mode" toggle that replaces the standard chat window with a structured dashboard.
Instead of a blank "How can I help you?" box, you are greeted with five distinct, pre-defined workflows.
The most critical update isn't the modes; it's the Sidebar.
In the leaked screenshots, there is a persistent Progress Tracker on the right side.
This solves the biggest problem with ChatGPT and current Claude: loss of state. We’ve all had a session run so long that the model forgets the very first constraint we gave it. By visualizing the "Task Queue," Anthropic is giving Claude a long-term memory we can actually see.
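To make the idea concrete, here is a minimal sketch of what a visible task queue might look like under the hood. This assumes nothing about Anthropic's actual implementation; every name here (`Task`, `TaskQueue`, the status markers) is hypothetical, purely to illustrate state that lives outside the conversation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"

@dataclass
class Task:
    description: str
    status: Status = Status.PENDING

@dataclass
class TaskQueue:
    tasks: list[Task] = field(default_factory=list)

    def add(self, description: str) -> None:
        self.tasks.append(Task(description))

    def render(self) -> str:
        # What a sidebar "Progress Tracker" might display:
        # one checkbox line per task, driven by explicit state,
        # not by whatever the model happens to remember.
        marks = {Status.DONE: "[x]", Status.RUNNING: "[~]",
                 Status.PENDING: "[ ]", Status.FAILED: "[!]"}
        return "\n".join(f"{marks[t.status]} {t.description}" for t in self.tasks)

queue = TaskQueue()
queue.add("Parse the spec")
queue.add("Draft the migration script")
queue.tasks[0].status = Status.DONE
print(queue.render())
# [x] Parse the spec
# [ ] Draft the migration script
```

The point of the sketch: the constraints live in a data structure, not in the context window, so they can't silently fall out of scope.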
We are witnessing the transition from LLMs (Large Language Models) to LAMs (Large Action Models).
Google is rumored to be working on "Jarvis." OpenAI is working on "Operator." But Anthropic seems to be the first to put a usable User Interface on it.
For developers, this changes the game.
The Old Way (Chat Mode): prompt, read the wall of text, spot the missed requirement, paste the error back in, and repeat until it finally works.
The New Way (Tasks Mode): define the outcome, assign the task, and review the steps as they check off in the tracker.
The friction of execution is being offloaded to the AI.
Of course, this is terrifying.
If you’ve used Anthropic’s "Computer Use" API (where Claude actually moves your mouse), you know it’s prone to hilarious failures. It might get stuck in a loop clicking a pop-up ad, or accidentally delete a file because it "thought" it was cleaning up.
In Tasks Mode, if we stop verifying the intermediate steps, we risk "Compound Hallucination."
If Step 1 is slightly wrong, Step 5 will be catastrophic. The "Progress Tracker" sidebar gives us a false sense of security. Just because the AI checked the box doesn't mean it did the job well.
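The arithmetic behind that warning is brutal and worth doing once. Even if each step is independently right 90% of the time, the chance that a five-step task comes out clean is 0.9 to the fifth power:

```python
# Back-of-envelope: independent per-step reliability compounds
# multiplicatively across a multi-step task.
per_step = 0.9   # 90% chance any single step is correct
steps = 5
end_to_end = per_step ** steps
print(f"{end_to_end:.2f}")  # 0.59
```

A 90%-reliable agent becomes a 59%-reliable pipeline after five steps. That's why a row of checked boxes in a sidebar is not the same thing as a verified result.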
The skill set for 2026 isn't "Prompt Engineering." It's Task Engineering.
You won't need to know the perfect magic words to get a good poem. You will need to know how to break a complex system into atomic, verifiable tasks that an Agent can execute without burning your house down.
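One way to picture "task engineering," sketched under my own assumptions (the pipeline shape and all function names here are hypothetical, not anything Anthropic has shown): every atomic task ships with a machine-checkable verifier, so a bad step halts the run instead of poisoning everything downstream.

```python
from typing import Callable

# Each task is (name, execute, verify): execute produces a result,
# verify decides whether the pipeline is allowed to continue.
TaskSpec = tuple[str, Callable[[], str], Callable[[str], bool]]

def run_pipeline(tasks: list[TaskSpec]) -> list[str]:
    completed = []
    for name, execute, verify in tasks:
        result = execute()
        if not verify(result):
            # Fail fast: a wrong Step 1 never reaches Step 5.
            raise RuntimeError(f"Verification failed at: {name}")
        completed.append(name)
    return completed

done = run_pipeline([
    ("extract config", lambda: '{"port": 8080}', lambda r: r.startswith("{")),
    ("check port",     lambda: "8080",           lambda r: r.isdigit()),
])
print(done)  # ['extract config', 'check port']
```

The hard part isn't writing this loop; it's decomposing the job so that each `verify` is actually checkable. That decomposition is the skill.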
Claude’s new Tasks mode is just a UI update, but it’s a signal: The days of the lonely text box are numbered.
Get ready to become a Manager.
Liked this breakdown? Smash that clap button and follow me for more leaks from the Agentic AI revolution.