Born to flex, diamonds on the neck—there’s nothing AI agents like more than (no) checks (on their power). Yes, yes, we know, AI agents can’t “like” anything. But as the technology is given expanded permissions to shop for users and organizations, we could see attackers manipulate agentic access.

Let me in. It comes down to permissions, according to Jordan Mauriello, CTO at SHI. Major providers like Mastercard and Visa, as well as online systems like Google, have given AI agents the ability to make purchases on behalf of users. But sellers haven’t necessarily restricted what those agents can do.

“People will try to solve a very specific problem with agentic AI,” Mauriello said. “They’ll open up permissions to solve that problem and then forget to go back and lock those permissions down, and now the agent has access to things that maybe it should not have access to.”

What’s also important, Torii CEO Uri Haramati told IT Brew, is identity access management. Agents are hopefully acting with some form of limited-access capacity—knowing which human to alert if something goes wrong is helpful.

Keep reading on IT Brew.—EH
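For the curious, the fix Mauriello and Haramati describe can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API: the agent’s grant carries an explicit allow-list, a spending cap, and an expiry date (so permissions can’t linger forgotten), and anything outside the grant is denied with a named human to alert.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class AgentGrant:
    """Least-privilege grant for a shopping agent (illustrative model only)."""
    allowed_actions: frozenset   # e.g. {"purchase"} — nothing else is permitted
    spend_limit: float           # per-transaction cap
    expires_at: datetime         # grants lapse instead of lingering forever
    human_contact: str           # who to alert when a request falls outside scope

    def authorize(self, action: str, amount: float = 0.0) -> tuple[bool, str]:
        """Deny by default: expired, out-of-scope, or over-limit requests fail."""
        if datetime.now() >= self.expires_at:
            return False, f"grant expired; alert {self.human_contact}"
        if action not in self.allowed_actions:
            return False, f"'{action}' out of scope; alert {self.human_contact}"
        if amount > self.spend_limit:
            return False, f"{amount} exceeds spend limit {self.spend_limit}"
        return True, "ok"

# A narrowly scoped grant: purchases only, $100 cap, one-hour lifetime.
grant = AgentGrant(
    allowed_actions=frozenset({"purchase"}),
    spend_limit=100.0,
    expires_at=datetime.now() + timedelta(hours=1),
    human_contact="procurement@example.com",
)
print(grant.authorize("purchase", 25.0))   # in scope and under the cap
print(grant.authorize("refund", 25.0))     # never granted: denied, human alerted
```

The expiry field is the part Mauriello’s quote argues for: a grant opened to solve one problem should lock itself back down, rather than waiting for someone to remember.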