
Anthropic just redrew the boundary on who gets to use its AI and how. The beauty industry crossed $500 billion on the back of AI-driven discovery. And the biggest retail conference of the year made one thing clear: the industry knows agents are coming, but too many companies are still confusing a smarter search bar with actual autonomous action.
This edition is a cross-industry look at what actually happened this week.
— Astrid
COVER STORY
Anthropic Just Cut Off Third-Party AI Agents. Here's How It Affects You.
Effective April 4, Anthropic ended Claude Pro and Max subscribers' ability to power third-party AI agent tools through their subscription, starting with OpenClaw. The notice was less than 24 hours. Enforcement went live at noon Pacific, and most of the developer community found out through an X post from Boris Cherny, Head of Claude Code, the night before.
OpenClaw is an open-source AI agent framework with over 247,000 GitHub stars. It lets you run autonomous agents that handle emails, calendars, Slack, WhatsApp, Telegram, and more, all connected to your Claude subscription. Power users had been routing entire daily workflows through it at flat-rate pricing. A Claude Max subscriber paying $200 a month could pipe unlimited Opus requests through an automated agent all day. At API rates, that same usage could cost thousands.
Here's what's still working: direct Claude Code use in your own terminal under Pro, Max, Team, or Enterprise is unchanged. The CLI is still yours. What changed is routing that CLI through a third-party harness to run automated agent swarms at subscription pricing. Anthropic is offering a one-time credit equal to one month's fee, 30% off pre-purchased extra usage bundles, and a full refund option. The policy will extend to all third-party harnesses in the coming weeks.
OpenClaw's creator Peter Steinberger, who moved to OpenAI in February, called it a betrayal of open-source developers. He and board member Dave Morin pushed back, but succeeded only in delaying enforcement by a week. Community consensus on Reddit leaned toward Anthropic being right on the policy. The frustration is the 24-hour notice, not the decision itself.
ASTRID’S TAKE
A lot of the AI productivity gains of the last 18 months were built on subsidized compute. Anthropic was eating the difference between what you paid and what your agent workloads actually cost in order to grow adoption. With that subsidy gone from one of the top-performing LLMs, the question for every small business and solo operator running AI workflows right now is: do you actually know what your stack costs at full API rates? If the answer is no, this is the week to find out. See our How to Compute Compute section for ways to limit your costs.
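If you want to run that check yourself, here is a minimal back-of-the-envelope estimator for what an always-on agent workload costs at metered API rates. The per-token rates and usage figures below are illustrative placeholders, not Anthropic's actual price list; substitute your provider's current published rates and your own logs before trusting the output.

```python
# Rough monthly API-cost estimator for an agent workload.
# All rates and volumes here are hypothetical examples.

def monthly_api_cost(
    requests_per_day: int,
    input_tokens_per_request: int,
    output_tokens_per_request: int,
    input_rate_per_mtok: float,   # dollars per million input tokens
    output_rate_per_mtok: float,  # dollars per million output tokens
    days: int = 30,
) -> float:
    """Return the estimated monthly cost in dollars at metered rates."""
    total_requests = requests_per_day * days
    input_cost = total_requests * input_tokens_per_request / 1e6 * input_rate_per_mtok
    output_cost = total_requests * output_tokens_per_request / 1e6 * output_rate_per_mtok
    return input_cost + output_cost

# Example: an agent firing 500 requests a day, 4k tokens in / 1k out,
# at placeholder frontier-model rates of $15/M input and $75/M output.
cost = monthly_api_cost(500, 4_000, 1_000, 15.0, 75.0)
print(f"${cost:,.2f}/month")  # → $2,025.00/month
```

Even with conservative placeholder numbers, a workload like this lands an order of magnitude above a $200 flat subscription, which is exactly the gap the new policy closes.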
DID YOU KNOW?
You can book Astrid for your next company keynote:
The 150 MPH Decision: Where Human Instinct Meets Machine Speed is a power-packed, high-performance AI keynote for any industry navigating what comes next. Entertaining, actionable, and tailored to your audience.
Trusted by global leaders from CES to the UN stage. Built for enterprise all-hands, retreats, and high-impact industry events. A few dates remain in 2026. Visit AstridPilla.com to book.
And now a word from our sponsor
Your first HR system, implemented right
Rolling out your first HR tool? Get a step-by-step guide to avoid common mistakes, drive adoption, and build a scalable HR foundation.
A WORD ABOUT OUR SPONSORS: When you click the ad above, we may earn a commission. Don't hesitate to click and explore; it helps us keep this newsletter free. Your support means a lot!
POLICY AND REGULATORY
States Aren't Waiting for Washington. And Now OpenAI Isn't Either.
Tennessee's governor signed SB 1580 this week, prohibiting the deployment of any AI system that represents itself as a qualified mental health professional. The vote was 32-0 in the Senate and 94-0 in the House. Bipartisan consensus on AI safety legislation is moving faster at the state level than at the federal level.
Georgia has three AI bills on the governor's desk ahead of the April 6 adjournment, covering chatbot disclosure, child safety, and the prohibition of AI-only healthcare coverage decisions.
Nebraska is attaching an AI chatbot safety bill to a popular agricultural data privacy act.
South Carolina, Alabama, and Wisconsin all have active AI bills in play.
Seventy-eight chatbot bills are currently alive in 27 states. If you operate in regulated industries or use AI in hiring, healthcare decisions, or customer-facing roles, state-level compliance monitoring now belongs inside your governance structure. We're no longer at the "Washington will figure it out" stage of AI.
On the industry side, OpenAI published a 13-page policy paper this month called Industrial Policy for the Intelligence Age. I was honestly surprised by the level of specificity from them. There's a Public Wealth Fund proposal that would give every citizen an equity stake in AI-driven growth, portable benefits that follow workers across jobs instead of dying with their employer, and a "Right to AI" framed the same way we once framed the right to electricity. Whether any of it gets implemented is a different conversation. But OpenAI is putting specific ideas on the table and asking to be held to them. It’s worth watching closely. I’m excited to contribute to ensure women are included in the conversation. More on that soon.
SPEED ROUND
Five Stories. Five Industries.
Beauty
The global beauty market crossed $500 billion, and AI drove it there. NielsenIQ's State of Beauty 2026 report, released March 31, shows the market grew 10% year-over-year. E-commerce is expanding six times faster than in-store sales. More than half of consumers are now using AI-enabled shopping tools, and 49% are already receiving beauty product recommendations directly from generative AI. The path to purchase in beauty is no longer a shelf or a store. It's an interactive conversation.
Retail / Shoptalk 2026
10,000 retail leaders just spent three days in Las Vegas debating agentic commerce, and the honest takeaway is that urgency is high and clarity is low. Shoptalk 2026 ran March 24-26 with Sephora, Coach, Gap, Klarna, Novi, Meta, and OpenAI all on the main stage. Novi's CEO made the sharpest point: AI agents are a new storefront, and if your brand isn't discoverable there, it risks becoming invisible. Gap called out AI for fit and sizing as its number one priority, targeting the highest-friction point in the purchase journey. The Stripe recap from the week put it plainly: for all the momentum, most retailers are still working out their agentic strategy, with a persistent gap between what they see happening now and what they can actually plan for.
Real Estate
Zillow launched "AI Mode" on March 25. It's not quite what we were watching for. The feature lets buyers and renters search by natural language, get affordability guidance, estimate renovation costs, compare neighborhoods, and schedule tours. It remembers preferences across sessions and has a Fair Housing Classifier built in as a real-time guardrail. Investment firm William Blair's read: not significantly differentiated from CoStar's competing Homes AI, but a genuine improvement to the search experience. When we said real estate was a prime vertical for AI agents, we meant something that acts on your behalf, the way Bumble's Bee was designed to reach out, schedule, and negotiate without you. What Zillow launched is a smarter search bar with memory. Useful. Not an agent. The industry is still collapsing "conversational AI" and "autonomous action" into the same sentence, and that confusion has a real cost for anyone making vendor decisions right now.
Enterprise / JPMorgan Chase
JPMorgan Chase just started tracking whether its 65,000 engineers are using AI. That's the signal every executive needs to register. JPMorgan has instructed its technologists to incorporate AI tools like ChatGPT and Claude Code into their daily workflows, and internal systems are now classifying employees as "light users" or "heavy users" based on actual usage data. This is now policy at one of the largest financial institutions in the world. When JPMorgan starts measuring AI adoption the same way it measures performance, the message to every other organization is clear: AI literacy is no longer optional, and it will be tracked. If your company doesn't have a similar conversation happening at the leadership level right now, it's worth putting on the next meeting agenda.
Anthropic
The "Safety-First" Company Leaked Its Own Blueprints. And What Was Inside Is Worth Talking About. On March 31, Anthropic accidentally published 512,000 lines of Claude Code's source code to the public npm registry, including 44 hidden feature flags for unreleased capabilities. This came three days after separately leaking nearly 3,000 internal files, including draft blog posts about an upcoming model called Mythos. Anthropic confirmed both were "human error."
Here's what developers found inside. Claude Code scans your prompts for swear words and phrases like "this sucks" and "so frustrating." Anthropic's team calls it internally the "f***s chart" and uses it as a dashboard to measure whether users are having a good experience. There is also an "Undercover Mode": when Claude Code contributes to public or open-source repositories, it's instructed to scrub any reference to Anthropic or Claude from commit messages and pull requests, so the work appears to have been written entirely by a human. One developer called it "a one-way door." Anthropic says it is a leak-prevention tool for internal codenames. Critics say it normalizes unattributed AI contributions to open-source software. I can see the validity of both… Even I keep my Claude Code-assisted commits on GitHub labeled "Co-Authored by Claude Code".
YOUR WEEKLY POWERUP
Give this to your Go-To AI Today
Know What Your AI Stack Actually Costs
"You are my AI Chief of Staff. I run a small business and I have AI tools connected to API keys that I may not fully understand the cost of. I want you to act as my AI cost auditor this week. Ask me what tools I'm running, what models they connect to, and how often they fire. Then build me a simple cost estimate and tell me where I'm likely overpaying for compute I don't need. Be direct. Do not soften the numbers. Then tell me the three highest-impact changes I could make this week to cut my AI costs without cutting my output. Treat me like a CEO making an infrastructure decision, not a beginner who needs hand-holding."
On the Radar
HumanX 2026 just ran April 6-9 in San Francisco. Described as the Davos of AI for enterprise leaders, it's where CTOs, VPs, and executives are working out how to actually integrate AI into their organizations.
Coming up: Google Cloud Next is April 22-24 in Las Vegas, focused on agentic systems and Gemini enterprise deployment. IBM Think follows May 4-7 in Boston, with sessions on agent architectures and real-world deployments from Citi, Eli Lilly, and Bloomberg. Both worth your calendar.

