Artificial Intelligence

Google Adds Automated Workflows to Opal

by Sakshi Dhingra

Google has rolled out a major upgrade to Opal, its Google Labs “vibe-coding” mini-app builder: a new agent step that lets users create automated, multi-step workflows from plain-English prompts—then plan, route, and execute those workflows with far less manual setup.

What changed in Opal

Until now, Opal workflows were largely static chains of model calls: you defined the steps and picked the models manually. With the update, users can choose an agent inside the “generate” step, and Opal has that agent plan the path to the objective, selecting tools and models as needed.

Google describes the shift as moving Opal from “static” workflows to interactive experiences, where the workflow can adapt mid-run.

The model behind it: Gemini 3 Flash

Google says the new agent capability is powered by Gemini 3 Flash.

“It picks the tools for you” (and Google names examples)

A key point in Google’s announcement: the agent step can automatically trigger the right tools for a job—Google explicitly mentions tools such as:

  • Web Search (for research)
  • Veo (for video generation)

This reduces the “configuration tax” typical in workflow builders where users must pre-wire every integration.
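The tool-selection idea can be sketched in a few lines. This is a toy illustration, not Opal's API: the tool names (`web_search`, `veo_generate`, `text_model`) and the keyword rules are assumptions standing in for decisions a model would actually make.

```python
def pick_tool(task: str) -> str:
    """Route a plain-English task to a tool name.

    A real agent would let the model choose; simple keyword
    rules keep this sketch self-contained and runnable.
    """
    task = task.lower()
    if any(word in task for word in ("research", "find", "look up")):
        return "web_search"      # research tasks -> search tool
    if any(word in task for word in ("video", "clip", "animate")):
        return "veo_generate"    # video tasks -> generation tool
    return "text_model"          # default: plain model call

print(pick_tool("Research recent EV battery news"))   # web_search
print(pick_tool("Make a 10-second product video"))    # veo_generate
```

The point is the shape of the contract: the user states a goal, and routing to an integration happens inside the step rather than being pre-wired by hand.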

The 3 new capabilities that make workflows “agentic”

Google (and third-party coverage) highlights three core upgrades that turn Opal workflows into something closer to “mini apps that run themselves.”

1) Persistent memory (including Google Sheets as a memory layer)

Opal agents can now remember information across sessions, such as a user’s preferences or a running list. Google gives examples like storing brand identity and preferences for repeatable ideation, and TechCrunch cites an example where the agent uses Google Sheets to maintain memory, such as a shopping list for an e-commerce app.

Why it matters: memory is what turns a one-off workflow into a reusable assistant that improves over time.
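A minimal sketch of the pattern, assuming nothing about Opal's internals: here a local JSON file plays the role Google Sheets plays in the TechCrunch example, and the `shopping_list` key is invented for illustration.

```python
import json
import os
import tempfile

class SheetLikeMemory:
    """Toy persistent memory layer backed by a JSON file.

    Stands in for an external store (e.g. a spreadsheet) that
    survives between agent sessions; not Opal's actual storage.
    """
    def __init__(self, path: str):
        self.path = path

    def load(self) -> dict:
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}  # first run: empty memory

    def save(self, state: dict) -> None:
        with open(self.path, "w") as f:
            json.dump(state, f)

path = os.path.join(tempfile.gettempdir(), "opal_memory_demo.json")
mem = SheetLikeMemory(path)

# Each "session" loads prior state, mutates it, and writes it back.
state = mem.load()
state.setdefault("shopping_list", []).append("coffee beans")
mem.save(state)
print(mem.load()["shopping_list"])
```

Because state round-trips through external storage, a second run sees what the first run wrote, which is exactly what turns a one-off workflow into a reusable assistant.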

2) Dynamic routing (branching workflows)

Opal now supports dynamic routing, meaning you can define multiple paths and the agent can transition to the correct next step based on logic/conditions. Google’s example: an Executive Briefing flow that changes behavior depending on whether you’re meeting a new vs. existing client (e.g., web research for new clients vs. internal notes for existing context).
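Dynamic routing reduces to a branch on runtime state. A hedged sketch of Google's Executive Briefing example follows; the step names are illustrative, not Opal's.

```python
def route_briefing(client_status: str) -> str:
    """Pick the next workflow step from the client's status.

    Mirrors the new-vs-existing-client branching in Google's
    example; step names here are made up for illustration.
    """
    if client_status == "new":
        return "web_research"      # unfamiliar client: research the company
    if client_status == "existing":
        return "internal_notes"    # known client: pull prior meeting context
    return "ask_user"              # unclear input: fall back to clarification

print(route_briefing("new"))        # web_research
print(route_briefing("existing"))   # internal_notes
```

The difference from a static chain is that the next step is chosen at run time from the inputs, rather than fixed when the workflow is built.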

3) Interactive chat (follow-ups while executing)

Opal agents can ask clarifying questions or offer choices during execution if the prompt or inputs are incomplete. Google illustrates this with creative flows like room redesign, where the agent iterates via dialogue to refine outputs rather than delivering a single “one-shot” result.
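The clarification behavior can be sketched as: check the inputs against what the task needs, and emit follow-up questions for whatever is missing instead of guessing. The field names below are hypothetical, chosen to echo the room-redesign example.

```python
# Hypothetical required inputs for a room-redesign flow.
REQUIRED_FIELDS = ("room_photo", "style", "budget")

def clarifying_questions(inputs: dict) -> list[str]:
    """Return one follow-up question per missing required field.

    An agent would phrase these with a model; a template keeps
    the sketch deterministic and runnable.
    """
    return [
        f"Could you provide your {field.replace('_', ' ')}?"
        for field in REQUIRED_FIELDS
        if field not in inputs
    ]

print(clarifying_questions({"room_photo": "room.jpg"}))
```

An empty list means execution can proceed; a non-empty list means the agent pauses and asks, which is the dialogue loop Google describes instead of a single one-shot result.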

Examples Google used to explain the upgrade

Google’s own post leans heavily on concrete “before vs after” examples:

Storytelling: from rigid templates to adaptive narratives

Previously, a storybook workflow required you to predefine things like page counts and user questions. Now, Google says you can build a “Visual Storyteller” where the agent decides what details it needs, suggests plot points, and shapes the narrative dynamically.

Interior design: from one-way output to back-and-forth collaboration

Google describes an interior design flow that becomes more collaborative: upload a room photo, describe a vision, get a concept, then refine it through iterative feedback—where the agent can even research niche sub-styles to better match intent.

Availability and rollout context

Google announced the agent step on February 24, 2026, and coverage says it’s available to Opal users as part of this upgrade.

This update also builds on Opal’s fast expansion timeline:

  • July 2025: Opal introduced for U.S. users

  • October 2025: expanded to 15 more countries

  • November 2025: expanded to 160+ countries

  • December 2025: added to the Gemini web app, enabling no-code custom apps via a visual editor

Why this matters

Opal’s agent step is Google’s clearest signal yet that it’s chasing a new category: AI-native workflow apps, where “describe the goal” becomes the UI, and the system handles tool selection, planning, and iterative clarification.

TechCrunch explicitly positions Opal in the same broader wave as natural-language app builders like Lovable and Replit, but Google’s differentiation is its ability to natively pull in Google-ecosystem tools and models (web search, Sheets, Veo) with minimal friction.