Orply.

Saltbox One Uses Vercel to Run a Context-Aware Salesforce Agent

Jacob Paris · Shane Smyth · Vercel · Thursday, May 7, 2026 · 12 min read

A Vercel presentation from Saltbox Management’s Shane Smyth makes the case for Saltbox One as an enterprise Salesforce agent built for implementation work, not a generic chat layer. Smyth argues that production Salesforce tasks require project and org context, authenticated tools, model routing, sandboxed execution and explicit human approval before writes. The product he describes uses Vercel’s AI SDK, AI Gateway, Fluid Compute, Sandbox MicroVMs and v0 to let the same chat surface summarize meetings, generate stories, inspect orgs, produce Salesforce code and deploy validated changes.

Saltbox One is built around Salesforce work that needs context

Shane Smyth described Saltbox One as a product Saltbox Management began building for itself while continuing its main work as a Salesforce services firm. Saltbox spends most of its time implementing Salesforce technologies for customers, and the product grew out of that services context: make Salesforce easier to use for business users, and easier to build on for implementation teams.

The problem is two-sided. Business users want to interact with their Salesforce ecosystem in plain language rather than navigating screens. Builders want an agentic interface that can help design, generate, deploy, and validate Salesforce work. Saltbox One puts both use cases in the same chat surface, sharing context across business and builder workflows.

The core distinction is between a generic LLM answer and production Salesforce work. Salesforce projects require the system to understand the customer’s environment, the project context, existing customizations, and what “best practice” should look like in that org. Saltbox’s view is that a generic prompt about Salesforce may return something useful, but not something ready for a production implementation.

You can't just go out onto, you know, any LLM and just ask a question about Salesforce, it probably will get you something, but it won't get you something that is production-ready.

Shane Smyth · Source

The platform reads from meetings, code repositories, Jira, Salesforce, Drive, and other sources, then writes back artifacts such as stories, acceptance criteria, test cases, Apex, Lightning Web Components, scope documents, and status reports. It is intended to operate across the customer’s stack, not merely advise. It can push to scratch orgs, write Jira items, deploy code, and sync with Zephyr.

That operating model depends on Saltbox’s services knowledge being embedded into the agent. During the demo, Smyth returned to that point: Saltbox One is not just a chat layer over Salesforce metadata. It also carries the company’s playbooks for how Salesforce implementation work should be approached.

The infrastructure bet was to make Vercel the user-facing platform

Saltbox had a small product team and an ambitious product: an enterprise AI agent integrating with Salesforce, Jira, GitHub, Google, Confluence, Zephyr, Fireflies, and other systems. Shane Smyth said the team’s bet was to put the entire user-facing surface on Vercel and treat infrastructure as “a non-decision,” so the product team could spend its cycles on the agent rather than on platform work.

The architecture had three main layers. The client is a Next.js application using React 19, Next.js 15 App Router, shadcn/ui, and document and code canvases. The edge and server layer uses AI SDK v6, streaming responses, Fluid Compute, Blob, and a registry of 48 agent tools. The inference layer uses Vercel AI Gateway to route across six models. Behind that are supporting services including Postgres, Elasticsearch, Redis with BullMQ, GCP, and Heroku workers.

Layer | Saltbox One implementation
Client | React 19, Next.js 15 App Router, shadcn/ui, document and code canvases
Edge / server | AI SDK v6, 48 agent tools, streaming responses, Fluid Compute, Blob
AI inference | Vercel AI Gateway with six models
Backing services | Postgres, Elasticsearch, Redis / BullMQ, GCP, Heroku workers
The high-level Saltbox One architecture Smyth presented

Smyth described the AI SDK as the framework for the agentic loop and tool use. His recommendation to teams entering agent development was to use it because it provides a way to work across LLMs while also giving enough scaffolding to build the agent loop and expose tools.
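Smyth did not show code, but the scaffolding he described, a loop in which the model either answers or requests a tool call until the task completes, can be sketched in plain TypeScript. This is an illustration only: the model is stubbed, the tool registry is hypothetical, and the real product builds this loop on AI SDK v6.

```typescript
// Minimal sketch of an agentic loop: the model either answers or requests a
// tool call; tool results are appended to the history until the model
// produces a final answer. All names here are illustrative.
type ToolCall = { tool: string; input: string };
type ModelTurn =
  | { type: "answer"; text: string }
  | { type: "tool"; call: ToolCall };

// Hypothetical tool registry keyed by name (the real product registers 48 tools).
const tools: Record<string, (input: string) => string> = {
  search_project_context: (q) => `results for: ${q}`,
};

function runAgent(
  model: (history: string[]) => ModelTurn,
  userMessage: string,
  maxSteps = 8, // step cap so a misbehaving model cannot loop forever
): string {
  const history = [`user: ${userMessage}`];
  for (let step = 0; step < maxSteps; step++) {
    const turn = model(history);
    if (turn.type === "answer") return turn.text;
    const result = tools[turn.call.tool]?.(turn.call.input) ?? "unknown tool";
    history.push(`tool(${turn.call.tool}): ${result}`);
  }
  return "step limit reached";
}
```

The value of the AI SDK, in Smyth's telling, is that this loop and the tool plumbing come ready-made and work across model providers.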

For model access, Saltbox chose AI Gateway because the product architecture should not assume one best model for every task. The team wanted to plug in new models as they arrived and choose models per situation without changing the surrounding scaffolding.

Saltbox’s model layer used one key across six models. Models in rotation included Claude Opus 4.6 as the default, Claude Sonnet 4 for tool calls, Claude Haiku 4 for classification, and GPT-5-mini, GPT-4.1, and GPT-4o as fallbacks. The point was less the specific list than the abstraction: the model becomes a flag, not a platform decision.

Stop arguing about those models, start choosing the right models for the right tool on the right occasion.

Shane Smyth

Model routing starts cheap and gets stronger when the task demands it

Saltbox One uses complexity-based routing to decide which model should handle a request. Shane Smyth contrasted a simple hello message with a request to build an entire Salesforce flow. Those should not necessarily go to the same model. The routing system is meant to keep responses fast by default while reserving more capable models for tasks that need them.

The slide presented three tiers. First, Saltbox uses deterministic or regex-based fast paths for high-confidence patterns, which cost nothing and add no latency. Second, ambiguous cases can be classified by an LLM. The slide described a roughly 300 millisecond classifier path and noted Haiku with a zod schema; in the Q&A, Smyth said Haiku was being used for the initial classifier. Third, if classification is disabled or errors, the system falls back to heuristics based on length and signal.

Routing tier | Purpose | Latency shown | Model behavior
Regex fast-path | High-confidence direct matches | 0ms | Routes simple cases without an LLM classification call
LLM classifier | Ambiguous cases | 300ms | Uses Haiku initially for classification, then routes to the right model for the task
Heuristic fallback | Classifier disabled or errors | 0ms | Uses length and signal heuristics, with Opus as the stronger/default path for complex work
Saltbox One’s complexity-based model routing

Smyth called the answer to model routing “ever-evolving.” At the time of the discussion, the first pass looked for deterministic signals in the user’s message, including words like “plan” or “investigate.” If the message did not match those patterns, and if it crossed thresholds such as word count, Saltbox One could pass it to an LLM classifier to assess complexity.

The routing also becomes sticky. Once a chat is classified as complex, the slide said it stays on Opus for 30 minutes rather than reclassifying each message. Smyth described the broader idea as a way to give the user the right quality of response when a conversation is clearly moving into difficult work.
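The three tiers and the sticky window can be sketched as a single routing function. This is a reconstruction from the talk, not Saltbox's code: the signal words come from Smyth's examples, but the word-count threshold, length cutoff, and model assignments are illustrative assumptions.

```typescript
// Sketch of complexity-based routing: a deterministic fast path, an optional
// LLM classifier for ambiguous cases, and a heuristic fallback, with a sticky
// 30-minute window once a chat is classified as complex.
type Model = "haiku" | "sonnet" | "opus";

const COMPLEX_SIGNALS = /\b(plan|investigate|design|deploy|build)\b/i;
const STICKY_MS = 30 * 60 * 1000; // complex chats stay on Opus for 30 minutes

interface ChatState { complexUntil: number } // per-chat sticky window

function routeMessage(
  msg: string,
  chat: ChatState,
  now: number,
  classify?: (msg: string) => Model, // the ~300ms LLM classifier, if enabled
): Model {
  if (now < chat.complexUntil) return "opus"; // sticky: skip reclassification
  // Tier 1: deterministic signals, 0ms
  if (COMPLEX_SIGNALS.test(msg)) {
    chat.complexUntil = now + STICKY_MS;
    return "opus";
  }
  if (msg.trim().split(/\s+/).length <= 5) return "haiku"; // trivial messages
  // Tier 2: LLM classifier for ambiguous cases
  if (classify) {
    try {
      const m = classify(msg);
      if (m === "opus") chat.complexUntil = now + STICKY_MS;
      return m;
    } catch { /* fall through to heuristics */ }
  }
  // Tier 3: length/signal heuristics when the classifier is off or errors
  return msg.length > 400 ? "opus" : "sonnet";
}
```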

This design matters because Saltbox One is not a single-purpose assistant. A user may ask for a quick status summary, then ask the same system to investigate an org, generate user stories from a meeting, produce Salesforce code, or deploy to a scratch org. Saltbox’s approach is to make model selection part of product behavior rather than a procurement or vendor-lock-in choice.

Streaming is only one runtime for the agent

Users now expect two kinds of agent behavior, according to Shane Smyth. They expect streaming responses so they can see progress in real time, and they also expect the system to handle longer tasks that may take minutes or hours. Saltbox One therefore runs a streaming runtime and a persistent background runtime behind the same chat experience.

The first use cases were on the streaming side. “Generate user stories from this meeting” was one example of a request that can work well in a synchronous in-chat flow. AI SDK, in Smyth’s account, was especially strong as the primitive for that type of streaming interaction.

But the product expanded into tasks that cannot reasonably remain bound to a browser request. Spinning up an entire Salesforce B2B storefront or designing an Experience Cloud site with custom pages can take much longer. For those, Saltbox One lets the agent choose a background runtime while preserving the same user-facing surface, context, and tool set.

The tool registry is central to the shift from assistant to operator. Smyth showed 48 tools organized across Salesforce, document generation, search and retrieval, stories and tests, integrations, and miscellaneous agent capabilities. Every tool is wrapped with logging for progress streaming, timing, authentication needs, and result-size capping.
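That wrapper pattern can be sketched as a higher-order function. The shape below is an assumption based on Smyth's list of concerns (logging, timing, authentication, result-size capping); the names and the 10,000-character cap are illustrative, not Saltbox's values.

```typescript
// Sketch of a per-tool wrapper: every registered tool gets progress logging,
// timing, an auth check, and result-size capping before the agent sees it.
type Tool = (input: string) => string;

const MAX_RESULT_CHARS = 10_000; // assumed cap; the real limit was not stated

function wrapTool(
  name: string,
  tool: Tool,
  opts: { requiresAuth?: boolean; log?: (line: string) => void } = {},
): Tool {
  const log = opts.log ?? (() => {});
  return (input) => {
    if (opts.requiresAuth) log(`${name}: verifying credentials`);
    const start = Date.now();
    log(`${name}: started`);
    const result = tool(input);
    log(`${name}: finished in ${Date.now() - start}ms`);
    // Cap oversized results so one tool cannot flood the model's context.
    return result.length > MAX_RESULT_CHARS
      ? result.slice(0, MAX_RESULT_CHARS) + "\n…[truncated]"
      : result;
  };
}
```

The design choice is that cross-cutting concerns live in one wrapper rather than being reimplemented inside each of the 48 tools.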

48 agent tools in the Saltbox One tool registry

The Salesforce tools include querying, validation, deployment, scratch orgs, and form data. The document tools generate scope documents, workshop outputs, status reports, pull request documents, and theme pages. Search and retrieval tools include project-context search, document retrieval, and GitHub code search. Story and test tools generate stories, acceptance criteria, test acceptance criteria, and estimates. Other tools include sandbox, bash, charts, v0, planner, and todos.

Smyth’s phrase for this was that the agent has “hands.” The system is not limited to text generation; it can act through authenticated integrations and controlled tool calls.

Vercel Sandbox gives the agent an ephemeral place to do real work

Sandboxing was one of the major unlocks for Saltbox One. Shane Smyth said Salesforce developers often use the Salesforce CLI to interact with orgs, spin up scratch orgs, work with sandboxes, and deploy changes. That terminal-oriented workflow is difficult to reproduce safely inside a web application unless the agent has a real environment in which to assemble files and run commands.

Saltbox uses Vercel Sandbox MicroVMs to provide that environment. In the flow Smyth described, the agent generates Apex, Lightning Web Components, or configuration; a Vercel Sandbox boots for the task; the Salesforce CLI runs inside the VM; code is pushed to a scratch org; tests run; results stream back; and the VM is destroyed.

Saltbox has two main tools using the official Vercel Sandbox. The first spins up a scratch org using the Salesforce CLI. The second validates or deploys changes into a Salesforce environment, also through the CLI. In Smyth’s view, that is the easiest way to handle Salesforce deployments and related activities.

He also distinguished this from a smaller in-memory sandbox that every conversation uses during each agent turn. That in-memory sandbox helps collect files and let the agent reason about what is present in the conversation. The Vercel Sandbox, by contrast, is the real ephemeral Linux environment used for CLI-based work.

Asked later about the security profile of the Vercel Sandbox, Smyth said the sandboxes start blank. Saltbox loads only the files needed for the task. In the flow example, Saltbox One would take the plan it had assembled, create the directory structure Salesforce expects for deployment, place the generated files there, and deploy from that directory. The sandbox does not receive the Saltbox One codebase; it receives only what Saltbox provides for that task.
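The staging step Smyth described, assembling only the task's files into the directory layout the Salesforce CLI expects, can be sketched as a pure function. The Salesforce DX layout (`force-app/main/default/...`) is standard; the artifact shape and function names here are assumptions for illustration.

```typescript
// Sketch of staging a blank sandbox for deployment: only the files the task
// needs are assembled, in the standard Salesforce DX source layout.
interface GeneratedArtifact {
  kind: "classes" | "lwc" | "flows"; // metadata folder per artifact type
  name: string;
  body: string;
}

function stageForDeploy(artifacts: GeneratedArtifact[]): Map<string, string> {
  const files = new Map<string, string>();
  // sfdx-project.json tells the Salesforce CLI where the deployable source lives.
  files.set(
    "sfdx-project.json",
    JSON.stringify({ packageDirectories: [{ path: "force-app", default: true }] }),
  );
  const base = "force-app/main/default";
  for (const a of artifacts) {
    if (a.kind === "classes") files.set(`${base}/classes/${a.name}.cls`, a.body);
    else if (a.kind === "flows") files.set(`${base}/flows/${a.name}.flow-meta.xml`, a.body);
    else files.set(`${base}/lwc/${a.name}/${a.name}.js`, a.body); // LWC bundles are folders
  }
  return files;
}
```

Because the sandbox starts blank, the security surface is exactly this file map plus the credentials for the target org, nothing else from Saltbox One's codebase is present.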

v0 is used for Salesforce generation, not only React

Shane Smyth said v0 has been important to Saltbox in two ways. The team uses it while building Saltbox One itself, and also uses it on the services side when building Salesforce work. Saltbox then connected those uses: from inside Saltbox One, the agent can pass Salesforce-aware context to v0 and get deployable code back.

In Saltbox One’s workflow, v0 can generate Apex classes and triggers, Lightning Web Components, React UI using shadcn conventions, and Salesforce flows. Smyth’s emphasis was that v0 is not limited to React if it receives the right context. Saltbox One can gather information about the Salesforce org, the requested change, screenshots, Jira stories, and other project context, then provide that context to v0 for code generation.

The loop Smyth described was straightforward. The user describes a change, attaches a screenshot, or links a Jira story. The agent calls v0 with Salesforce-aware context. v0 returns Apex, LWC, React, or flow code. The sandbox picks it up, deploys to a scratch org, and validation results stream back to the user.
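The context-gathering step in that loop can be sketched as a prompt assembler. The field names and prompt layout below are assumptions; the actual v0 integration was not shown in the talk.

```typescript
// Sketch of assembling Salesforce-aware context before a v0 call: each
// available piece of project context becomes a labeled section, and empty
// sections are omitted. All names here are illustrative.
interface V0Request {
  instruction: string;     // the user's described change
  orgSummary?: string;     // gathered org metadata
  jiraStory?: string;      // linked story text
  screenshotUrl?: string;  // attached screenshot reference
  target: "apex" | "lwc" | "react" | "flow";
}

function buildV0Prompt(req: V0Request): string {
  const sections = [
    `Target artifact: ${req.target}`,
    `Change requested: ${req.instruction}`,
  ];
  if (req.orgSummary) sections.push(`Org context:\n${req.orgSummary}`);
  if (req.jiraStory) sections.push(`Jira story:\n${req.jiraStory}`);
  if (req.screenshotUrl) sections.push(`Screenshot: ${req.screenshotUrl}`);
  return sections.join("\n\n");
}
```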

In the Q&A, Smyth also discussed the v0 Ambassador Program. He said he would recommend it to people who want to be closer to the community. He described value from access to product team members and from exchanging ideas with other people working in similar areas. Asked what feedback he would give publicly, he pointed to continued progress toward capabilities developers expect from an IDE or local environment. He specifically mentioned sandboxing inside v0 as a major step in that direction.

The demo showed a human approval loop around Salesforce writes

The live demo inside Saltbox One began with a chat interface organized around projects, tools, and attachable context. Projects provide the context for the conversation. Tools represent integrations. Users can add context such as a meeting, a user story, an artifact, a Salesforce org, or a specific Salesforce object.

Shane Smyth first asked Saltbox One to create a Salesforce screen flow that lets a user enter a case comment, creates a CaseComment record linked to the current Case, and shows a confirmation screen saying “Comment added successfully.” The agent retrieved context files and responded that it had checked the instance and found no existing automations on CaseComment, so it did not expect conflicts. It proposed a flow named “Add Case Comment,” with an input variable for the current Case record ID, an entry screen, a create-record step, and a confirmation screen.

The important control point was the review card. After the agent asked for the target org, Smyth approved the default org. Saltbox One produced a card showing what it planned to deploy. Smyth described the card as Saltbox’s way of keeping a human in the loop: the agent can plan, but it does not execute Salesforce changes by itself. The user sees what will be deployed and clicks “Approve & Execute” before any write action.

The deployment attempt failed during the demo. The UI returned: “Flow definition validation failed: recordCreate ‘Create_Case_Comment’ requires an object when using assignments mode.” Smyth used the failure to explain the retry and revision loop. The user can go back and forth with the agent, resolve the issue, retry, and, if a deployment has occurred, revert to a previous version.
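The error message points at a `recordCreates` element that lacks a declared target object. As an illustration only, the demo's actual metadata was not shown, and the variable names here are hypothetical, a Flow metadata fragment that satisfies that validation looks roughly like this:

```xml
<!-- Illustrative fragment: a recordCreates element in assignments mode with
     the target object declared. recordId and commentText are hypothetical
     Flow variable names. -->
<recordCreates>
    <name>Create_Case_Comment</name>
    <label>Create Case Comment</label>
    <object>CaseComment</object>
    <inputAssignments>
        <field>ParentId</field>
        <value>
            <elementReference>recordId</elementReference>
        </value>
    </inputAssignments>
    <inputAssignments>
        <field>CommentBody</field>
        <value>
            <elementReference>commentText</elementReference>
        </value>
    </inputAssignments>
</recordCreates>
```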

Demo moment | What it illustrated
Create a Case Comment screen flow | The agent can plan a Salesforce change from a natural-language request
Review card before execution | The user must approve the proposed deployment before the write action
Validation failure | Failures stay in the workflow so the user can revise, retry, or later revert a deployed change
The first demo centered on controlled execution rather than unattended automation

The same chat surface handled planning, org review, and story export

Smyth then showed how Saltbox One handles analysis work. In a previous conversation using a demo brand called Halston, Shane Smyth had taken a meeting about a new software subscription product line and asked where to start based on the current Salesforce org and out-of-the-box B2B Commerce functionality. The output summarized meeting requirements, described the existing org, generated a Mermaid diagram for the data model, and recommended phased implementation steps with key questions.

The org-review demo focused on technical debt. Smyth asked the agent to review the architecture of an org. The output assessed licenses, installed packages, active integrations, storage, standard and custom objects, and complexity hotspots. One finding flagged the Opportunity object for “automation overload,” with 19 automations firing on record changes: four Apex triggers and 15 flows. Another section identified field documentation debt, including 26 custom fields across core objects missing descriptions and help text.

Finally, Smyth showed user story generation and export. He dropped in a Confluence page that had itself been the output of a previous conversation, then asked Saltbox One to create user stories for phase one. The system, in his description, was combining the document, project context, Salesforce environment, Saltbox best practices, and the broader Salesforce ecosystem to break the phase into manageable stories. The UI then showed draft user stories with descriptions, acceptance criteria, priorities, and a “Send to Asana” action.

The thread across these examples was continuity. The agent could move from a meeting transcript to a phased plan, from an org scan to technical-debt recommendations, and from a planning document to user stories without changing surfaces. For Saltbox, that shared context is part of the product: the same chat can hold advisory work, implementation planning, and delivery artifacts.

Permissions are user-based, with extra approval for Salesforce writes

Jacob Paris asked how agent permissions work when the system connects to Salesforce, Confluence, and other enterprise systems. He framed the issue as tricky because of enterprise SSO and multiple integrations: does S1 have its own app-level permissions, or does it act as the person asking the question?

Shane Smyth said Saltbox debated that question and landed on user-based permissions. When a user comes into Saltbox One, they authenticate as themselves. Actions in Salesforce and other applications use that user’s OAuth or API key, depending on the platform. Smyth said this gives Saltbox the control that the user who performed the action is the user recorded for that action.

Salesforce has another layer. Each connected Salesforce instance starts as read-only. The user can switch it to write permission, but even then, any Salesforce changes still show the approval screen. For Salesforce write actions, the user explicitly approves the action, and Smyth’s permissions model is that the action runs through the authenticated user rather than a separate shared actor.
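That layered model, per-user credentials, read-only by default, and an explicit approval before any write, reduces to a small gate. The sketch below is a reconstruction of the logic Smyth described; field and function names are illustrative.

```typescript
// Sketch of the Salesforce write gate: reads pass through, each connected
// org starts read-only, and writes on a write-enabled org still require
// explicit user approval before executing as the authenticated user.
interface OrgConnection {
  orgAlias: string;
  userToken: string;      // the end user's own OAuth token, not a shared actor
  writeEnabled: boolean;  // every connected instance starts false
}

type Decision = "execute" | "needs-approval" | "denied";

function gateSalesforceAction(
  org: OrgConnection,
  action: { kind: "read" | "write"; userApproved: boolean },
): Decision {
  if (action.kind === "read") return "execute";              // reads need no gate
  if (!org.writeEnabled) return "denied";                    // org is read-only
  return action.userApproved ? "execute" : "needs-approval"; // human in the loop
}
```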

Pricing is less settled. Asked whether Saltbox uses usage-based or seat-based pricing, Smyth said Saltbox was still finalizing the model while rolling the product out to customers. At the time, Saltbox was starting with user-seat-based pricing, with tier limits for conversations, data, and interactions. He added that the token-based tooling ecosystem changes quickly, so the pricing approach may need to adapt.
