
How we landed on $9 a month: pricing for a product that says no

Per-grab pricing was the obvious model and the wrong one. The four-month path to a flat $9 tier, the three experiments that failed along the way, and the customers we deliberately price out.

The smartordercapture team · Founders · May 14, 2026 · 8 min read
A small stack of plain paper price tags arranged on a neutral surface — the simplest possible billing artifact.

Pricing is the question we kept getting wrong, and we only stopped getting it wrong when we wrote down what we actually wanted the product to do and noticed that the most obvious pricing model was working against us.

This is the story of how we landed on $9 a month flat, with no per-grab fees, no per-seat fees, and a free tier that's deliberately generous. It's also the story of the three pricing experiments we ran along the way that didn't work, and of a category of customer we now actively price out.

The obvious model

When you sell a tool that automates phone interactions, the first pricing model anyone proposes is per-action (per-grab, in our vocabulary): a fraction of a cent per UI tap, a few cents per workflow run. It's how Zapier prices, it's how Twilio prices, it's how essentially every "we do a thing on your behalf" infrastructure tool prices. The numbers are easy to model, the unit economics are clean, and the sales motion writes itself.

We modeled it. At reasonable assumptions — 200 actions per workflow, $0.002 per action — a power user paying $50–100 a month would be funding the entire free tier. The math worked. We mocked the billing UI. We were about to ship it.
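
For the curious, here's the back-of-envelope version of that model, as a minimal sketch in Python. The per-action rate and actions-per-run figure are the assumptions quoted above; the run counts are hypothetical:

    # Back-of-envelope per-action pricing, using the assumptions
    # quoted above. Illustrative only -- not billing code.
    PRICE_PER_ACTION = 0.002   # dollars per action
    ACTIONS_PER_RUN = 200      # actions in a typical workflow run

    def monthly_bill(runs_per_month: int) -> float:
        """Dollars billed for a month of workflow runs."""
        return runs_per_month * ACTIONS_PER_RUN * PRICE_PER_ACTION

    print(monthly_bill(30))    # one daily workflow: $12/month
    print(monthly_bill(125))   # $50/month = 25,000 actions
    print(monthly_bill(250))   # $100/month = 50,000 actions

The $50–100 power user is running 25,000 to 50,000 actions a month, which is exactly the usage profile the next paragraphs worry about.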

Then someone on the team asked the question we'd been trying, and failing, to ignore: what kind of customer pays us the most money under this model?

The answer is the customer who fires the most actions. Which is the customer running unattended, high-frequency, machine-driven loops against other apps. Which is precisely the customer we'd spent two years building the denylist to keep out. The pricing model would have aligned our revenue with the use cases we'd publicly refused to support, and the people who pay us the most would have been the people we most distrusted.

That's not a values problem. That's a strategic problem. A company whose revenue is concentrated in customers it doesn't want spends every quarter relitigating its own product positioning, and eventually the loud minority of high-volume users wins the internal argument about what to ship next. We've watched it happen to peers. We decided not to set ourselves up for it.

Experiment one: capped per-grab

Our first attempt to escape per-grab pricing was to keep it, but cap it. The first 5,000 actions per month were free, the next 5,000 were a flat $5, and after that you got rate-limited. This satisfied the "we don't want unbounded usage" intuition but didn't solve the actual problem: the cap was still in the same currency as the abuse, and the customer who hit it was still the customer using the tool for what it wasn't meant for.
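
As a billing function the scheme was simple enough; here's a minimal sketch of it as described (the 500-step daily workflow in the example is hypothetical, standing in for the "several hundred steps" case below):

    # Capped per-grab billing as described above: first 5,000 actions
    # free, the next 5,000 a flat $5, rate-limited past that.
    # A sketch of the scheme, not the production billing code.
    FREE_ACTIONS = 5_000
    CAPPED_ACTIONS = 10_000   # free tier plus the paid band
    PAID_FLAT_FEE = 5.00      # dollars

    def bill_and_limit(actions_this_month: int) -> tuple[float, bool]:
        """Return (dollars owed, whether the account is now rate-limited)."""
        if actions_this_month <= FREE_ACTIONS:
            return 0.00, False
        if actions_this_month <= CAPPED_ACTIONS:
            return PAID_FLAT_FEE, False
        return PAID_FLAT_FEE, True   # over the cap: throttled

    # A hypothetical 500-step daily reconciliation workflow:
    print(bill_and_limit(500 * 10))   # (0.0, False) -- day 10, free
    print(bill_and_limit(500 * 15))   # (5.0, False) -- day 15, paying
    print(bill_and_limit(500 * 21))   # (5.0, True)  -- day 21, throttled

The failure mode is visible right in the sketch: the second return value punishes exactly the steady daily-workflow customer.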

Worse, the cap punished a category of customer we did want: small businesses running a daily reconciliation workflow with several hundred steps in it. They'd hit the cap by the third week of the month and have to either upgrade for reasons that felt arbitrary to them or stop using the product for the last week. We ran this for six weeks. The cancellation rate among legitimate-use customers was about 4× what it had been before billing existed at all. We pulled it.

Experiment two: per-workflow tiers

Next we tried billing per active workflow. Three workflows free, ten for $5/month, unlimited for $15/month. The intuition was that the unit of value is the workflow, not the action, so the unit of billing should be too.
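
A minimal sketch of that tier function, as described (illustrative, not our billing code):

    # Per-workflow tiers as described above: three active workflows
    # free, ten for $5/month, unlimited for $15/month. Illustrative.
    def monthly_price(active_workflows: int) -> float:
        """Dollars per month for a given number of active workflows."""
        if active_workflows <= 3:
            return 0.00
        if active_workflows <= 10:
            return 5.00
        return 15.00

    # The quota counts workflows, not their size, so merging ten
    # small workflows into one giant one stays free:
    print(monthly_price(10))   # 5.0
    print(monthly_price(1))    # 0.0 -- same logic, one huge workflow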

User behavior moved the wrong way. Users on the free tier created exactly three workflows and then started consolidating logic into giant single workflows with deeply nested branches to avoid hitting the limit. The workflows got harder to debug, our support volume went up, and one user filed a bug that boiled down to "my single workflow has 87 branches and the trace UI is unusable". Yes. We could see that.

The lesson there was that any quota expressed in user-visible units changes the shape of what users build. A per-workflow quota optimizes for fewer, larger workflows. A per-action quota optimizes for fewer, shorter actions. Both are the wrong optimization target, because both are optimizing for the billing metric rather than for the user's actual task.

Experiment three: usage-based, but in time

The third attempt tried a different unit entirely: cumulative engine runtime. Twenty minutes of total workflow execution time per month free, then $0.001 per second. This was philosophically the cleanest model — it taxed the resource we actually cared about (battery, CPU, screen-on time) and was independent of how the user structured their workflows.
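
A minimal sketch of that scheme, with the dashboard figure from the next paragraph plugged in:

    # Runtime-based billing as described above: 20 minutes of
    # cumulative engine runtime free per month, then $0.001/second.
    # Illustrative sketch only.
    FREE_SECONDS = 20 * 60       # 20 minutes
    PRICE_PER_SECOND = 0.001     # dollars

    def bill(runtime_seconds: int) -> float:
        """Dollars owed for a month of cumulative engine runtime."""
        return max(0, runtime_seconds - FREE_SECONDS) * PRICE_PER_SECOND

    used = 14 * 60 + 22          # the "14 minutes 22 seconds" state
    print(bill(used))            # 0.0 -- still inside the free tier
    # "Should I add this new workflow?" needs that workflow's runtime,
    # which the user can't know without running it first.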

It also turned out to be completely uninterpretable to users. A pricing dashboard that says "you have used 14 minutes 22 seconds of your 20 minutes" requires the user to translate that into "should I add this new workflow or not?", and the translation is impossible without simulating the workflow first. We watched users avoid creating workflows because they didn't know how much "time" the workflow would cost them. The free tier was a behavioral disincentive, which is the opposite of what a free tier should be.

What we landed on

$9 a month, flat. Three workflows free, unlimited paid. All triggers, all actions, all execution local. No per-action billing, no time billing, no per-seat billing, no upsells for "advanced" actions. The price is the price, and it doesn't change based on what you build.

The reasons:

  • It's interpretable. The user knows what they're paying and what they're getting before they build anything. There is no calculator on the pricing page. There doesn't need to be.
  • It decouples our revenue from abuse. A high-volume scraping bot pays us exactly the same as a small business running a daily report. There is no revenue incentive to look the other way on abuse, which means engineering and product can spend their attention on the customers we like.
  • It rewards us for product quality, not engagement. Under per-grab pricing, a customer running fewer actions is a customer paying us less. Under flat pricing, a customer who finds a faster way to do their task is just as good a customer. We can ship efficiency improvements without internally arguing about whether we're "leaving money on the table."
  • It lets the free tier be honest. Three workflows is enough for a real user to validate the product. They don't have to hit a paywall to discover whether it works for them; they hit the paywall when they've decided it does work and want to do more of it.

The customers this priced out

Two categories.

First, enterprise buyers who wanted per-seat pricing. We get an email every few weeks asking what our team plan is. We don't have one. The honest answer is that smartordercapture isn't an enterprise product yet — we don't have SSO, we don't have audit-log export, we don't have a SOC 2 report. A team can absolutely buy nine seats at $9 and a few do, but there's no per-seat dashboard and there won't be one until the rest of the enterprise story is real. Until then, the $9 plan is the plan, and the enterprise conversations end politely.

Second, the volume-arbitrage customer. Someone who wanted to drive 50,000 workflow executions a day was paying us $9 and laughing about it, until they hit the per-IP and per-account rate limits we shipped at the same time as the flat pricing. The rate limits aren't a billing tool; they're an abuse-mitigation tool. If you've designed your business model around the assumption that a $9/month tool can replace a $5,000/month dedicated automation platform, the rate limits will surface that mismatch quickly, and your account will be flagged for review.
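
A token bucket per account or IP is the standard shape for limits like these; here's a minimal sketch of the idea, where the refill rate and burst size are made-up illustrative numbers, not our production limits:

    import time
    from collections import defaultdict

    RATE = 1.0    # tokens refilled per second (hypothetical)
    BURST = 60    # bucket capacity (hypothetical)

    # key -> (tokens remaining, time of last refill)
    _buckets: dict[str, tuple[float, float]] = defaultdict(
        lambda: (float(BURST), time.monotonic())
    )

    def allow(key: str) -> bool:
        """Spend one token for `key` (account id or IP); False = throttled."""
        tokens, last = _buckets[key]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)
        if tokens < 1.0:
            _buckets[key] = (tokens, now)
            return False
        _buckets[key] = (tokens - 1.0, now)
        return True

At 50,000 executions a day (roughly 0.6 a second sustained), a bucket like this starts returning False on every burst, which is the signal that flags the account for review.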

Both of those are features, not bugs. The pricing model is the place where you encode who you want as customers, and we'd rather lose the wrong customers at signup than have them as our most profitable accounts.

The objection we hear most

"You're leaving money on the table." Always from someone with no skin in the game, usually with a follow-up suggestion to add a Pro tier at $29 with "advanced features" gated behind it.

We probably are leaving money on the table by some definition. A pricing consultant would absolutely find a way to extract another $10 a month from our top decile. The reason we don't is that doing so would require either (a) shipping arbitrary feature differentiation between tiers, which complicates every product decision downstream, or (b) introducing usage-based components, which we've already explained we don't want. Both routes lead back to incentives that pull the product in directions we don't want it pulled. Leaving that money on the table is the price of keeping the team's attention on the work we actually believe in.

What changes if we get traction

We get this question a lot, mostly from investors. The honest answer is: not much.

We aren't planning to raise the price. The cost of running a workflow on a user's own phone is roughly zero to us; the cost of running the marketing site and the events ingestion pipeline scales sub-linearly with users. There's no point at which we suddenly need 3× revenue per user to keep the lights on.

We aren't planning to add usage-based components. The whole point of the flat tier is to be the boring answer to "what does this cost?" Adding a usage meter later would be a betrayal of that promise.

We will probably add an enterprise tier when SSO and the rest are real. That tier will be priced however it needs to be priced to make the enterprise sales motion work, which is its own conversation. It won't change the $9 plan.

The most likely change is in the other direction: making the free tier even more generous as the marginal cost of supporting a free user drops further. We'd rather have a million people using the free tier and recommending it to one paying coworker than a tighter funnel that converts more aggressively.

Pricing is the part of the product that's hardest to A/B test, because the cohorts diverge in behavior almost immediately. The pricing we have now took four months of experiments to reach, and it's the pricing we're sticking with. We don't expect to revisit it for a long time.