From Framework to Signal: Building the InDecision API

The InDecision Framework ran for 7 years as a closed system — Python scorers feeding Discord and a trading bot. Turning it into a public API forced architectural decisions that changed how I think about signal infrastructure.

February 26, 2026
9 min read
#indecision-framework #api-design #signal-architecture

The InDecision Framework started as a spreadsheet in 2019. Six weighted factors, manual data entry, a conviction number I calculated by hand before each trading session. Over seven years it evolved into a Python engine with 527 tests, 90% coverage, five independent scorers, two adjustment layers, and a DualCaseAggregator that builds independent bull and bear cases for every analysis. It runs on a Mac Mini 24/7, feeds a Polymarket trading bot that places live bets, and posts daily analysis to a Discord community.

For all of that time, every consumer of InDecision lived inside the same system. The Discord bot imported the aggregator. The trading bot imported the aggregator. Nothing external could access the output because there was no external interface. The engine existed. The API did not.

Building the API changed how I think about what a signal actually is.

The Problem With Internal-Only Signals

When the only consumer of your analysis engine is your own code, you make a specific set of compromises that feel invisible. The aggregator returns a Python dictionary. The Discord bot knows the shape of that dictionary. The trading bot knows the shape of that dictionary. Nobody else needs to know, so you don't document it, you don't version it, and you don't think about what happens when the schema changes.

I had five systems consuming InDecision output by February 2026: the DualCaseAggregator for swing analysis, the IntraCaseAggregator for intraday, the Polymarket trading bot, the Discord daily briefing, and the scorecard image generator. Each one accessed the data differently. The swing aggregator returned a dict with summary.bias and summary.conviction_pct. The intra aggregator used the same pattern but with different factor keys. The scorecard generator expected a flat structure with factor names as top-level keys.

INSIGHT

When every consumer speaks the internal language of your system, you don't have a signal. You have a tightly coupled implementation detail that happens to produce useful numbers. The difference becomes obvious the moment an external consumer asks for access.

This is the distinction I didn't appreciate until I started building the API: a signal is a contract. It has a defined shape, a versioned schema, bounded values, and deterministic behavior given the same inputs. What I had was an engine that produced analysis results. What I needed was a stable interface that turned those results into something a stranger's code could consume without reading my source.
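That contract can be written down. Here is a sketch of the response shape in Python type hints, using the field names from the example response later in the post; the type definitions themselves are an inference from that example, not the engine's actual code:

```python
from typing import Literal, TypedDict

Signal = Literal["BULLISH", "BEARISH", "NEUTRAL"]

class FactorScore(TypedDict):
    score: float    # points this factor earned
    max: float      # maximum points the factor can contribute
    signal: Signal  # the factor's directional read

class RiskContext(TypedDict):
    gate: bool        # True when a macro event is capping conviction
    flags: list[str]  # active risk conditions

class AnalysisResponse(TypedDict):
    asset: str
    timestamp: str          # ISO 8601, UTC
    bias: Signal
    conviction_pct: float   # bounded 0-100
    bull_case: float
    bear_case: float
    spread: float           # abs(bull_case - bear_case)
    factors: dict[str, FactorScore]
    risk_context: RiskContext
    meta: dict[str, str]    # engine_version, exchange
```

Writing the contract as types is the point: a stranger's code can validate against this shape without ever reading the engine.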

The Dual-Case Decision

The most consequential design decision in the API was exposing the dual-case architecture, not just the winning bias.

The InDecision engine doesn't produce a single score through a linear formula. It builds a bull case and a bear case independently, then picks the winner. The spread between the two cases is the conviction signal. A 74% BEARISH with a spread of 48 (bull 26, bear 74) is a structurally different signal than a 62% BULLISH with a spread of 24 (bull 62, bear 38). The first has overwhelming factor alignment. The second has a narrow advantage with real counter-evidence.

Dual-Case Architecture: 2 cases, scored independently (bull and bear); the spread between them determines conviction quality.

The temptation in API design is to simplify. Return the bias and the conviction. Let the consumer decide. But the spread is where the decision quality lives. A consumer who only sees "BEARISH 74%" treats that the same as "BULLISH 74%" — both look like high conviction. A consumer who sees the spread knows one is aligned consensus and the other is a narrow win over active opposition.
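A consumer that reads the spread can make that distinction in a few lines. This sketch uses an illustrative 40-point threshold, not the framework's own cutoff:

```python
def conviction_quality(bull_case: float, bear_case: float) -> str:
    """Classify a dual-case read by its spread.

    The 40-point threshold is illustrative, not the engine's cutoff.
    """
    spread = abs(bull_case - bear_case)
    if spread >= 40:
        return "aligned consensus"  # e.g. bull 26 / bear 74 (spread 48)
    return "contested"              # e.g. bull 62 / bear 38 (spread 24)
```

Two responses with identical conviction percentages can land in different buckets, which is exactly the information a bare "BEARISH 74%" throws away.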

The API returns bull_case, bear_case, conviction_pct, and spread as top-level fields. The per-factor breakdown returns each factor's independent score and directional signal. The full state is exposed because the full state is what makes the signal useful.

{
  "asset": "BTC/USD",
  "timestamp": "2026-02-26T14:30:00Z",
  "bias": "BEARISH",
  "conviction_pct": 74.3,
  "bull_case": 25.7,
  "bear_case": 74.3,
  "spread": 48.6,
  "factors": {
    "daily_pattern":       { "score": 22.5, "max": 30, "signal": "BEARISH" },
    "volume":              { "score": 19.8, "max": 25, "signal": "BEARISH" },
    "timeframe_alignment": { "score": 14.0, "max": 20, "signal": "BEARISH" },
    "technical":           { "score": 10.5, "max": 15, "signal": "NEUTRAL" },
    "market_timing":       { "score": 7.5,  "max": 10, "signal": "BEARISH" }
  },
  "risk_context": { "gate": false, "flags": [] },
  "meta": { "engine_version": "2.1.0", "exchange": "coinbase" }
}
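A minimal consumer of that payload might look like this. The transport layer (HTTP client, auth) is omitted; the sketch shows only what a consumer codes against once the JSON arrives:

```python
import json

def summarize(raw: str) -> str:
    """Turn a raw InDecision API response into a one-line read."""
    d = json.loads(raw)
    # Count how many factors agree with the winning bias.
    aligned = [name for name, f in d["factors"].items()
               if f["signal"] == d["bias"]]
    return (f'{d["asset"]}: {d["bias"]} {d["conviction_pct"]}% '
            f'(spread {d["spread"]}, {len(aligned)}/{len(d["factors"])} '
            f'factors aligned)')
```

Run against the response above, this yields `BTC/USD: BEARISH 74.3% (spread 48.6, 4/5 factors aligned)`; the factor breakdown is what lets a consumer say "4/5 aligned" instead of just repeating the headline number.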

What Had to Change in the Engine

The engine was built to run analysis, not to serve it. That distinction required three architectural changes.

Scorer output normalization. Each scorer was returning data in slightly different shapes. The pattern scorer returned a tuple of (score, details_dict) where the details included pattern names. The volume scorer returned the same tuple shape but with different keys. The timeframe scorer included nested exchange-specific data. For internal use, this was fine — each consumer knew what to expect. For an API with a schema contract, every scorer needed to return a consistent structure: a numeric score, its maximum possible value, and a directional signal (BULLISH, BEARISH, or NEUTRAL).
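The normalization step itself is small once the target shape is fixed. A sketch, assuming each scorer already reports its own direction (the function name and signature are illustrative, not the engine's actual code):

```python
def normalize(score: float, max_score: float, direction: str) -> dict:
    """Pack a scorer's result into the schema's uniform factor shape.

    Legacy scorers returned (score, details) tuples with
    scorer-specific keys; the API keeps only the fields every
    consumer can rely on. `direction` is whichever of BULLISH,
    BEARISH, NEUTRAL the scorer itself reports.
    """
    assert direction in ("BULLISH", "BEARISH", "NEUTRAL")
    assert 0 <= score <= max_score, "scores are bounded by design"
    return {"score": round(score, 1), "max": max_score, "signal": direction}
```

The scorer-specific details (pattern names, exchange breakdowns) stay internal; the contract carries only what every factor has in common.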

Stateless analysis path. The existing aggregator initialized database connections, stored results, printed to stdout, and maintained state between runs. An API endpoint needs a clean analysis path — take inputs, produce outputs, hold nothing. Extracting a stateless compute() method from the aggregator that returns the full analysis dict without side effects was the biggest refactor.
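The shape of that refactor, in sketch form. The real aggregator has five scorers and two adjustment layers; here each case is a single callable so the pure path is visible, and everything except the class name is an assumption:

```python
class DualCaseAggregator:
    """Sketch: compute() is the pure core; analyze() keeps the side effects."""

    def __init__(self, bull_scorer, bear_scorer, store=None):
        self.bull_scorer = bull_scorer  # market_data -> case score, 0-100
        self.bear_scorer = bear_scorer
        self.store = store              # side-effect sink, kept out of compute()

    def compute(self, market_data: dict) -> dict:
        """Pure: same inputs, same dict out. No DB, no stdout, no state."""
        bull = self.bull_scorer(market_data)
        bear = self.bear_scorer(market_data)
        winner = "BULLISH" if bull >= bear else "BEARISH"
        return {
            "bias": winner,
            "conviction_pct": round(max(bull, bear), 1),
            "bull_case": round(bull, 1),
            "bear_case": round(bear, 1),
            "spread": round(abs(bull - bear), 1),
        }

    def analyze(self, market_data: dict) -> dict:
        """Legacy entry point: compute, then perform the side effects."""
        result = self.compute(market_data)
        if self.store is not None:
            self.store(result)  # DB write lives here, never in compute()
        return result
```

An API endpoint calls `compute()` directly; the Mac Mini's always-on loop keeps calling `analyze()`. Same engine, two entry points.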

SIGNAL

The same engine that produces the API output runs the live Polymarket trading bot. The accuracy data — 82.5% on swing, 75% on high-conviction intraday — comes from the same scoring pipeline the API serves.

Risk context as a first-class field. Risk context was a modifier applied inside the aggregator but not exposed in the output dict. It was a gate — when a high-impact macro event was detected, conviction was capped. But the previous consumers (all internal) just saw the capped conviction. They didn't know why it was capped. The API exposes risk_context.gate as a boolean and risk_context.flags as an array of active risk conditions. A consuming system can now distinguish between "low conviction because the factors disagree" and "low conviction because a macro event is overriding the technical read."
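That distinction is mechanical once the gate is in the payload. A sketch against the response shape above; the 60% threshold is illustrative, not the engine's own cutoff:

```python
def explain_low_conviction(resp: dict) -> str:
    """Separate the two reasons a conviction read can be low.

    The 60% threshold is illustrative, not the engine's cutoff.
    """
    if resp["conviction_pct"] >= 60:
        return "conviction is not low"
    if resp["risk_context"]["gate"]:
        flags = ", ".join(resp["risk_context"]["flags"]) or "unspecified"
        return f"macro gate active ({flags}): conviction capped externally"
    return "factors disagree: weak setup, not an external override"
```

A trading system might stand down entirely on a gated read but size down gradually on a contested one; without the exposed gate, both look identical.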

What the API Makes Possible

The use cases that excited me most weren't the obvious ones.

A Discord community bot that formats the conviction score into an embed — that's straightforward. A dashboard that renders a radar chart of factor scores — useful, predictable. These are the first things people build with signal APIs.

The second wave is more interesting. An alert system that triggers only on state transitions — NEUTRAL to BEARISH at 70%+ conviction — rather than polling for the current state. A multi-asset correlation monitor that tracks whether BTC and ETH signals are diverging or converging. A backtesting harness that replays historical API responses against a custom trading strategy without needing to run the engine locally.

The most powerful use case is the one I didn't design for: combining InDecision signals with other data sources. The API returns a structured conviction read. Someone else's system might combine that with on-chain flow data, options market positioning, or social sentiment scoring. InDecision becomes one input in a larger analytical stack — which is exactly how professional quantitative analysis works. No single model is the whole picture. The model that exposes its components transparently gets integrated. The model that returns a black-box number gets replaced.

Lessons From the Build

Three things I'd tell anyone turning an internal analysis engine into a public API.

Version the schema on day one. The meta.engine_version field exists because I know the scoring weights, adjustment layers, and factor definitions will change. A consumer needs to know whether the 74.3% conviction they're seeing comes from the same model that produced 82.5% accuracy — or a newer version with different characteristics. Versioning after the fact is painful. Versioning at launch is free.
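The consumer-side check is trivial, which is part of the argument for shipping the field early. A sketch assuming semver-style MAJOR.MINOR.PATCH strings like the "2.1.0" in the example response:

```python
def compatible(engine_version: str, pinned_major: int) -> bool:
    """Check meta.engine_version against the major version a consumer
    validated its strategy on. Assumes semver-style strings ("2.1.0")."""
    major = int(engine_version.split(".")[0])
    return major == pinned_major
```

A consumer that pinned major version 2 can refuse to act on a 3.x response until its strategy is re-validated against the new model's characteristics.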

Expose the disagreement, not just the conclusion. The dual-case spread and per-factor signals are what make the API useful beyond a simple indicator. If I'd shipped "bias + conviction" and nothing else, the API would be a convenience layer over a Discord message. The factor breakdown makes it infrastructure.

Don't optimize the output for your first consumer. The Discord bot and the Polymarket trading bot have very different consumption patterns. Building the API around either one would have constrained the other. The API returns the full analysis state. Each consumer extracts what it needs.

The InDecision Framework was designed to remove emotion from market analysis. The API is designed to remove the human from the delivery pipeline. The analytical rigor stays the same. The attack surface for cognitive bias drops to zero. A machine reading a JSON response doesn't feel fear, doesn't chase momentum, and doesn't override the model because the chart "looks bullish." It reads the conviction score, applies its rules, and acts — or doesn't.

That's what signal infrastructure is supposed to enable. Not better predictions. Better decision architecture.
