
The Signals Were Real: InDecision Framework Hits 93% Win Rate in Live Markets

The bot was cycling every 2 minutes — its own watchdog killing it every 129 seconds. The signals inside were perfect: 86–100/100, 92% accuracy, calling direction while the market priced uncertainty at 50/50. One coding session fixed the infrastructure. The rest is on-chain.

February 26, 2026
10 min read
#indecision-framework #polymarket #ai-trading

The bot was killing itself every two minutes.

Not metaphorically. Literally: it would start, begin evaluating markets, and then its own self-healing watchdog would fire a SIGTERM 129 seconds into the loop. The wrapper catches the exit, waits 10 seconds, restarts. Repeat, indefinitely. Every cycle the InDecision Framework was scoring XRP 86/100 BULLISH, ETH 89/100 STRONG UP, BTC calling clear direction while the market priced the same move at 50/50 — and the bot would die before placing a single order.

The infrastructure was eating itself. The signals were never wrong.

One coding session. Three precision fixes. By end of day: 55 trades, 51 wins, +$378.64, 93% session win rate, 90.2% rolling over 7 days.

The signals were real. They just needed the infrastructure to be as precise as they were.

MISSION CONTROL · POLYMARKET BOT v4.0 · LIVE · CONSERVATIVE
Session (Feb 26 · Kelly-adjusted bets): 93% win rate · 51W · 4L · 55 trades · +$378 P/L · 7W streak (consecutive, active)
7-day rolling: 69 trades (55W · 6L · 8BE) · 90.2% win rate · +$405 net P/L · +63.6% ROI on wagered

What InDecision Actually Is

The name is deliberately counterintuitive. InDecision doesn't mean uncertain — it means precisely calibrated about uncertainty. It's a 6-factor scoring engine built to answer one question: does this market have a conviction gap right now?

Most markets are fairly priced most of the time. The edge isn't being smarter than the market. The edge is identifying moments when the market's own pricing reflects less conviction than the underlying data warrants — and positioning on the right side before the price catches up.

The framework runs five scoring modules against real-time data, assigning weights to both bull and bear cases simultaneously:

THE INDECISION BRAIN · Multi-Factor Scoring Architecture

Scoring modules:
  • Pattern Engine: double_bottom · wedge · triangles
  • Volume Analysis: relative vol vs 20-period avg
  • Timeframe Alignment: 5m → 15m → 1h → Daily
  • Technical Indicators: RSI · MACD · Bollinger Bands · ADX
  • Window Timing: time-in-window coefficient

Accumulators: BULL (bull_total score) · BEAR (bear_total score)

Conviction spread: spread = bull_total − bear_total · conviction_pct = max(bull, bear)

Output bias:
  • BULLISH: spread > 20 · strong
  • NEUTRAL BULLISH: spread 10–20 · moderate
  • NEUTRAL: spread < 10 · no edge
  • NEUTRAL BEARISH: spread 10–20 · moderate
  • BEARISH: spread > 20 · strong

Each factor contributes to either the bull accumulator, the bear accumulator, or both — weighted by direction strength. The spread between accumulators becomes the conviction score. High spread means clear directional edge. Low spread means the data is ambiguous. The framework's output is discrete in label but continuous in precision: it tells you how much the competing forces disagree, not just which side is louder.
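As a sketch, the spread-to-bias mapping described above might look like the following. The function name `classify_bias` is hypothetical, and the thresholds follow the diagram's bands; the live console appears to apply finer gradations within the NEUTRAL band:

```python
def classify_bias(bull_total: float, bear_total: float) -> tuple[str, float]:
    """Map accumulator totals to a bias label plus conviction %.

    Thresholds follow the article's bands: |spread| > 20 is strong,
    10-20 moderate, under 10 no edge. Names are illustrative only.
    """
    spread = bull_total - bear_total            # signed conviction spread
    conviction_pct = max(bull_total, bear_total)
    magnitude = abs(spread)
    if magnitude > 20:
        label = "BULLISH" if spread > 0 else "BEARISH"
    elif magnitude >= 10:
        label = "NEUTRAL_BULLISH" if spread > 0 else "NEUTRAL_BEARISH"
    else:
        label = "NEUTRAL"
    return label, conviction_pct

# SOL's console snapshot (bull 60.2, bear 35.7) lands in the strong band.
print(classify_bias(60.2, 35.7))   # ('BULLISH', 60.2)
```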

This is the difference between InDecision and most signal frameworks. It doesn't just output a direction. It outputs a confidence architecture.


The Dual-Feed Architecture

InDecision doesn't run a single engine. It runs two independent analysis pipelines calibrated for different market timeframes.

IntraBiasFeed fires every 5 minutes, running the IntraCaseAggregator. Optimized for sub-hourly signals: RSI divergence, MACD cross velocity, Bollinger Band compression. When the bot evaluates a 5m or 15m market window, this feed provides the directional context — real-time, fresh, calibrated for short-window binary markets.

DailyBiasFeed runs every 4 hours via the DualCaseAggregator. Pattern-focused, volume-weighted, timeframe-aligned against daily structure. It's the macro lens — the trend that 15m noise either confirms or contradicts. When the daily feed and intraday feed agree on direction, the InDecision injection into the PolyEdge score can jump 20+ points. When they conflict, the system stays NEUTRAL by design.

The dual-feed architecture was built for one purpose: never trade from stale context. A 4-hour-old signal injected into a 5-minute window is noise. The intraday feed eliminates that category of error entirely.

INDECISION SIGNAL CONSOLE · Asset Conviction Snapshot · Feb 26 · 7 assets · intra feed
  • SOL: BULLISH · bull 60.2 · bear 35.7 · spread 24.6 · +25 PolyEdge pts
  • AVAX: BULLISH · bull 58.2 · bear 39.4 · spread 18.8 · +22 PolyEdge pts
  • DOGE: NEUTRAL BULLISH · bull 55.3 · bear 40.4 · spread 14.9 · +18 PolyEdge pts
  • LINK: NEUTRAL BULLISH · bull 52.6 · bear 43.3 · spread 9.4 · +10 PolyEdge pts
  • XRP: NEUTRAL BULLISH · bull 51.3 · bear 44.6 · spread 6.7 · +8 PolyEdge pts
  • BTC: NEUTRAL · bull 45 · bear 49.3 · spread 4.3 · 0 PolyEdge pts
  • ETH: NEUTRAL BEARISH · bull 43.8 · bear 50.6 · spread 6.8 · −6 PolyEdge pts (counter-trend penalty when betting UP)
Injection scale: spread > 20 = +25pts · spread 10–20 = +15–22pts · spread < 10 = +6–10pts · NEUTRAL = 0 · counter-trend = −10pts

How PolyEdge Uses InDecision

InDecision isn't a tiebreaker in the PolyEdge system. It's the backbone — the factor with the highest possible score impact (-10 to +25 points depending on alignment and conviction) in a 0-to-100 scoring framework.

POLYEDGE EVALUATION PIPELINE · End-to-End Decision Architecture · 16 markets per loop
  • Market Scanner: Polymarket CLOB API · polls every loop
  • Active Markets: 8 assets × 2 timeframes = 16 markets
  • Pre-flight Filter: skip window < 60s · startup grace 180s
  • InDecision Intraday: IntraCaseAggregator · 5m refresh
  • InDecision Daily: DualCaseAggregator · 4h refresh
  • TA Engine: RSI · MACD · Bollinger · Binance
  • Pattern Engine: formations · chart structure
  • Per-Market Eval: PolyEdge Score 0–100 · Momentum Score 0–100 · best score wins
  • STRONG ≥ 90: Execute · Kelly bet size
  • MODERATE ≥ 80: Execute · Kelly bet size
  • WEAK < 80: Skip · no edge

When InDecision is BULLISH with moderate-to-strong conviction, it injects +15 to +25 points into the PolyEdge score. When the market is NEUTRAL (spread under 10), the injection is zero. When InDecision is BEARISH and PolyEdge wants to go UP, the score takes a -10 point hit. The system is designed to disagree with itself when the data conflicts. That self-correction is the entire thesis.
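Putting the injection scale together, a hedged sketch (the linear ramps inside the 10–20 and under-10 bands are my assumption; the article only states the band endpoints, and `indecision_injection` is a hypothetical name):

```python
def indecision_injection(bias: str, spread: float, bet_direction: str) -> float:
    """Translate an InDecision bias into PolyEdge points per the stated scale.

    Scale: spread > 20 -> +25, 10-20 -> +15..+22, < 10 -> +6..+10,
    NEUTRAL -> 0, counter-trend -> -10. Linear ramps are an assumption.
    """
    if bias == "NEUTRAL":
        return 0.0
    bias_dir = "UP" if "BULLISH" in bias else "DOWN"
    if bias_dir != bet_direction:
        return -10.0                            # counter-trend penalty
    spread = abs(spread)
    if spread > 20:
        return 25.0
    if spread >= 10:                            # 10-20 band maps onto +15..+22
        return 15.0 + (spread - 10.0) / 10.0 * 7.0
    return 6.0 + spread / 10.0 * 4.0            # <10 band maps onto +6..+10
```

Against the console snapshot the assumed ramp lands close: DOGE's spread of 14.9 maps to roughly +18 and LINK's 9.4 to roughly +10.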


Today's Session: Three Fixes That Changed Everything

By this point in the project's life, the analytical engine was mature. The issue today was infrastructure — the kind of problem that only surfaces when a system scales past its original assumptions.

Fix 1: Break-Even Categorization

The first fix was subtle but changed the integrity of every metric in the system.

In Polymarket binary markets, you can be directionally correct and still lose money after fees. Buy an UP token at 87¢, it resolves UP, your gross win is 13¢, fees are 14¢ — you lost money on a correct prediction. The database records this as outcome='win' because the direction was right. But every stats calculation was counting it in the win rate numerator.

The fix: break-even trades are their own category. A break-even is outcome='win' AND pnl_net ≤ 0. Win rate now calculates as profitable_wins / (profitable_wins + true_losses). Break-evens excluded from both numerator and denominator.
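The categorization rule can be expressed directly. The trade-dict shape here is illustrative, not the bot's actual schema:

```python
def session_stats(trades: list[dict]) -> dict:
    """Classify trades per the fixed scheme: a break-even is a directional
    'win' whose net P/L is <= 0, and win rate excludes break-evens entirely:
    profitable_wins / (profitable_wins + true_losses).
    """
    wins = [t for t in trades if t["outcome"] == "win" and t["pnl_net"] > 0]
    break_evens = [t for t in trades if t["outcome"] == "win" and t["pnl_net"] <= 0]
    losses = [t for t in trades if t["outcome"] == "loss"]
    denom = len(wins) + len(losses)
    return {
        "wins": len(wins),
        "losses": len(losses),
        "break_evens": len(break_evens),
        "win_rate": len(wins) / denom if denom else 0.0,
    }

# Directionally correct at 87c, but 14c fees exceed the 13c gross: break-even.
trades = [
    {"outcome": "win", "pnl_net": 0.13 - 0.14},
    {"outcome": "win", "pnl_net": 4.15},
    {"outcome": "loss", "pnl_net": -11.25},
]
print(session_stats(trades))
```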

INSIGHT

Why this matters for strategy: A growing break-even rate is a signal that the bot is entering positions at market extremes — buying UP tokens already priced at 85¢+ where the fee floor eliminates all margin. The distinction between "correct direction, wrong entry price" and "wrong direction" is analytically important. They require different fixes.

The previous win rate was inflated. The real win rate is 93%. That number is now trustworthy.

TRADE HISTORY · 55 total · Feb 26, 2026 · live mode
  • BTC 5m · ↓ DOWN · score 94 · bet $15.00 · +$15.30 · WIN
  • SOL 15m · ↑ UP · score 98 · bet $11.25 · +$14.03 · WIN
  • ETH 15m · ↑ UP · score 101 · bet $11.25 · +$11.95 · WIN
  • SOL 5m · ↑ UP · score 98 · bet $11.25 · +$12.19 · WIN
  • BTC 5m · ↑ UP · score 93 · bet $11.25 · +$10.59 · WIN
  • XRP 5m · ↑ UP · score 89 · bet $11.25 · +$11.71 · WIN
  • BTC 15m · ↓ DOWN · score 109 · bet $11.25 · +$8.32 · WIN
  • ETH 15m · ↓ DOWN · score 97 · bet $5.62 · +$4.15 · WIN
  • XRP 5m · ↑ UP · score 92 · bet $1.54 · +$1.57 · WIN
  • XRP 5m · ↑ UP · score 100 · bet $11.25 · −$11.25 · LOSS
Showing 10 of 55 trades · 51W · 4L · 93% win rate

Fix 2: Kelly Bet Sizing

The Kelly criterion — the mathematically optimal bet sizing formula derived from information theory — was supposed to be live. It wasn't. A dynamic post-processing block in the execution path was running after Kelly's output and overwriting it with a flat conviction multiplier.

With Kelly active and the correct bankroll wired to live wallet balance, position sizes now scale proportionally to edge strength. Strong signals bet more. Moderate signals bet less. The system allocates capital the way every quantitative trader knows it should be allocated — not uniformly.
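For reference, the textbook Kelly fraction for a binary token bought at price c with estimated win probability p uses net odds b = (1 − c)/c, giving f* = p − (1 − p)/b. This is a generic sketch, not the bot's sizing code; the `scale` parameter is a fractional-Kelly assumption:

```python
def kelly_fraction(p: float, price: float, scale: float = 0.5) -> float:
    """Kelly fraction for a binary token costing `price` that pays $1 on a win.

    Net odds are b = (1 - price) / price, so f* = p - (1 - p) / b.
    `scale` applies fractional Kelly; negative edge clamps to zero.
    Textbook formula, not the bot's actual sizing code.
    """
    b = (1.0 - price) / price
    f_star = p - (1.0 - p) / b
    return max(0.0, f_star) * scale

def bet_size(bankroll: float, p: float, price: float, scale: float = 0.5) -> float:
    """Dollar stake: Kelly fraction of the live bankroll."""
    return bankroll * kelly_fraction(p, price, scale)

# Stronger signal (higher p) at the same price gets a larger stake.
print(round(bet_size(500, 0.92, 0.85), 2))   # ≈ 116.67
print(round(bet_size(500, 0.88, 0.85), 2))   # ≈ 50.0
```

This is the "allocate proportionally to edge strength" behavior the fix restored: strong signals bet more, moderate signals bet less, and a no-edge estimate stakes nothing.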

Fix 3: The Watchdog Architecture

This is the story the logs tell best.

At 21:34:41, the Coinbase price feed went down. The bot correctly switched to Binance WebSocket fallback — exactly as designed. What wasn't designed for was what that failover did to the evaluation loop timing.

The EventLoopWatchdog is a daemon thread that monitors the asyncio event loop. If beat() isn't called within 120 seconds, it assumes the loop is hung and sends SIGTERM. The wrapper catches the exit, waits 10 seconds, restarts. Clean self-healing architecture — the design was correct. The implementation had one flaw.

beat() was called once per outer loop iteration. Then the inner loop evaluated 16 markets sequentially. Each evaluation calls the TA engine (10-second Binance timeout) and the pattern engine (also 10 seconds via shared fetch_candles). During the Coinbase→Binance failover, REST API responses were slower. 16 markets × up to 20 seconds each = up to 320 seconds with no heartbeat.

The watchdog fired at 129 seconds. Correct by spec. Wrong by intent.

WATCHDOG FIX — BEFORE / AFTER
One line moved. Semantics changed from per-loop to per-market.

BEFORE (BROKEN)
while self._running:
    watchdog.beat()              # once per outer loop
    for market in 16 markets:
        evaluate_market()        # ~20s per market · 16 × 20s = up to 320s · no beat here
→ 129s elapsed → watchdog fires → SIGTERM sent → bot restarts

AFTER (FIXED)
while self._running:
    for market in 16 markets:
        watchdog.beat()          # per market · 120s window resets each iteration
        evaluate_market()        # max ~20s per market
→ 20s max per market → watchdog never fires → bot stays stable · signals keep flowing

The 120s timeout was never too aggressive. The beat() placement was wrong. One line fixed it.

The fix: move self._watchdog.beat() inside the for market in active_markets: loop, right before await self._evaluate_market(market). The timeout didn't change. The semantics did. 120 seconds now means a single market evaluation should never take 120 seconds — which is the correct invariant. The previous semantics were 16 market evaluations should collectively never take 120 seconds — which is mathematically impossible with 16 markets and 10-second API timeouts.
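The heartbeat pattern is easy to demonstrate on a compressed timescale. This is a standalone sketch, not the bot's actual EventLoopWatchdog: it records that it would have fired instead of sending SIGTERM:

```python
import threading
import time

class EventLoopWatchdog:
    """Minimal heartbeat watchdog sketch (illustrative, not the bot's class)."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self._last_beat = time.monotonic()
        self.fired = False
        self._thread = threading.Thread(target=self._watch, daemon=True)

    def beat(self) -> None:
        self._last_beat = time.monotonic()   # reset the timeout window

    def _watch(self) -> None:
        while not self.fired:
            if time.monotonic() - self._last_beat > self.timeout_s:
                self.fired = True            # real bot: send SIGTERM here
                return
            time.sleep(self.timeout_s / 10)

    def start(self) -> None:
        self._thread.start()

# Compressed timescale: 0.5s timeout, 5 "markets" taking 0.1s each.
# Total work (0.5s) exceeds the timeout, but per-market beats keep it alive.
watchdog = EventLoopWatchdog(timeout_s=0.5)
watchdog.start()
for _ in range(5):
    watchdog.beat()              # per-market beat: window resets every iteration
    time.sleep(0.1)              # stand-in for evaluate_market()
print(watchdog.fired)            # False: no single step exceeds the timeout
```

Move the `beat()` above the loop instead and the same total workload would trip the timeout, which is exactly the bug described above.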

No watchdog fires since 22:28. The bot has been running clean for hours.


The Numbers

These aren't backtests. These are live trades, real USDC, on-chain settlements on Polygon.

Session win rate: 93% (51 wins · 4 losses · 55 total trades · Feb 26, 2026)
Session P&L: +$378.64 (live mode · conservative sizing · $5 base bet, Kelly-adjusted)
7-day rolling win rate: 90.2% (69 trades · 55W / 6L / 8 BE · +$405.38 total · +63.6% ROI)

The by-conviction breakdown is the proof the framework works as designed:

  • Strong conviction (score ≥ 90): 42 trades — 39 wins — 92.8% win rate
  • Moderate conviction (score 80–89): 11 trades — 10 wins — 90.9% win rate
  • Below threshold: skipped — the framework doesn't manufacture edge

The filtering is the product. The system doesn't trade uncertainty. It waits for a measurable conviction gap, takes it, closes it.

The real tell: on the 7-day view, the InDecision intraday feed is calling BULLISH or NEUTRAL_BULLISH across SOL, XRP, AVAX, DOGE, and LINK simultaneously. Strong spread across multiple correlated assets in the same direction is a regime signal, not noise. The framework reads that as a structural edge window — and the results confirm it.


Why This Architecture Works

There's a thesis embedded in this system that most retail traders never reach because they're focused on the wrong layer.

InDecision wasn't built to predict price. It was built to measure the gap between what the data suggests and what the market is pricing. These are different problems. Price prediction is hard — you're competing against every participant, human and machine, simultaneously. Conviction gap measurement is harder to commoditize because it requires a multi-factor, real-time evaluation architecture that most participants don't have and won't build.

The DualCaseAggregator — the engine powering the daily feed — was itself a critical fix from a week ago. Before it, when BTC was showing 22.7% conviction, the bot had a conviction drought: no signals strong enough to trade even when price structure was clear. The fix implemented a dual competing case model that forces the engine to quantify both the bull and bear cases simultaneously, then compute their divergence. BTC went from 22.7% → 50.5% conviction immediately. Not because the market changed. Because the measurement got more precise.

That's the through-line of today's session. Break-even categorization made the win rate metric more precise. Kelly sizing made the capital allocation more precise. The watchdog fix made the self-healing more precise. None of these touched the InDecision scoring engine itself — because the analytical engine was already right.

The infrastructure needed to be as precise as the signals it was carrying.


What This Project Is

This is personal use infrastructure. I'm not selling signals. I'm not running a fund. There's no reason to monetize something that prints while I'm working, sleeping, and building the other things I care about.

What's interesting about this project isn't the P&L. It's that the analytical frameworks driving it — multi-factor conviction scoring, dual competing models, self-healing daemon architecture, precision break-even categorization — are all directly applicable to how I think about engineering systems, team dynamics, and competitive intelligence.

The InDecision Framework started as a mental model for reading market structure. It became a codified scoring engine. It's becoming something else.

The signals were real from the beginning. They just needed the infrastructure to match their precision.


Follow the Signal

If this was useful, follow along. Daily intelligence across AI, crypto, and strategy — before the mainstream catches on.

