Engineering

We Thought Enabling 5-Minute Trading Was One Toggle. It Was Six Bugs.

FILTER_5M_DISABLED = False. One line. Should've been the whole story. Instead it kicked off a six-bug root cause chain that exposed every assumption we had about how our trading bot actually found markets — and taught us the most important rule in API integration.

February 26, 2026
8 min read
#engineering #trading-bot #debugging

One line of code.

FILTER_5M_DISABLED = False

That was supposed to be the entire story. The 5-minute timeframe had been disabled since Day 1 — a kill switch installed during our first retro when the bot's accuracy wasn't good enough to trust sub-15-minute windows. The InDecision framework had matured since then. 82.5%+ accuracy on 15m. Strong track record. Time to expand.

Flip the flag. Restart the bot. Done.

Except it wasn't done. Not even close.

What followed was a six-bug root cause chain that exposed every wrong assumption we had about how the scanner actually found markets — and produced one of the cleanest debugging sessions I've had this year.

The Kill Switch Was Always Two Things

The 5m filter wasn't just a toggle. It was two separate constraints working together:

  1. FILTER_5M_DISABLED = True in trader.py — the code-level kill switch
  2. ENABLED_TIMEFRAMES=15m in .env — the config that controlled which timeframes the scanner even looked for

Flipping one without the other is a no-op. We needed both.

# .env
ENABLED_TIMEFRAMES=5m,15m
# trader.py
FILTER_5M_DISABLED = False  # Re-enabled Feb 2026 — bot accuracy improved significantly

Two changes. Done. Or so we thought.
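As a minimal sketch of the dual gate (the helper is hypothetical; only `ENABLED_TIMEFRAMES` and `FILTER_5M_DISABLED` come from the actual codebase), the scanner should only consider a timeframe when both constraints agree:

```python
import os

# Code-level kill switch (mirrors trader.py)
FILTER_5M_DISABLED = False

def enabled_timeframes() -> set[str]:
    """Timeframes the scanner will look for: the .env list,
    minus anything the code-level kill switch still blocks."""
    configured = set(os.environ.get("ENABLED_TIMEFRAMES", "15m").split(","))
    if FILTER_5M_DISABLED:
        configured.discard("5m")
    return configured

# With ENABLED_TIMEFRAMES=5m,15m and the switch off, both survive:
os.environ["ENABLED_TIMEFRAMES"] = "5m,15m"
print(sorted(enabled_timeframes()))  # ['15m', '5m']
```

This is why flipping only one constraint is a no-op: either the config never asks for 5m slots, or the kill switch discards them before the scanner runs.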

After the restart: zero 5m market slots populated. The scanner was running, looking for 5m slots, finding nothing.

That's not a config issue. That's a data issue.

Bug #1 — The Wrong Series IDs

The market scanner uses two known Polymarket series IDs — 10684 and 10685 — which historically had BTC and ETH intraday markets hanging off them. The scanner would query those series, pull the active events, and slot them into the asset/timeframe matrix.

The problem: those series IDs pointed to December 2025 burst events. The current 5-minute markets running right now in February 2026 had no series ID at all.

Polymarket changed how they issue intraday markets. Instead of series, they're now standalone events — individual entries in the market feed with no parent series, just a crypto tag and a window title like "BTC Up or Down? 12:35AM-12:40AM ET".

Our scanner was querying two specific series IDs and finding nothing because nothing was there anymore. All current-day 5m markets lived in a completely different part of the API.

The fix: build _scan_standalone_events() — a second scan path that queries the events feed directly:

async def _scan_standalone_events(self):
    """Scan for current 5m/15m markets that exist as standalone events (no seriesId)."""
    needed = set()
    for asset in self.enabled_assets:
        for tf in self.enabled_timeframes:
            key = f"{asset}_{tf}"
            existing = self._markets.get(key)
            # Always rescan standalone slots so expired windows roll forward
            if existing is None or existing.series_id == 0:
                needed.add((asset, tf))

    if not needed:
        return

    events = await self._fetch_events(
        tag_slug="crypto",
        closed=False,
        limit=500,
    )
    # ... slot matching logic
INSIGHT

When an API changes its data model — from series-based to standalone events — your scanner has to change too. If you're only querying known IDs and getting nothing, the data didn't disappear. It moved.

Bug #2 — The Silent Truncation

The new standalone scan hit the events feed and still came back with nothing useful. All the events it returned were from December 2025 — expired markets, old windows, nothing current.

The culprit: limit=200.

The events feed has hundreds of entries going back through historical data. When you query with limit=200, you get the first 200 rows — which happened to be all historical events from the December burst series. The current February 2026 markets were at position 201+. Never returned. No error. No warning. Just silently cut off.

# Before — silently truncated
events = await self._fetch_events(limit=200)

# After — enough headroom for current markets
events = await self._fetch_events(limit=500)
[Chart: the first 200 events in the feed were all historical Dec 2025 entries; the current Feb 2026 markets sat at position 201+.]

Rule: When a paginated feed returns zero useful results, the first question isn't "do the results exist?" — it's "am I requesting enough of them?"
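One defensive pattern is to page until the feed runs dry rather than trusting a single capped request. This is a sketch, not the bot's code: the `offset` parameter and page size are assumptions about the feed's pagination, and `fetch_page` stands in for whatever single-page call the client exposes.

```python
async def fetch_all_events(fetch_page, page_size: int = 200, max_pages: int = 10):
    """Page through a feed until a short page signals the end.

    `fetch_page(limit, offset)` is a stand-in for the client's
    single-page call; offset-based paging is an assumption here.
    """
    events = []
    for page in range(max_pages):
        batch = await fetch_page(limit=page_size, offset=page * page_size)
        events.extend(batch)
        if len(batch) < page_size:  # short page => no more rows
            break
    return events
```

A hard `max_pages` cap keeps a misbehaving feed from turning the scan loop into an unbounded crawl, while still reaching rows that a single `limit=200` request silently drops.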

Bug #3 — The active Filter Exclusion

Even with limit=500, some current markets still weren't appearing. The initial standalone scan included active: True in the API params — a reasonable filter to skip closed markets.

Except: current February 2026 5-minute windows weren't being flagged as active=true by the API. They were open, live, accepting trades — but the active metadata field wasn't set.

Removing the filter and doing client-side filtering on closed=false + endDate was the fix. Let the API return everything open, and filter by actual close time rather than a metadata flag that isn't reliably populated.
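A minimal sketch of that client-side check (the `is_open` helper and field names are assumptions about the event payload shape, modeled on the fields discussed in this post):

```python
from datetime import datetime, timezone

def is_open(event: dict, now: datetime) -> bool:
    """Client-side openness check: ignore the unreliable `active`
    flag, trust `closed` plus the authoritative `endDate` instead."""
    if event.get("closed"):
        return False
    end_str = event.get("endDate", "")
    if not end_str:
        return False  # no close time => can't verify, skip it
    end_dt = datetime.fromisoformat(end_str.replace("Z", "+00:00"))
    return end_dt > now

now = datetime(2026, 2, 26, 13, 10, tzinfo=timezone.utc)
live = {"closed": False, "endDate": "2026-02-26T13:15:00Z"}
stale = {"closed": False, "endDate": "2025-12-19T05:40:00Z"}
print(is_open(live, now), is_open(stale, now))  # True False
```

Filtering on actual close time means a market counts as open because its window hasn't ended, not because someone remembered to set a metadata flag.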

Bug #4 — The Year Inference Trap

Now we had events. But they were still resolving to dates in December 2026.

The _parse_window() method parses market titles to extract the trading window. A title like "BTC Up or Down? Dec 19 12:35AM-12:40AM ET" gets parsed by extracting the month and day, then inferring the year using datetime.now().year — which is 2026.

December 19 + year 2026 = December 19, 2026. A date 10 months in the future.

The scanner's sanity check rejected any event whose end date sat implausibly far in the future. Which was now every single event we found, because we'd pushed them all ten months forward.

# Bad — inferred year causes Dec 2025 → Dec 2026
end_dt = datetime(2026, 12, 19, ...)  # WRONG

# Fix — use authoritative endDate from API to validate
event_end_str = event.get("endDate", "")
if event_end_str:
    event_end_dt = datetime.fromisoformat(event_end_str.replace("Z", "+00:00"))
    if event_end_dt <= now:
        continue  # Skip — this event is actually in the past
WARNING

Never infer temporal context from partial data when the authoritative answer is available. The API's endDate field knows exactly when the market closes. Trust it over any date you reconstruct from a title string.

The rule: APIs are authoritative. Inferences are guesses. When both are available, the API wins.

Bug #5 — The Timeframe Detection Gap

Markets were resolving now. But their timeframes were coming back as None, which meant they weren't slotting into any asset/timeframe bucket.

_detect_timeframe() worked by scanning the title for keywords: "5m", "15m", "1h". Simple pattern match.

But current 5-minute market titles don't contain the string "5m". They look like this:

"BTC Up or Down? 12:35AM-12:40AM ET"

No "5m" anywhere. Just two timestamps with a 5-minute gap between them.

The fix was a duration-based fallback: if no keyword matches, extract the timestamps from the title and compute the delta.

_duration_to_tf = {5: "5m", 15: "15m", 60: "1h", 240: "4h", 1440: "daily"}

if not tf:
    wm = WINDOW_RE.search(title)
    if wm:
        try:
            t1 = datetime.strptime(wm.group(2), "%I:%M%p").replace(tzinfo=ET)
            t2 = datetime.strptime(wm.group(3), "%I:%M%p").replace(tzinfo=ET)
            mins = int((t2 - t1).total_seconds() / 60)
            if mins < 0:
                mins += 24 * 60  # handle midnight crossing: 11:55PM→12:00AM
            tf = _duration_to_tf.get(mins)
        except Exception:
            pass

The midnight crossing correction (+24*60) is subtle but critical. A window from 11:55 PM to 12:00 AM produces a raw delta of −1435 minutes. Without the correction, you'd try to look up −1435 in the duration map and get nothing. With it, you get 5 minutes. The right answer.

Bug #6 — The Slot Freeze (CodeRabbit Caught It)

After all five fixes, the scanner worked — once. On restart, 5m markets appeared and slotted correctly. But on the next scan cycle, the slots wouldn't update when windows expired.

Here's why: the needed set that drove the scan only added slots that were None. Once a standalone slot was registered the first time, it was considered "filled" and skipped on subsequent scans. When the 5-minute window expired and the next window opened, the slot just... sat there with the old expired market. It never rolled forward.

# Before — once filled, never rescanned
if existing is None:
    needed.add((asset, tf))

# After — standalone slots (series_id == 0) always rescan
if existing is None or existing.series_id == 0:
    needed.add((asset, tf))

This was flagged by CodeRabbit during PR review as a Critical issue. It's the kind of bug that only surfaces in production over time — everything looks fine for the first window, then silently stales on the second.

DOCTRINE

Ephemeral data needs ephemeral caching. A 5-minute market window expires every 5 minutes. Caching that slot for longer than its TTL guarantees staleness. If the underlying data has a known expiry, the cache must respect it.
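That doctrine reduces to a small amount of code. This is a hedged sketch of the shape, not the scanner's actual classes; the real bot keys slots by `f"{asset}_{tf}"` and stores richer market objects:

```python
from datetime import datetime, timedelta, timezone

class ExpiringSlot:
    """A cached market slot that is only valid until its window closes."""
    def __init__(self, market: dict, end_dt: datetime):
        self.market = market
        self.end_dt = end_dt

    def is_fresh(self, now: datetime) -> bool:
        return now < self.end_dt

def needs_rescan(slot: "ExpiringSlot | None", now: datetime) -> bool:
    # Empty slot, or a slot whose window has closed: rescan either way
    return slot is None or not slot.is_fresh(now)

now = datetime(2026, 2, 26, 8, 11, tzinfo=timezone.utc)
slot = ExpiringSlot({"title": "BTC Up or Down? 8:05AM-8:10AM ET"},
                    end_dt=now - timedelta(minutes=1))
print(needs_rescan(slot, now))  # True — window expired, roll forward
```

Making freshness a property of the slot itself, rather than a side effect of how the scan loop builds its `needed` set, is what prevents a Bug #6-style freeze from reappearing.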

The Result: 8 Slots Live

After all six fixes, the scanner populated its full matrix on the first scan cycle:

BTC_5m    ✅  "BTC Up or Down? 8:05AM-8:10AM ET"
BTC_15m   ✅  "Will BTC go Up or Down? (8AM-8:15AM)"
ETH_5m    ✅  "ETH Up or Down? 8:05AM-8:10AM ET"
ETH_15m   ✅  "Will ETH go Up or Down? (8AM-8:15AM)"
SOL_5m    ✅  "SOL Up or Down? 8:05AM-8:10AM ET"
SOL_15m   ✅  "Will SOL go Up or Down? (8AM-8:15AM)"
XRP_5m    ✅  "XRP Up or Down? 8:05AM-8:10AM ET"
XRP_15m   ✅  "Will XRP go Up or Down? (8AM-8:15AM)"
Market slots: 8/8 — 4 assets × 2 timeframes, all populated on the first scan cycle.

The 20-second snipe window for 5m markets is tight. The bot has less than 20 seconds before window close to identify an edge, score it, and place the order. That's by design — prices are most committed right before resolution, which is when the signal quality is highest.

Live win rate as of this writing: 4W-1L (80%) across all settled positions.

The Lessons That Actually Matter

1. Verify against live data, not assumptions.

We assumed series IDs 10684 and 10685 still had current markets on them. They didn't. The API had reorganized its data model. The only way to find that out was to query the live feed and look at what actually came back.

2. Paginated APIs lie by omission.

limit=200 returned 200 results — it just didn't include the 201st one, which happened to be the one we needed. No error. No warning. Just quiet incompleteness. Always verify your limit against the actual dataset size.

3. The API is authoritative. Your inference is a guess.

Year inference from partial date strings, keyword detection from market titles, active flag assumptions — every one of these was a guess that the raw API data could have corrected immediately. When the authoritative answer is available, use it. When it isn't, validate your inference against something real.

4. Ephemeral data needs expiry-aware caching.

Caching a 5-minute market slot without respecting the 5-minute TTL produces correct results for exactly one window and wrong results for every one after. The cache must be as dynamic as the data it holds.

5. Root cause chains are normal. Track them.

None of these bugs were obvious in isolation. Each one looked like it might be the real problem — until it was fixed and the next one surfaced. The instinct is to declare victory too early. The discipline is to keep going until the system behaves the way the specification says it should.

SIGNAL

One toggle became six bugs. The bot now trades 8 live market slots instead of 4. The debugging chain was the feature.


This is the third engineering post in the PolyEdge series. Previous posts covered the initial deployment and the Horus self-healing watchdog. The bot is live 24/7 on a Mac Mini, trading real money, and the retros keep coming.
