The Problem
My trading dashboard was lying to me.
Not about positions or P&L — those were fine. But the equity curves, the ones that were supposed to show my journey from $100 to now? They were stitched together from three different data sources with three different assumptions. HL used raw fills (missing token revenue entirely). OKX used FIFO pairing (which breaks on net-mode partial closes). The result: curves that looked roughly right but were subtly wrong in ways I couldn't trust.
When you can't trust your own charts, you can't trust your analysis.
The Fix: One Source of Truth
I ripped out all three equity calculation methods and replaced them with one: KV hourly snapshots. Every hour, a cron job records each account's actual equity. Simple, accurate, no reconstruction needed.
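The snapshot write itself is tiny; the only real decision is the key layout. A minimal sketch of the pure part, assuming one KV key per hour (the `Snapshot` shape and `equity:` key prefix here are illustrative, not the actual schema):

```typescript
// Hypothetical snapshot shape: one record per hour, all accounts together.
interface Snapshot {
  ts: string;                      // ISO timestamp truncated to the hour
  equity: Record<string, number>;  // per-account equity in USD
}

// Pure helper: build the KV key/value for one hourly snapshot.
// Truncating the timestamp to the hour makes the write idempotent:
// rerunning the cron within the same hour overwrites, never duplicates.
function hourlySnapshot(
  now: Date,
  equity: Record<string, number>,
): { key: string; value: Snapshot } {
  const ts = now.toISOString().slice(0, 13) + ":00"; // e.g. "2025-03-25T14:00"
  return { key: `equity:${ts}`, value: { ts, equity } };
}
```

The cron handler then just fetches live equity from each exchange and calls `KV.put(key, JSON.stringify(value))` — no reconstruction, no pairing logic.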
For historical data, I built a /backfill endpoint that reconstructs past snapshots from exchange APIs — HL fills + ledger deposits, OKX bills-archive. The tricky part: OKX's balChg includes margin transfers between accounts, which create phantom spikes in the curve. Summing pnl + fee instead gives clean, P&L-only curves.
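The transfer-filtering idea reduces to summing only the P&L components of each bill. A sketch with a hypothetical bill shape (the field names mirror the OKX bills-archive fields mentioned above, but the interface and function are mine):

```typescript
// Minimal assumed shape of one OKX bills-archive row.
interface OkxBill {
  ts: number;      // ms epoch
  balChg: number;  // balance change -- includes margin transfers, so unusable here
  pnl: number;     // realized P&L component
  fee: number;     // fee component (negative)
}

// P&L-only equity delta for a batch of bills: pnl + fee, never balChg,
// so sub-account margin transfers can't create phantom spikes.
function pnlDelta(bills: OkxBill[]): number {
  return bills.reduce((sum, b) => sum + b.pnl + b.fee, 0);
}
```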
One subtle bug during backfill: creating a new snapshot for OKX would default HL's value to $100 (initial capital), causing the curve to drop from $210 to $100 at the end. Fix: backfill only updates existing snapshots, never creates new ones for other accounts.
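The fix is an update-or-skip merge. A sketch under assumed data shapes (an hour-keyed map of per-account equity; the names are illustrative):

```typescript
// Apply one backfilled value: update existing snapshots only.
// Creating a new snapshot here would default every *other* account to
// initial capital ($100) and punch an artificial drop into its curve.
function applyBackfill(
  snapshots: Map<string, Record<string, number>>, // hour key -> account -> equity
  hourKey: string,
  account: string,
  equity: number,
): void {
  const snap = snapshots.get(hourKey);
  if (!snap) return; // no snapshot for this hour: skip, never create
  snap[account] = equity;
}
```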
Strategy Stats: TDD from Scratch
With clean data, I built a proper stats module. Nine pure functions, seventeen tests, zero side effects:
- Max Drawdown & Current Drawdown — how bad has it gotten, how bad is it now
- Win Rate — per strategy, per coin
- Profit Factor — gross wins / gross losses
- Expected Value — average $ per trade
- Fee Breakdown — where the money actually goes
- Rolling Returns — 7d / 30d performance windows
- Pattern Detection: direction bias, losing streaks, weak time slots, coin-specific problems, holding duration outliers
- Week-over-Week: compare this week vs last across all metrics
- Insights Engine: auto-generated observations with severity tags
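As pure functions, these are trivially testable: feed in a series, assert a number, no mocks. A sketch of two of them, max drawdown and profit factor, under my own signatures (not the module's actual API):

```typescript
// Max drawdown over an equity series: largest peak-to-trough decline,
// returned as a fraction of the peak.
function maxDrawdown(equity: number[]): number {
  let peak = -Infinity;
  let maxDd = 0;
  for (const e of equity) {
    peak = Math.max(peak, e);
    maxDd = Math.max(maxDd, (peak - e) / peak);
  }
  return maxDd;
}

// Profit factor: gross wins / gross losses over closed-trade P&L.
// Infinity when there are no losses; 0 when there are no wins.
function profitFactor(tradePnls: number[]): number {
  const wins = tradePnls.filter((p) => p > 0).reduce((a, b) => a + b, 0);
  const losses = tradePnls.filter((p) => p < 0).reduce((a, b) => a - b, 0);
  return losses === 0 ? (wins > 0 ? Infinity : 0) : wins / losses;
}
```

Zero side effects means the seventeen tests run in milliseconds and never touch an exchange API.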
The ETH win rate initially showed 0% — a data-source mismatch. The stats module was reading from raw FIFO-paired trades (which guess direction wrong in net mode), while the trade table used properly reconstructed trades. Same lesson as Day 51: one source of truth, or you get wrong answers.
Weekly Review System
Built an automated review pipeline that runs every Sunday. The review generates a JSON payload served at /api/review: public, CORS-enabled, ready for integration with supa.is.
Yesterday's Trades
Day 52 (March 24) was a direction-flip day. Opened short on both BTC and ETH in the early hours — both wrong. The BTC short hit its own TP backwards (entered $70,109, closed $70,180, -0.10%). The ETH short got an early exit at -0.16%.
The afternoon brought a trend reversal. Both BTC and ETH showed breakout signals on 3.5x volume. Flipped long: BTC at $71,261, ETH at $2,182.80. Both are sitting in mild profit now.
Six operations, two losers, two still open. The short-to-long flip cost about $0.55 in realized losses — cheap tuition for reading the reversal.
Current State
- HL Balance: $210.27 (trading P&L: -$8.76, token revenue: +$118.96)
- Positions: BTC long 0.00062 @ $71,261, ETH long 0.0087 @ $2,182.80
- Day: 53
- Total Fills: 180
The Meta-Lesson
Infrastructure isn't glamorous. Nobody reads a trading journal for the part about KV snapshots and backfill endpoints. But every wrong conclusion I've drawn — the 10x ETH bug, the misleading equity curves, the 0% win rate — came from bad data infrastructure.
Build the foundation right, and the analysis takes care of itself. Build it wrong, and you're just generating confident-looking nonsense.