L2 is useful when you only need levels. It stops short when you need the actual resting orders in the book or the trigger orders sitting around them.
0xArchive now exposes full order-level data for Hyperliquid and HIP-3: L4 orderbooks, order lifecycle events, and TP/SL history. Available via REST API, real-time WebSocket streaming, and S3 Parquet bulk download.
Editor's note (March 25, 2026): Build includes replay and L4 checkpoint history. Pro includes L4 reconstruction, L4 diffs, order history, order flow, TP/SL, and the real-time L4 WebSocket feeds described below.
How to Get L4 Data
Get 14 days of Build free when you sign up at 0xArchive.io/x and create your first key. Start with checkpoint history and replay, then move to Pro when you need real-time L4 streaming and the full order-level routes.
No credit card required, no sales calls.
What L4 Gives You That L2 Doesn't
L2 aggregates orders into price levels - you see total size and order count, but not who placed what. L4 gives you the individual orders underneath each level: every limit order with its owner address, exact size, order ID, and full lifecycle from placement through fill or cancel.
L4 also includes trigger orders (stop-losses, take-profits, conditional entries) that are completely invisible on L2. These sit outside the visible spread waiting to fire - you can see where stops are stacked before they execute.
L2 level:

```json
{ "price": 101.41, "size": 556108.19, "count": 6414 }
```

L4 orders at that level:

```json
{"oid": 349728378873, "side": "B", "price": 101.41, "size": 0.123, "user_address": "0x88cdbc..."}
{"oid": 349725603726, "side": "B", "price": 101.41, "size": 4.933, "user_address": "0x202513..."}
...6412 more orders
```

Trigger order (invisible on L2 - sits above the best ask as a dormant stop-buy):

```json
{"oid": 349718394340, "side": "A", "price": 111.32, "size": 58.2, "user_address": "0x555408..."}
```

Diffs: Tick-Level Book Reconstruction
L4 data is delivered as diffs - individual order mutations streamed in real time:

```json
{"diff_type": "new", "oid": 349718184685, "side": "B", "price": 101.67, "size": 26.719, "user_address": "0x17c3c8..."}
{"diff_type": "update", "oid": 349718184685, "side": "B", "price": 101.67, "size": 14.200, "user_address": "0x17c3c8..."}
{"diff_type": "remove", "oid": 349718184685, "side": "B", "price": 101.67, "size": null, "user_address": "0x17c3c8..."}
```

Three event types: new (order placed), update (size changed by a partial fill or amend), remove (order canceled or fully filled). Apply them sequentially to reconstruct the book at any point in time.
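The replay loop is a few lines of Python. This is a minimal sketch keyed off the diff payloads shown above; treat the exact field names as illustrative:

```python
def apply_diff(book: dict, diff: dict) -> None:
    """Apply one L4 diff to a book shaped as {"bids": {oid: order}, "asks": {oid: order}}."""
    side = "bids" if diff["side"] == "B" else "asks"
    oid = diff["oid"]
    if diff["diff_type"] == "new":
        book[side][oid] = dict(diff)                 # order enters the book
    elif diff["diff_type"] == "update":
        if oid in book[side]:
            book[side][oid]["size"] = diff["size"]   # partial fill or amend
    elif diff["diff_type"] == "remove":
        book[side].pop(oid, None)                    # cancel or full fill

# Replaying the three example diffs above: placed, resized, then removed.
book = {"bids": {}, "asks": {}}
diffs = [
    {"diff_type": "new", "oid": 349718184685, "side": "B", "price": 101.67, "size": 26.719},
    {"diff_type": "update", "oid": 349718184685, "side": "B", "price": 101.67, "size": 14.200},
    {"diff_type": "remove", "oid": 349718184685, "side": "B", "price": 101.67, "size": None},
]
for d in diffs:
    apply_diff(book, d)
print(len(book["bids"]))  # -> 0: the order has left the book
```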
Real-Time WebSocket Streaming
Subscribe to l4_diffs or hip3_l4_diffs on the WebSocket and you get a full book snapshot followed by a live diff stream - everything needed for seamless book construction:
Note: The snapshot includes both resting limit orders and dormant trigger orders (stops, TPs). Trigger orders appear on the opposite side of the spread - a stop-buy above the best ask, a stop-sell below the best bid - so the raw book looks "crossed." Use the L2 BBO as a boundary to separate the visible book from trigger orders. The trigger orders outside that boundary are where stops are clustering.
```python
import asyncio
import json

import websockets

async def stream_l4():
    uri = "wss://api.0xarchive.io/ws?apiKey=YOUR_KEY"
    async with websockets.connect(uri, max_size=20_000_000) as ws:
        await ws.send(json.dumps({
            "op": "subscribe",
            "channel": "hip3_l4_diffs",
            "symbol": "xyz:CL",
        }))

        book = {"bids": {}, "asks": {}}  # oid -> {oid, side, price, size, user_address}
        snapshot_ts = None

        async for msg in ws:
            data = json.loads(msg)

            if data["type"] == "l4_snapshot":
                snapshot_ts = data["timestamp"]
                for order in data["data"]["bids"]:
                    book["bids"][order["oid"]] = order
                for order in data["data"]["asks"]:
                    book["asks"][order["oid"]] = order
                print(f"Snapshot: {len(book['bids'])} bids, {len(book['asks'])} asks")

            elif data["type"] == "l4_batch" and snapshot_ts is not None:
                for diff in data["data"]:
                    # Skip diffs older than snapshot (backfill)
                    if diff["ts"] <= snapshot_ts:
                        continue

                    side = "bids" if diff["side"] == "B" else "asks"
                    oid = diff["oid"]

                    if diff["dt"] == "new":
                        book[side][oid] = {
                            "oid": oid,
                            "side": diff["side"],
                            "price": diff["px"],
                            "size": diff["sz"],
                            "user_address": diff["user"],
                        }
                    elif diff["dt"] == "update":
                        if oid in book[side]:
                            book[side][oid]["size"] = diff["sz"]
                    elif diff["dt"] == "remove":
                        book[side].pop(oid, None)

asyncio.run(stream_l4())
```

Subscribe once, get the full book, apply diffs, stay in sync.
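Separating dormant triggers from the visible book follows directly from the note above: use the L2 BBO as the boundary. A minimal sketch, assuming the `book` dict shape from the streaming example and a best bid/ask sourced from an L2 feed:

```python
def split_visible_and_triggers(book: dict, best_bid: float, best_ask: float):
    """Split a raw L4 snapshot into the visible book and dormant trigger orders.

    Trigger orders sit outside the L2 BBO boundary (e.g. a stop-buy above the
    best ask), which is why the raw book can look "crossed". Visible bids are
    at or below the best bid; visible asks at or above the best ask; anything
    beyond that boundary is treated as a trigger.
    """
    visible = {"bids": [], "asks": []}
    triggers = {"bids": [], "asks": []}
    for o in book["bids"].values():
        (visible if o["price"] <= best_bid else triggers)["bids"].append(o)
    for o in book["asks"].values():
        (visible if o["price"] >= best_ask else triggers)["asks"].append(o)
    return visible, triggers
```

The prices in `triggers` are where stops are clustering before they fire.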
Order Lifecycle: Every Status Change
The orders endpoint captures the full lifecycle of every order - not just placement but every state transition:
- open -> filled
- open -> canceled
- open -> triggered -> filled
- open -> marginCanceled (liquidation)
- open -> reduceOnlyCanceled
- open -> selfTradeCanceled
- open -> siblingFilledCanceled

Each event includes the order type (Limit, Stop Market, Take Profit Market, Stop Limit, Market), time-in-force (Gtc, Ioc, Alo), and flags for trigger orders, TP/SL, and reduce-only.
```shell
curl -H "X-API-Key: YOUR_KEY" \
  "https://api.0xarchive.io/v1/hyperliquid/hip3/orders/xyz:CL/history?start=1773586275000&end=1773590000000&limit=3"
```

```json
{
  "status": "open",
  "order_type": "Stop Market",
  "trigger_price": 111.32,
  "trigger_condition": "Price above 111.32",
  "is_trigger": true,
  "is_position_tpsl": true,
  "reduce_only": true,
  "user_address": "0x5554087bc849098f047699868f63291e2de79611",
  "side": "A",
  "oid": 349718394340,
  "tif": "Gtc"
}
```

TP/SL Clustering: Where Are Stops Stacked?
The dedicated TP/SL endpoint filters to trigger orders only - stop-losses, take-profits, and conditional entries. Use it to see where defensive orders cluster:
```shell
curl -H "X-API-Key: YOUR_KEY" \
  "https://api.0xarchive.io/v1/hyperliquid/hip3/orders/xyz:CL/tpsl?start=1773586275000&end=1773590000000"
```

Combine with the reduce_only flag to separate defensive TP/SL (closing positions) from conditional entries (opening new positions).
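Finding the clusters is a histogram over trigger prices. A sketch, assuming the endpoint returns a list of orders with `trigger_price` and `reduce_only` fields as in the lifecycle example above (the bucket width is an arbitrary choice):

```python
from collections import Counter

def cluster_triggers(orders: list, bucket: float = 0.5):
    """Histogram trigger orders by price bucket to see where stops stack up.

    Splits on reduce_only: defensive TP/SL (closing positions) vs
    conditional entries (opening new positions).
    """
    defensive = Counter()
    entries = Counter()
    for o in orders:
        level = round(o["trigger_price"] / bucket) * bucket
        (defensive if o["reduce_only"] else entries)[level] += 1
    return defensive, entries
```

`defensive.most_common(5)` then gives the five price levels where the most stops are waiting.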
Order Flow Aggregation
The flow endpoint aggregates order activity by time interval - orders placed, filled, canceled, triggered:
```shell
curl -H "X-API-Key: YOUR_KEY" \
  "https://api.0xarchive.io/v1/hyperliquid/hip3/orders/xyz:CL/flow?start=1773586275000&end=1773590000000&interval=1h"
```

Historical Book Reconstruction via REST
Reconstruct the full L4 orderbook at any historical timestamp. The API loads the nearest checkpoint and applies diffs forward automatically:
```shell
curl -H "X-API-Key: YOUR_KEY" \
  "https://api.0xarchive.io/v1/hyperliquid/hip3/orderbook/xyz:CL/l4?timestamp=1773587400000"
```

```json
{
  "symbol": "xyz:CL",
  "timestamp": "2026-03-15T15:10:00Z",
  "checkpoint_timestamp": "2026-03-15T15:05:17Z",
  "diffs_applied": 9104,
  "bid_count": 6414,
  "ask_count": 4434,
  "bids": [
    {"oid": 349728378873, "user_address": "0x88cdbc...", "side": "B", "price": 101.41, "size": 0.123},
    {"oid": 349725603726, "user_address": "0x202513...", "side": "B", "price": 101.40, "size": 4.933}
  ],
  "asks": [
    {"oid": 349725608841, "user_address": "0x399965...", "side": "A", "price": 101.42, "size": 42.673}
  ]
}
```

Every order in the book, with the address that placed it. bid_count and ask_count reflect the full book including trigger orders.
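A useful sanity check is collapsing a reconstructed L4 book back into L2 levels - total size and order count per price should match what an L2 feed shows. A sketch against the response shape above:

```python
from collections import defaultdict

def l4_to_l2(orders: list) -> list:
    """Aggregate individual L4 orders into L2 levels: (price, total size, order count)."""
    levels = defaultdict(lambda: {"size": 0.0, "count": 0})
    for o in orders:
        lvl = levels[o["price"]]
        lvl["size"] += o["size"]
        lvl["count"] += 1
    # round guards against float accumulation noise in the totals
    return sorted((p, round(v["size"], 8), v["count"]) for p, v in levels.items())

# The two bids from the example response collapse to two L2 levels:
bids = [
    {"oid": 349728378873, "price": 101.41, "size": 0.123},
    {"oid": 349725603726, "price": 101.40, "size": 4.933},
]
print(l4_to_l2(bids))
```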
Coverage
L4 data is live for all Hyperliquid perps and 120+ HIP-3 instruments including crude oil (xyz:CL), gold (flx:GOLD), silver (flx:SILVER), equities, and forex.
The same endpoints are available under both /v1/hyperliquid/ (perps) and /v1/hyperliquid/hip3/ (HIP-3).
REST Endpoints
Orderbook
- GET /orderbook/{symbol}/l4 - Reconstructed L4 book at a timestamp
- GET /orderbook/{symbol}/l4/diffs - Raw L4 diffs (cursor-paginated)
- GET /orderbook/{symbol}/l4/history - Checkpoint timestamps and order counts

Orders

- GET /orders/{symbol}/history - Full order lifecycle events
- GET /orders/{symbol}/flow - Order flow aggregation by interval
- GET /orders/{symbol}/tpsl - TP/SL trigger order history

WebSocket Channels

- l4_diffs (snapshot + live diffs), l4_orders (order lifecycle events)
- hip3_l4_diffs (snapshot + live diffs), hip3_l4_orders (order lifecycle events)

Bulk Download
L4 data is available as Parquet files via the Data Catalog. Three schemas available: diffs, checkpoints (reconstruction seeds), and full order history including TP/SL and trigger orders.
Getting Started
Get 14 days of Build free when you sign up at 0xArchive.io/x. No credit card required, no sales calls. Upgrade to Pro when full order-level workflows become daily work.
Limits:
