Clock Staleness Remediation
How to fix Clock sysvar usage in financial logic to prevent post-halt exploitation.
Overview
Related Detector: Clock Staleness
After a Solana network halt, the Clock sysvar resumes from a stale timestamp. Programs that use timestamps in financial logic (vesting, expiration, TWAP) can be exploited in the window between restart and clock catch-up. The fix is to enforce staleness bounds that reject operations when the clock is too far behind expected values.
Recommended Fix
Before (Vulnerable)
pub fn claim_vested_tokens(ctx: Context<Claim>) -> Result<()> {
    let clock = Clock::get()?;
    let vesting = &ctx.accounts.vesting;

    // VULNERABLE: no staleness check
    if clock.unix_timestamp >= vesting.unlock_time {
        token::transfer(ctx.accounts.transfer_ctx(), vesting.amount)?;
    }
    Ok(())
}
After (Fixed)
const MAX_STALE_SLOTS: u64 = 600; // ~4 minutes at 400ms/slot

pub fn claim_vested_tokens(ctx: Context<Claim>) -> Result<()> {
    let clock = Clock::get()?;
    let global = &mut ctx.accounts.global_state;

    // FIXED: reject if the clock appears stale relative to the last-seen slot
    let slot_delta = clock.slot.saturating_sub(global.last_known_slot);
    require!(
        slot_delta <= MAX_STALE_SLOTS || global.last_known_slot == 0,
        ErrorCode::ClockStale
    );

    let vesting = &ctx.accounts.vesting;
    if clock.unix_timestamp >= vesting.unlock_time {
        token::transfer(ctx.accounts.transfer_ctx(), vesting.amount)?;
    }

    // Update tracking so the next call has a fresh baseline
    global.last_known_slot = clock.slot;
    global.last_known_timestamp = clock.unix_timestamp;
    Ok(())
}
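The staleness guard in the fixed handler is a pure comparison, so it can be extracted into a standalone function and unit tested without an Anchor runtime. The sketch below assumes the same `MAX_STALE_SLOTS` constant and treats a `last_known_slot` of 0 as "never initialized", matching the `require!` condition above; the function name is illustrative.

```rust
const MAX_STALE_SLOTS: u64 = 600;

/// Returns true when the clock should be rejected as stale.
/// A `last_known_slot` of 0 means "never initialized" and is allowed.
fn is_clock_stale(current_slot: u64, last_known_slot: u64) -> bool {
    if last_known_slot == 0 {
        return false;
    }
    // saturating_sub guards against a stored slot ahead of the current one
    current_slot.saturating_sub(last_known_slot) > MAX_STALE_SLOTS
}

fn main() {
    assert!(!is_clock_stale(1_000, 0));   // first use: allowed
    assert!(!is_clock_stale(1_000, 900)); // 100 slots elapsed: fresh
    assert!(is_clock_stale(10_000, 900)); // 9_100 slots elapsed: stale
}
```

Keeping the predicate pure makes the boundary behavior (first use, exact threshold) easy to cover in tests.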
Alternative Mitigations
1. Use slot numbers instead of timestamps
Slot numbers are monotonically increasing and less susceptible to manipulation than timestamps:
pub fn check_deadline(clock: &Clock, deadline_slot: u64) -> Result<()> {
    // Slots advance predictably; less affected by clock drift
    require!(clock.slot <= deadline_slot, ErrorCode::DeadlinePassed);
    Ok(())
}
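To use slot-based deadlines, a wall-clock duration must first be converted into a slot count. A minimal sketch, assuming Solana's nominal 400 ms slot time (actual slot times vary slightly, so deadlines derived this way are approximate); the function name is illustrative:

```rust
const MS_PER_SLOT: u64 = 400; // nominal slot time; real slots vary slightly

/// Convert a wall-clock duration into an absolute slot deadline.
fn deadline_slot(current_slot: u64, duration_secs: u64) -> u64 {
    current_slot + (duration_secs * 1_000) / MS_PER_SLOT
}

fn main() {
    // A 1-hour deadline is 9_000 slots ahead at 400 ms/slot.
    assert_eq!(deadline_slot(50_000, 3_600), 59_000);
}
```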
2. Emergency pause mechanism
Implement an admin-controlled pause that activates after network restarts:
pub fn process(ctx: Context<Process>) -> Result<()> {
    let config = &ctx.accounts.config;
    require!(!config.paused, ErrorCode::Paused);
    // Normal processing...
    Ok(())
}

pub fn set_pause(ctx: Context<Admin>, paused: bool) -> Result<()> {
    ctx.accounts.config.paused = paused;
    Ok(())
}
3. Tolerance windows for time comparisons
Instead of exact timestamp checks, use tolerance ranges:
const TOLERANCE_SECONDS: i64 = 600; // 10-minute tolerance

pub fn check_expiry(clock: &Clock, expiry: i64) -> bool {
    // Allow operations only if the clock is within tolerance of expiry
    clock.unix_timestamp >= expiry
        && clock.unix_timestamp <= expiry + TOLERANCE_SECONDS
}
Common Mistakes
Mistake 1: Only checking timestamps, not slots
// WRONG: timestamp alone can be stale after a halt
if clock.unix_timestamp > last_update + 300 {
    return Err(ErrorCode::Stale);
}
Timestamps are derived from validator votes and can lag. Cross-check both slot and timestamp for robust staleness detection.
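One way to cross-check is to compare the elapsed slots against the elapsed timestamp: if many slots have passed but the timestamp has barely moved, the timestamp is lagging. A sketch, where the 400 ms slot time and the 2x tolerance factor are assumptions to tune, not protocol guarantees:

```rust
const MS_PER_SLOT: i64 = 400; // nominal slot time

/// Returns true when the observed timestamp delta is roughly consistent
/// with the elapsed slots (within a 2x factor in either direction).
fn timestamp_consistent(slot_delta: u64, timestamp_delta_secs: i64) -> bool {
    let expected_secs = (slot_delta as i64 * MS_PER_SLOT) / 1_000;
    timestamp_delta_secs * 2 >= expected_secs && timestamp_delta_secs <= expected_secs * 2
}

fn main() {
    assert!(timestamp_consistent(1_000, 400));   // 1_000 slots ~ 400 s: consistent
    assert!(!timestamp_consistent(100_000, 10)); // huge slot gap, tiny time delta: lagging clock
}
```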
Mistake 2: Setting staleness threshold too high
// WRONG: a 24-hour staleness window is too permissive
const MAX_STALE: i64 = 86400;
Network restarts typically resolve within minutes to hours. A staleness threshold of 5-15 minutes is appropriate for most DeFi applications.
Mistake 3: Not storing last-known values
// WRONG: no reference point to detect staleness
let clock = Clock::get()?;
if clock.unix_timestamp > expiry { ... }
Without storing the last-seen slot and timestamp, there is no baseline to compare against for staleness detection.