Solana Transaction Speed: Priority Fees, Compute Units, and Getting Confirmed in 400ms

Optimize Solana transaction confirmation — setting correct compute unit limits, priority fee calculation based on recent blocks, transaction simulation before sending, and handling common confirmation failures.

Your Solana transaction sits unconfirmed for 30 seconds then expires. Priority fees fix this — but only if you calculate them correctly. You’ve read the stats: Solana processes 65,000 TPS theoretical maximum, sustaining 3,000–5,000 TPS in production (Solana Beach, 2025). Yet your simple token transfer is stuck in purgatory. The problem isn't the network's capacity; it's that you're competing in a fee market you don't understand. You're trying to pay highway tolls with pocket lint while everyone else is flashing E-ZPass. Let's fix that.

The Anatomy of a Stuck Transaction: Why Your TX Drops

A Solana transaction isn't a single action; it's a package with a shelf life. You sign a set of instructions with a recent blockhash. That blockhash is your transaction's "born on" date. If the network doesn't bake your transaction into a block before that blockhash expires (after about 150 slots, or ~1 minute), your transaction is dead. Poof. TransactionExpiredBlockheightExceeded.

But expiry is often just the final symptom. The real cause is usually dropping. Solana has no shared mempool; Gulf Stream forwards transactions straight to the upcoming leaders, and those leaders, who are economically rational actors, pick which transactions to pack into their blocks. They sort by fee paid per unit of compute requested. Your transaction's total fee is: Base Fee + (Priority Fee Rate * Compute Units (CUs) requested)

If you set a priority fee of 0 microLamports (the default), your transaction is at the bottom of the pile. When network activity spikes—say, during a hot NFT mint or a Jupiter perp pump—your fee-less TX gets shoved aside indefinitely. It's not broken; it's just economically invisible.
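
The expiry window is easy to track yourself. This is a minimal sketch (the helper names are mine, not library functions) that compares the `lastValidBlockHeight` returned by `getLatestBlockhash()` against the chain's current height from `connection.getBlockHeight()`:

```javascript
// Can a pending transaction still land? Compare the chain's current block
// height to the lastValidBlockHeight returned alongside the blockhash.
function canStillLand(currentBlockHeight, lastValidBlockHeight) {
    return currentBlockHeight <= lastValidBlockHeight;
}

// Rough remaining lifetime in seconds, assuming ~400ms per block
function secondsRemaining(currentBlockHeight, lastValidBlockHeight, msPerBlock = 400) {
    const blocksLeft = Math.max(0, lastValidBlockHeight - currentBlockHeight);
    return (blocksLeft * msPerBlock) / 1000;
}
```

Poll these while waiting on a signature and you know exactly when to stop waiting and rebuild with a fresh blockhash.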

Profiling Your Appetite: How to Find Your Exact Compute Unit Budget

You can't set the right priority fee if you don't know your transaction's compute cost. Solana charges for compute, not storage. Every instruction consumes Compute Units (CUs). The default budget is a measly 200,000 CUs per instruction, capped at 1.4 million per transaction. Blow past that, and you get the dreaded exceeded CU limit (200,000 default) error.

Guessing is for amateurs. You profile. Use the Solana CLI with a local test validator or, better, a devnet RPC.


solana-test-validator --log -q &

# Inspect a transaction you've already sent; -v prints the program logs
solana confirm -v YOUR_TRANSACTION_SIGNATURE --url devnet

Look for the "consumed X of Y compute units" lines in the program logs. For a more programmatic approach in your bot or backend, use @solana/web3.js simulation:

import { Connection, ComputeBudgetProgram, Transaction, PublicKey } from '@solana/web3.js';
import { createTransferInstruction } from '@solana/spl-token';

const connection = new Connection('https://api.devnet.solana.com');
const fromTokenAccount = new PublicKey('...');
const toTokenAccount = new PublicKey('...');
const owner = new PublicKey('...');
const mint = new PublicKey('...');

// Build the transfer instruction
const transferIx = createTransferInstruction(
    fromTokenAccount,
    toTokenAccount,
    owner,
    1000 // amount
);

// Create a transaction and simulate. A legacy Transaction needs a fee payer
// and recent blockhash before simulation (the config-object overload of
// simulateTransaction is for VersionedTransaction).
let tx = new Transaction().add(transferIx);
tx.feePayer = owner;
tx.recentBlockhash = (await connection.getLatestBlockhash('confirmed')).blockhash;
const { value: simulationResult } = await connection.simulateTransaction(tx);

if (simulationResult.err) {
    console.error('Simulation failed:', simulationResult.err);
} else {
    // This is your golden number
    const unitsConsumed = simulationResult.unitsConsumed;
    console.log(`Compute Units Consumed: ${unitsConsumed}`);
    
    // Now build the REAL transaction with the proper limit
    const modifyComputeUnits = ComputeBudgetProgram.setComputeUnitLimit({
        units: unitsConsumed + 1000 // Add a small buffer
    });
    
    tx = new Transaction().add(modifyComputeUnits, transferIx);
    // Now sign and send...
}

This simulation tells you the exact cost. An SPL token transfer typically consumes ~5,000 CUs. Compare that to an Ethereum ERC-20 transfer at roughly 65,000 gas (the famous 21,000 figure is a bare ETH transfer), and you see the raw efficiency, but you still need to budget for it.

| Operation | Solana Compute Units | Ethereum Gas (approx.) | Relative Cost |
| --- | --- | --- | --- |
| SPL Token Transfer | ~5,000 CU | ~65,000 gas | Baseline |
| Simple Anchor Instruction | ~10,000 - 50,000 CU | 50,000 - 100,000 gas | 2-5x more complex |
| NFT Mint (Candy Machine) | ~150,000 - 200,000 CU | 150,000+ gas | Often hits default limit |
| Complex DeFi Swap (Jupiter) | ~200,000 - 1,000,000+ CU | 200,000+ gas | Requires increased budget |

Real Error & Fix: You get exceeded CU limit (200,000 default). The fix is not to guess. Profile with connection.simulateTransaction() and prepend ComputeBudgetProgram.setComputeUnitLimit({ units: YOUR_LIMIT }) to your transaction instructions. Always add a 10-20% buffer for network variability.

Reading the Fee Market: Setting microLamports That Actually Work

Priority fees are denominated in microLamports per Compute Unit. 1 Lamport = 0.000000001 SOL, and 1 microLamport is one-millionth of a Lamport (10^-15 SOL). Your total priority fee is this microLamport rate * the CUs your transaction requests.

If you set 1,000 microLamports per CU and your TX uses 10,000 CUs, your total priority fee is: 1,000 * 10,000 = 10,000,000 microLamports = 10 Lamports = 0.00000001 SOL.
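
That arithmetic is worth wrapping in a helper. A sketch (`totalFeeLamports` is a name I'm introducing, not a library function), with the rate in microLamports per CU and the base fee of 5,000 Lamports charged per signature:

```javascript
// Total fee in Lamports: base fee per signature plus the priority component.
// totalFeeLamports(1000, 10000) → 5010 Lamports
function totalFeeLamports(microLamportsPerCU, computeUnits, signatures = 1) {
    const BASE_FEE_PER_SIGNATURE = 5000; // Lamports
    const priorityLamports = (microLamportsPerCU * computeUnits) / 1_000_000;
    return BASE_FEE_PER_SIGNATURE * signatures + priorityLamports;
}
```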

That's on top of the base fee (5,000 Lamports = 0.000005 SOL, usually). So how do you know what rate to set? You scan recent blocks. Don't rely on a static number; the market moves faster than a memecoin pump.

async function getRecentPriorityFeeStats(connection) {
    // getBlocks takes slot numbers, not timestamps. At ~400ms per slot,
    // 150 slots is roughly the last minute of blocks.
    const currentSlot = await connection.getSlot('finalized');
    const recentBlocks = await connection.getBlocks(currentSlot - 150, currentSlot);
    const sampleBlock = recentBlocks[recentBlocks.length - 1]; // most recent
    
    if (!sampleBlock) return null;
    
    const block = await connection.getBlock(sampleBlock, {
        maxSupportedTransactionVersion: 0,
        transactionDetails: 'full',
        rewards: false
    });
    
    if (!block?.transactions) return null;
    
    // Extract priority fees per CU from transactions in the block
    const feesPerCU = block.transactions.map(tx => {
        const meta = tx.meta;
        if (!meta || meta.err) return 0;
        // Priority fee ≈ total fee minus the base fee (5,000 Lamports per
        // signature; this sketch assumes single-signer transactions)
        const totalFee = meta.fee;
        const baseFee = 5000; // Lamports
        const priorityFeeLamports = Math.max(0, totalFee - baseFee);
        const computeUnits = meta.computeUnitsConsumed || 1; // Avoid div by zero
        // Convert to microLamports per CU
        return (priorityFeeLamports * 1_000_000) / computeUnits;
    }).filter(fee => fee > 0); // Filter out zero-fee txs
    
    if (feesPerCU.length === 0) return { avg: 0, median: 0, min: 0, max: 0 };
    
    const avg = feesPerCU.reduce((a, b) => a + b, 0) / feesPerCU.length;
    const sorted = feesPerCU.sort((a, b) => a - b);
    const median = sorted[Math.floor(sorted.length / 2)];
    
    return {
        avg: Math.ceil(avg),
        median: Math.ceil(median),
        min: Math.ceil(sorted[0]),
        max: Math.ceil(sorted[sorted.length - 1])
    };
}

// Use it: Setting a fee slightly above the median is a good start.
const stats = await getRecentPriorityFeeStats(connection);
const myPriorityFeeRate = stats ? stats.median + 1000 : 5000; // Add a small bump, default to 5000

This method gives you a data-driven rate. During calm periods, the median might be 1,000. During a mint, it could spike to 50,000 or more.
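
If you'd rather not hand-roll block sampling, web3.js also ships the native `getRecentPrioritizationFees` RPC call, which reports per-slot prioritization fees (in microLamports per CU) for roughly the last 150 slots. A minimal sketch of aggregating its output; `pickPercentile` is a helper name I'm introducing:

```javascript
// Aggregate entries of the shape { slot, prioritizationFee } into a single
// rate at the requested percentile.
function pickPercentile(fees, percentile) {
    const sorted = fees.map(f => f.prioritizationFee).sort((a, b) => a - b);
    if (sorted.length === 0) return 0;
    const idx = Math.min(sorted.length - 1, Math.floor((percentile / 100) * sorted.length));
    return sorted[idx];
}

// Usage against a live connection (commented out here):
// const fees = await connection.getRecentPrioritizationFees({
//     lockedWritableAccounts: [new PublicKey('...')], // bias toward your hot accounts
// });
// const rate = pickPercentile(fees, 75);
```

Passing `lockedWritableAccounts` biases the estimate toward the accounts you actually contend for, which matters during a hot mint.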

The Pro Move: Using the Helius Priority Fee API

Manually sampling blocks works, but for production bots or DeFi applications where every millisecond and Lamport counts, use a dedicated API from a premium RPC provider like Helius, which exposes a getPriorityFeeEstimate JSON-RPC method that aggregates fee data across its node fleet (check the Helius docs for current parameters).

async function getHeliusPriorityFee(heliusApiKey, accountKeys) {
    const url = `https://mainnet.helius-rpc.com/?api-key=${heliusApiKey}`;
    const body = {
        jsonrpc: '2.0',
        id: 1,
        method: 'getPriorityFeeEstimate',
        params: [{
            accountKeys, // the writable accounts your transaction touches
            options: { includeAllPriorityFeeLevels: true }
        }]
    };
    
    try {
        const response = await fetch(url, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(body)
        });
        const { result } = await response.json();
        // result.priorityFeeLevels: { min, low, medium, high, veryHigh, unsafeMax }
        return result?.priorityFeeLevels ?? null;
    } catch (error) {
        console.error('Failed to fetch Helius priority fee:', error);
        return null;
    }
}

// Integrate it into your transaction building
const heliusFees = await getHeliusPriorityFee(process.env.HELIUS_API_KEY, [owner.toBase58()]);
const priorityFeeRate = heliusFees ? Math.ceil(heliusFees.high) : 10000;

const modifyComputeUnits = ComputeBudgetProgram.setComputeUnitLimit({ units: 150000 });
const addPriorityFee = ComputeBudgetProgram.setComputeUnitPrice({
    microLamports: priorityFeeRate
});

const tx = new Transaction()
    .add(modifyComputeUnits)
    .add(addPriorityFee)
    .add(yourActualInstruction);

Why Helius? Their p50 response time is ~45ms vs. a public RPC's 200ms+. For a trading bot, that latency difference is the gap between profit and liquidation. They aggregate data across their node fleet, giving you a more accurate market-wide fee estimate than sampling a single block.

The Pre-Flight Checklist: Simulating to Avoid Wasted SOL

Never send a transaction blind. Always simulate. Simulation is free and prevents you from burning SOL on fees for a transaction that will fail. It catches the most common failure modes before they cost you money.

async function sendTransactionWithSimulation(connection, tx, signers) {
    // Step 1: Simulate (legacy Transaction overload; assumes feePayer and
    // recentBlockhash are already set on tx)
    const simulation = await connection.simulateTransaction(tx);
    
    if (simulation.value.err) {
        console.error('❌ Simulation failed:', simulation.value.err);
        // Decode common errors
        if (simulation.value.err === 'BlockhashNotFound') {
            throw new Error('Blockhash expired. Fetch a fresh one.');
        }
        if (simulation.value.logs?.some(l => l.includes('account data too small'))) {
            throw new Error('Program failed to complete: account data too small — fix: increase account size in account struct, recalculate space with 8 (discriminator) + size_of::<YourStruct>()');
        }
        if (simulation.value.logs?.some(l => l.includes('custom program error: 0x1'))) {
            throw new Error('custom program error: 0x1 — fix: this is anchor\'s first custom error; check your Errors enum, use anchor error codes for debugging');
        }
        return null; // Fail fast
    }
    
    console.log(`✅ Simulation passed. CU: ${simulation.value.unitsConsumed}`);
    
    // Step 2: Send for real
    const signature = await connection.sendTransaction(tx, signers, {
        skipPreflight: false, // Let the RPC do a quick preflight check too
        preflightCommitment: 'confirmed'
    });
    
    // Step 3: Confirm with a specific commitment level
    const confirmation = await connection.confirmTransaction({
        signature,
        blockhash: tx.recentBlockhash,
        lastValidBlockHeight: tx.lastValidBlockHeight
    }, 'confirmed'); // 'confirmed' is faster than 'finalized' for most use cases
    
    if (confirmation.value.err) {
        throw new Error(`Transaction confirmed but failed: ${confirmation.value.err}`);
    }
    
    return signature;
}

Real Error & Fix: Transaction simulation failed: Blockhash not found. This is the classic. The fix is procedural: always fetch a fresh blockhash immediately before constructing and signing your transaction. Don't reuse them. Use const { blockhash } = await connection.getLatestBlockhash('confirmed');.

Building a Retry Engine with Exponential Backoff

Even with perfect fees, sometimes you lose the slot lottery. Your retry logic must be robust. The key is to build a new transaction with a fresh blockhash and the same instructions. Do not just re-send the old, signed transaction—it's already expired.

async function sendTransactionWithRetry(
    connection,
    instructions, // The core instructions array
    signer,
    maxRetries = 3,
    baseDelayMs = 500
) {
    let lastError;
    
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        try {
            // 1. FETCH FRESH BLOCKHASH EVERY TIME
            const { blockhash, lastValidBlockHeight } = await connection.getLatestBlockhash('confirmed');
            
            // 2. Rebuild the full transaction (with compute budget instructions if needed)
            const modifyComputeUnits = ComputeBudgetProgram.setComputeUnitLimit({ units: 150000 });
            const addPriorityFee = ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 5000 });
            
            const tx = new Transaction({
                feePayer: signer.publicKey,
                blockhash, // the ctor field is `blockhash`, not `recentBlockhash`
                lastValidBlockHeight
            }).add(modifyComputeUnits, addPriorityFee, ...instructions);
            
            // 3. Sign
            tx.sign(signer);
            
            // 4. Send with simulation wrapper
            const signature = await sendTransactionWithSimulation(connection, tx, [signer]);
            if (signature) {
                console.log(`✅ Confirmed on attempt ${attempt + 1}: ${signature}`);
                return signature;
            }
            
        } catch (error) {
            lastError = error;
            console.warn(`Attempt ${attempt + 1} failed:`, error.message);
            
            // Exponential backoff with jitter
            const delay = baseDelayMs * Math.pow(2, attempt) + Math.random() * 100;
            await new Promise(resolve => setTimeout(resolve, delay));
        }
    }
    
    throw new Error(`Failed after ${maxRetries} retries: ${lastError?.message}`);
}

This pattern is non-negotiable for production systems. It respects the network's state and gives your transaction multiple shots at inclusion.

The Nuclear Option: Jito Bundles for Guaranteed Inclusion

When you absolutely, positively must get your transaction into the next block—think arbitrage, liquidation, or a time-sensitive governance vote—you need more than just priority fees. You need Jito Bundles.

Jito Labs operates a network of searchers and block engines. A "bundle" is a sequence of transactions that are guaranteed to execute in order, all or nothing, if included. Searchers pay validators directly via tips for bundle inclusion. This is the professional tier.

You interact with Jito through its block engine: a JSON-RPC endpoint with a bundle-specific method and transaction format.

import bs58 from 'bs58'; // base58-encode serialized transactions for Jito

// Jito's mainnet block engine bundle endpoint (regional endpoints also
// exist; check Jito's docs for the current list)
const JITO_BUNDLE_URL = 'https://mainnet.block-engine.jito.wtf/api/v1/bundles';

// Your transaction building is the same, but you send it as part of a bundle
// request. Note: a real bundle also needs a tip transfer to one of Jito's
// tip accounts, and full bundle construction often uses Jito's SDK or API.
// This example shows the conceptual shift.

async function sendJitoBundle(transactions) { // Array of signed Transaction objects
    const bundlePayload = {
        jsonrpc: "2.0",
        id: 1,
        method: "sendBundle",
        params: [transactions.map(tx => bs58.encode(tx.serialize()))]
    };
    
    const response = await fetch(JITO_BUNDLE_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(bundlePayload)
    });
    
    return response.json();
}

Using Jito adds cost (your tip + their fee) but provides near-certainty. It's overkill for a simple transfer but essential for the $2B daily volume flowing through Jupiter and other top-tier DeFi where latency is money.

Next Steps: From Working to Optimized

You now have the toolkit to go from expired transactions to reliable, fast confirmations. Your path forward:

  1. Instrument Everything: Add logging for every simulation's unitsConsumed and the priority fee rates you use. Build a historical dataset for your specific transactions.
  2. Benchmark Your RPC: Test the latency and reliability of your RPC provider. Use a simple ping-pong transaction to measure sendTransaction to confirmed time. Remember: Helius 45ms p50 vs public RPC 200ms+.
  3. Implement Dynamic Fee Adjustment: Don't hardcode a fee tier. Have your application logic escalate after each retry failure, from the median (p50) to high (p90) to critical (p99) pricing, balancing cost against urgency.
  4. Consider the Full Stack: Your confirmation time isn't just network latency. It's Anchor program compilation time (~45s for a medium project), the speed of your signing mechanism (a hardware wallet vs. a hot wallet file), and your own bot's event loop. Profile it all.
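
Item 3 can be sketched as a tiny escalation ladder, fed into whichever fee source you use. The thresholds here are illustrative, and `percentileForAttempt` is a helper name of my own, not a library call:

```javascript
// Escalate the fee percentile as retries fail: start cheap at the median,
// then pay up once the cheap attempts miss their slots.
function percentileForAttempt(attempt) {
    if (attempt === 0) return 50; // first try: median
    if (attempt === 1) return 90; // second try: high priority
    return 99;                    // anything further: critical
}
```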

Solana's average transaction cost is $0.00025 vs Ethereum mainnet's $3.50 — that's 14,000x cheaper (Q1 2026). That cheap base fee is a trap. It makes you think you can ignore the fee market. You can't. The real cost is in reliability and speed. Pay a few extra microLamports, profile your CUs, and simulate relentlessly. Your transactions will start landing in 400ms, and that RTX 4090 can get back to running Llama instead of watching pending transactions.