TLDR
- Every Solana instruction gets a default compute unit (CU) allocation. Non-builtin programs get 200,000 CUs; builtin programs (System, Stake, Vote) get 3,000 CUs.
- The transaction-level cap is 1,400,000 CUs.
- Priority fees are calculated from the requested CU limit, not actual usage — overspending on CUs wastes money.
- Use `simulateTransaction` to measure actual CU usage, then set `SetComputeUnitLimit` to the measured value + 10% margin.
How compute units work
Every transaction on Solana consumes compute units (CUs). CUs are the Solana equivalent of gas: they measure the computational cost of executing instructions. The runtime enforces a CU budget per transaction; if the transaction exceeds it, execution aborts with `ComputationalBudgetExceeded`.
Default allocation
When you don't include a `SetComputeUnitLimit` instruction, the default budget is calculated from the instruction types:
- Builtin instructions (System Program, Stake, Vote, Compute Budget) — 3,000 CUs each per SIMD-0170
- Non-builtin instructions (your program, Token Program, any SBF-deployed program) — 200,000 CUs each
- The result is clamped to the transaction cap of 1,400,000 CUs
For example, a transaction with two builtin instructions and one non-builtin instruction gets (2 × 3,000) + (1 × 200,000) = 206,000 CUs as the default budget.
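The default allocation rule above can be sketched as a small function (constant names are illustrative; the per-instruction defaults come from SIMD-0170):

```python
BUILTIN_DEFAULT_CUS = 3_000        # System, Stake, Vote, Compute Budget
NON_BUILTIN_DEFAULT_CUS = 200_000  # any SBF-deployed program
TX_CU_CAP = 1_400_000              # transaction-level cap

def default_cu_budget(num_builtin: int, num_non_builtin: int) -> int:
    """Default budget when no SetComputeUnitLimit instruction is present."""
    budget = (num_builtin * BUILTIN_DEFAULT_CUS
              + num_non_builtin * NON_BUILTIN_DEFAULT_CUS)
    # The sum is clamped to the transaction cap
    return min(budget, TX_CU_CAP)

print(default_cu_budget(2, 1))  # 206000
print(default_cu_budget(0, 8))  # 1400000 (8 x 200,000 clamped to the cap)
```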
Why defaults are wasteful
Consider a simple SOL transfer with a priority fee:
- 2 builtin instructions (transfer + `SetComputeUnitPrice`) = 6,000 CUs allocated
- But the actual transfer uses ~450 CUs

At a price of 10,000 micro-lamports per CU, the default allocation costs 6,000 × 10,000 = 60,000,000 micro-lamports in priority fees. If you add `SetComputeUnitLimit` set to 500 CUs (enough for the transfer + margin), the fee drops to 5,000,000 micro-lamports, a 12x reduction.
Compute Budget Program instructions
The Compute Budget Program (ComputeBudget111111111111111111111111111111) has 4 instructions:
| Instruction | Parameter | Type | Description |
|---|---|---|---|
| `SetComputeUnitLimit` | `units` | u32 | Max CUs the transaction may consume |
| `SetComputeUnitPrice` | `micro_lamports` | u64 | CU price in micro-lamports (priority fee) |
| `RequestHeapFrame` | `bytes` | u32 | Heap size for each program (32K–256K, multiple of 1024) |
| `SetLoadedAccountsDataSizeLimit` | `bytes` | u32 | Max total bytes of account data the transaction may load |
- Only one of each instruction variant is allowed per transaction. Duplicates cause a `DuplicateInstruction` error and the transaction fails.
- `SetComputeUnitLimit` accepts any u32 but is clamped to 1,400,000.
- `SetComputeUnitPrice` accepts any u64 (0 to u64::MAX).
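For illustration, the instruction data for the two most common variants can be encoded by hand. This sketch assumes the program's single-byte discriminators (2 for `SetComputeUnitLimit`, 3 for `SetComputeUnitPrice`) followed by the little-endian parameter; in practice you would use an SDK helper instead:

```python
import struct

# Assumed layout: u8 discriminator + little-endian integer parameter
def set_compute_unit_limit(units: int) -> bytes:
    """SetComputeUnitLimit: discriminator 2 + u32 units."""
    return struct.pack("<BI", 2, units)

def set_compute_unit_price(micro_lamports: int) -> bytes:
    """SetComputeUnitPrice: discriminator 3 + u64 price in micro-lamports."""
    return struct.pack("<BQ", 3, micro_lamports)

print(set_compute_unit_limit(500).hex())  # 02f4010000
```

Both instructions target the program ID `ComputeBudget111111111111111111111111111111` and take no accounts, which is why only the data payload varies.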
Optimizing your compute budget
The optimal workflow is:
- Simulate your transaction to measure actual CU consumption
- Add a margin (10–20%) to the simulated value
- Set the CU limit with `SetComputeUnitLimit`
- Set the CU price with `SetComputeUnitPrice` based on recent priority fees
Step 1: Simulate and measure
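A hedged sketch of this step: build a `simulateTransaction` JSON-RPC request and read `unitsConsumed` from the response. The endpoint URL and base64 transaction are placeholders; the response handling below runs against a sample response shape:

```python
def simulate_payload(b64_tx: str) -> dict:
    """JSON-RPC body for simulateTransaction (POST this to your RPC endpoint)."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "simulateTransaction",
        "params": [
            b64_tx,
            # sigVerify off and a fresh blockhash let you simulate unsigned drafts
            {"encoding": "base64", "sigVerify": False, "replaceRecentBlockhash": True},
        ],
    }

def cu_limit_from_simulation(response: dict, margin_pct: int = 10) -> int:
    """Measured CU usage plus a safety margin, ready for SetComputeUnitLimit."""
    used = response["result"]["value"]["unitsConsumed"]
    return used + used * margin_pct // 100

# Abridged example of the response shape
sample = {"result": {"value": {"unitsConsumed": 450, "err": None}}}
print(cu_limit_from_simulation(sample))  # 495
```

Integer math is used for the margin so the result is deterministic; a float multiply like `450 * 1.1` can round oddly.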
Block-level limits
The Solana scheduler imposes block-level CU caps that affect how many transactions fit per block:
- 48,000,000 CUs per block: total compute budget across all transactions
- 12,000,000 CUs per account per block: write-lock limit per account
Priority fee calculation
The total priority fee for a transaction is:

priority fee (lamports) = ceil(CU limit × CU price ÷ 1,000,000)

`SetComputeUnitPrice` is in micro-lamports and the result is in lamports, which is why the product is divided by 1,000,000. The division uses ceiling: non-integer results round up. The fee is always based on the requested limit, not actual usage.
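The ceiling division can be sketched in integer arithmetic; the 10,000 micro-lamport price below is just the example figure from the transfer scenario earlier:

```python
def priority_fee_lamports(cu_limit: int, cu_price_micro_lamports: int) -> int:
    """ceil(limit * price / 1e6): micro-lamports total, rounded up to lamports."""
    total_micro_lamports = cu_limit * cu_price_micro_lamports
    return -(-total_micro_lamports // 1_000_000)  # integer ceiling division

# Default 6,000-CU allocation vs a right-sized 500-CU limit at the same price:
print(priority_fee_lamports(6_000, 10_000))  # 60 lamports
print(priority_fee_lamports(500, 10_000))    # 5 lamports
print(priority_fee_lamports(1, 1))           # 1 lamport (rounds up, never to zero)
```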
Additional resources
- Priority fees for faster transactions — practical guide to setting priority fees
- Estimate priority fees with getRecentPrioritizationFees — dynamic fee estimation
- Analyzing adjacent transactions for priority fees — advanced fee analysis
- simulateTransaction — simulate to measure CU usage