Public RPC Dashboard


Problem

RPC providers are essential infrastructure in blockchains, so accurate, real-time data on their reliability and speed matters to every Web3 developer and user. But how do you pick the right provider? The performance of the same provider varies by region and blockchain, and it may change over time.

That's why the Chainstack team has built the Public RPC Dashboard which enables Web3 developers and users to make data-driven decisions when selecting RPC providers.

🔷

Run nodes on Chainstack

Start for free and get your app to production levels immediately. No credit card required. You can sign up with your GitHub, X, Google, or Microsoft account.


Solution overview

The solution consists of two components: the data collection service and the dashboard. The data collection service sends API calls to all providers, measures response times, and pushes the collected data to the dashboard every minute.

The data collection service only records response times and marks each request as successful or failed. All calculations, including averages and other aggregated values, are performed by the dashboard.
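This split can be sketched as follows. The helper below is a hypothetical stand-in for the service's per-request measurement: it records only the raw elapsed time and a success flag, leaving all aggregation to the dashboard (names and the exact success criteria are assumptions for illustration).

```python
import time

def measure_call(call, timeout=35.0):
    """Time a single RPC call and record only raw outcome data.

    Returns (elapsed_seconds, success). No averaging happens here;
    aggregation is the dashboard's job. `call` is any zero-argument
    callable standing in for the actual JSON-RPC request.
    """
    start = time.monotonic()
    try:
        response = call()
        elapsed = time.monotonic() - start
        # Success only if the call returned in time and the JSON-RPC
        # body carries no error object.
        success = elapsed <= timeout and "error" not in response
    except Exception:
        # Network errors and the like also count as failures.
        elapsed = time.monotonic() - start
        success = False
    return elapsed, success
```

A call returning `{"result": "0x1"}` would be recorded as a success, while one returning an `error` object or raising a connection error would be recorded as a failure with its elapsed time.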

Quick start

  1. Navigate to the dashboard
  2. Select your blockchain network: Ethereum, Base, Solana, TON
  3. Choose your region of interest: US West, Germany, Singapore
  4. Review performance metrics

Features

The dashboard displays:

  1. Response time and success rate for common RPC methods
  2. Block reception delay for EVM blockchains
  3. Historical performance trends by provider and region

The solution supports:

  • Blockchains: Ethereum, Base, Solana, TON
  • Regions: US West, Germany, Singapore
  • Providers: Alchemy (paid plan), Chainstack (paid plan), QuickNode (paid plan), Helius (paid plan), TonCenter (free plan)

📘

All metrics are collected every minute. Data retention period is 14 days.


How it works

Data collection service

The data collection service is a set of Vercel serverless functions deployed to multiple regions. A dedicated function collects the metrics for each blockchain. We chose Vercel as our hosting solution for its simplicity and fast time-to-production.
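With one function per blockchain, the per-minute scheduling can be expressed declaratively in `vercel.json`. The function paths below are hypothetical; only the cron mechanism and the every-minute schedule come from this article.

```json
{
  "crons": [
    { "path": "/api/collect_ethereum", "schedule": "* * * * *" },
    { "path": "/api/collect_solana", "schedule": "* * * * *" }
  ]
}
```

The `* * * * *` cron expression fires every minute, matching the dashboard's update interval.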

The methods and their parameters sent to providers are stored in the metrics folder. An example is provided below:

class HTTPAccBalanceLatencyMetric(HttpCallLatencyMetricBase):
    """
    Collects call latency for the `eth_getBalance` method.
    """

    def __init__(
        self,
        handler: "MetricsHandler",  # type: ignore
        metric_name: str,
        labels: MetricLabels,
        config: MetricConfig,
        **kwargs,
    ):
        super().__init__(
            handler=handler,
            metric_name=metric_name,
            labels=labels,
            config=config,
            method="eth_getBalance",
            # Balance of a sample address at the "pending" block tag
            method_params=["0x690B9A9E9aa1C9dB991C7721a92d351Db4FaC990", "pending"],
            **kwargs,
        )

Vercel cron jobs trigger all functions every minute. As its final step, each function pushes the collected data to a Grafana Cloud Prometheus instance.
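The exact push transport isn't shown here, but each sample ultimately has to be serialized with its labels. A minimal sketch of rendering one sample in the Prometheus text exposition format (the function name and label set are assumptions; the real service may use a client library):

```python
def to_exposition(metric_name, labels, value):
    """Render one sample in Prometheus text exposition format.

    Example output:
        rpc_latency_seconds{provider="chainstack",region="germany"} 0.12
    Labels are sorted so the output is deterministic.
    """
    label_str = ",".join(f'{key}="{val}"' for key, val in sorted(labels.items()))
    return f"{metric_name}{{{label_str}}} {value}"
```

Lines in this format are what a Prometheus-compatible backend such as Grafana Cloud's hosted Prometheus ingests.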

The service has the following performance thresholds:

  • Response timeout: 35 seconds
  • Block delay threshold: 35 seconds

Failures include:

  • Timeouts
  • Excessive block delays
  • Non-200 HTTP responses
  • JSON-RPC error responses
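The four failure conditions above can be checked in order. This classifier is a hypothetical helper, not the service's actual code; the 35-second thresholds are the ones stated above.

```python
def classify_failure(status_code, elapsed, body, block_delay=None, timeout=35.0):
    """Return a failure reason string, or None if the request succeeded.

    Mirrors the four failure conditions: timeouts, excessive block
    delays, non-200 HTTP responses, and JSON-RPC error responses.
    """
    if elapsed > timeout:
        return "timeout"
    if block_delay is not None and block_delay > 35.0:
        return "block_delay"
    if status_code != 200:
        return "http_error"
    if isinstance(body, dict) and "error" in body:
        return "jsonrpc_error"
    return None
```

A 200 response with a `result` field inside the time budget is the only case recorded as a success.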

Dashboard

Grafana Cloud with its hosted Prometheus instance provides a hassle-free service. It lets us focus on dashboard quality rather than on maintaining a Prometheus instance ourselves.

Grafana Cloud stores dashboard configurations as JSON files, which makes it easier to support and improve them. You can find the actual JSON files in the solution GitHub repo.

The Prometheus instance stores data for the last 14 days.


For developers

We encourage developers to fork or suggest improvements to the existing solution. This section helps you orient yourself in the code.

The data collection service is built using serverless Python functions. The project follows a modular architecture, structured around metric collection and processing. The core components live in the common folder, containing base classes and configuration. Each blockchain has its own metrics implementation that inherits from these base classes.

Our MetricsHandler manages metric collection and pushes the data to Grafana. It creates metric instances based on the configuration and runs collection in parallel using asyncio.
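The parallel fan-out can be sketched with `asyncio.gather`. The coroutine below is a stub standing in for one metric's collection; the real MetricsHandler issues and times actual RPC calls.

```python
import asyncio

async def collect(metric_name):
    # Stand-in for one metric's collection coroutine; the real version
    # sends the RPC call, times it, and records success (hypothetical stub).
    await asyncio.sleep(0)
    return metric_name

async def collect_all(metric_names):
    # MetricsHandler-style fan-out: all metrics run concurrently, and
    # return_exceptions=True keeps one failing metric from aborting the rest.
    return await asyncio.gather(
        *(collect(name) for name in metric_names), return_exceptions=True
    )

results = asyncio.run(collect_all(["eth_getBalance", "eth_blockNumber"]))
```

Running the metrics concurrently keeps the whole collection cycle well under the one-minute cron interval even when some providers are slow.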

Metrics come in two types: WebSocket for subscriptions such as new block notifications, and HTTP for regular RPC calls. Both inherit from the BaseMetric class, which defines the core metric collection behavior. Each blockchain folder contains metric implementations specific to its RPC methods.
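The class hierarchy can be sketched like this. Class names mirror the ones mentioned in this article and in the earlier example, but the bodies are illustrative assumptions, not the repository's actual code.

```python
from abc import ABC, abstractmethod

class BaseMetric(ABC):
    """Core metric contract shared by HTTP and WebSocket metrics."""
    def __init__(self, metric_name: str):
        self.metric_name = metric_name

    @abstractmethod
    def collect(self) -> dict:
        """Collect one sample for this metric."""

class HttpCallLatencyMetricBase(BaseMetric):
    """HTTP metrics time a single JSON-RPC request/response."""
    def __init__(self, metric_name: str, method: str, method_params: list):
        super().__init__(metric_name)
        self.method = method
        self.method_params = method_params

    def collect(self) -> dict:
        # The real version sends this payload to a provider and times it.
        return {"jsonrpc": "2.0", "id": 1,
                "method": self.method, "params": self.method_params}

class WsSubscriptionMetricBase(BaseMetric):
    """WebSocket metrics subscribe (e.g. to new heads) and measure
    how late each notification arrives."""
    def collect(self) -> dict:
        return {"op": "subscribe", "channel": self.metric_name}
```

A concrete metric like the `eth_getBalance` example earlier only needs to pass its method name and parameters up to the HTTP base class.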

A simplified version of the data collection service architecture is shown below.

The data collection service architecture (simplified)

Configuration lives in environment variables defining endpoints, timeouts, and Grafana credentials. The service expects this configuration as standardized JSON passed through ENDPOINTS and other variables.
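Reading such a variable might look like the sketch below. The JSON schema here is an assumption for illustration; the real keys are documented in the repository's environment examples.

```python
import json
import os

# Illustrative ENDPOINTS payload; treat these keys as assumptions and
# consult the repo's environment examples for the real schema.
os.environ.setdefault("ENDPOINTS", json.dumps([
    {"provider": "chainstack", "blockchain": "ethereum",
     "http_endpoint": "https://example.invalid/rpc"},
]))

# The service parses the variable once at startup into plain dicts.
endpoints = json.loads(os.environ["ENDPOINTS"])
providers = [ep["provider"] for ep in endpoints]
```

Keeping the whole endpoint list in one JSON variable makes redeploying with a new provider a configuration change rather than a code change.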

The project repository contains detailed setup instructions and environment examples.


FAQ

What providers do you monitor?
Alchemy, Chainstack, QuickNode, Helius, and TonCenter. All are monitored on paid plans except TonCenter, which is on the free plan.

Why do I need this dashboard?
It helps you choose the right RPC provider based on real data and monitor provider performance across regions.

How often is data updated?
Every minute.

What counts as a failed request?
Any response slower than 35 seconds, non-200 status codes, or responses containing error messages (as per JSON-RPC specification). For blocks, delays over 35 seconds count as failures.

Can I see historical data?
Yes, dashboards keep 14 days of performance history.

How do you collect the metrics?
We use serverless functions in three regions, measuring response times and success rates from each location.

What API methods do you test?
We focus on commonly used methods like balance checks, transaction simulation, and block queries. Each blockchain has its specific set of tested methods.

What's the difference between global and regional views?
Global shows aggregated performance across all regions, while regional views provide detailed metrics for specific locations.


Conclusion

The Public RPC Dashboard provides Web3 developers and users with a comprehensive tool for monitoring RPC provider performance across different regions. By offering real-time metrics on response times, success rates, and block delays, the dashboard enables its users to make data-driven infrastructure decisions.

About the author

Anton Sauchyk

🥑 Developer Advocate @ Chainstack
💻 Helping people understand Web3 and blockchain development
Anton Sauchyk | GitHub Anton Sauchyk | LinkedIn