TLDR:
In the evolving world of blockchain, the efficiency of RPC nodes plays an important role in powering data-hungry decentralized applications (DApps). However, traditional metrics for evaluating these nodes often fail to provide a comprehensive view of their capabilities, especially regarding data fetching efficiency—a critical aspect for any blockchain application. This gap in assessment tools led to the development of Chainstack Compare, a cutting-edge Node Performance Tool designed to address this challenge.
Chainstack Compare offers a unique approach to measuring RPC node performance, focusing on the efficiency of data fetching from blockchain networks. This tool analyzes how EVM-compatible RPC nodes handle real-time blockchain data retrieval, going beyond the standard metrics of latency and requests per second.
In this article, we’ll jump into the world of RPC nodes, uncover the limitations of traditional performance metrics, and introduce the innovative methodology behind Chainstack Compare.
Try Chainstack Compare now.
Traditional methods to test RPC nodes’ performance primarily focus on measuring latency and the number of requests a node can handle per second. While these metrics are useful, they fail to fully capture the node’s ability to handle the complex data-fetching tasks essential for blockchain applications. This limitation becomes particularly evident in applications where the goal is fetching and handling large amounts of data.
The true test of an RPC node’s performance lies not only in its ability to handle requests efficiently but in its capacity to manage and fetch substantial volumes of data. This capability is particularly crucial for DApps that act as data aggregators or API providers, and for institutional users that need to ingest data directly from the source, where the sheer volume and complexity of the data being processed are significantly higher.
Traditional performance metrics fall short for such applications because they do not adequately assess how well a node can handle data-fetching operations.
Chainstack Compare bridges the gap in traditional performance analysis by introducing a pivotal metric: the rate of blocks processed per second (BPS). This metric goes beyond surface-level assessments, offering a deeper insight into a node’s operational efficiency in real-world conditions.
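At its core, the BPS metric is a simple ratio of blocks fetched to wall-clock time. The sketch below illustrates the calculation; the numbers are hypothetical, not actual Chainstack Compare results:

```python
def blocks_per_second(block_count: int, elapsed_seconds: float) -> float:
    # BPS: how many blocks the node served per second of wall-clock time
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return block_count / elapsed_seconds

# Hypothetical run: 127 blocks fetched in 4.2 seconds
print(round(blocks_per_second(127, 4.2), 2))  # → 30.24
```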
Chainstack Compare stands as a crucial asset for developers. It empowers them with a more refined and accurate evaluation of RPC node performance, specifically tailored to the needs of data-intensive DApps.
In this section, we will get into the technical workings of Chainstack Compare, focusing on the underlying methodology and logic that powers its node performance tests. This exploration will explain Chainstack Compare’s approach to evaluating RPC nodes, particularly in fetching and processing blockchain data.
Chainstack Compare uses a comprehensive testing approach to assess the performance of EVM-compatible RPC nodes. The core of this evaluation lies in its ability to fetch and process large amounts of blockchain data, specifically focusing on retrieving the latest 127 blocks. This task is not only about quantity; it also replicates realistic scenarios that an RPC node might encounter in a live blockchain environment.
In the journey to create a better way of testing RPC nodes for data ingestion performance, we quickly learned that there’s more to it than just the nodes themselves. The architecture and programming language play a big role, too. This led to a couple of key decisions.

Fetching blocks one at a time, for example in sequential for loops, quickly becomes a bottleneck. That’s why we decided to use Python’s concurrency features. It’s not just about speeding things up; it’s about making the process smoother and more manageable. Sure, languages like Rust might offer more efficiency, but many of us are familiar with Python. It’s user-friendly and doesn’t scare away beginners.

By taking this route, we’d like to shed some light on the whole process, making it easier for developers to understand what works best when dealing with blockchain data. It’s about finding that sweet spot between technical efficiency and practical usability.
At the heart of Chainstack Compare’s testing process is a Python-based system that leverages asynchronous programming and multithreading to optimize data retrieval and processing. This approach allows the tool to efficiently handle multiple tasks concurrently, significantly enhancing the speed and accuracy of data fetching.
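The asynchronous side can be sketched as follows. The coroutine below is a placeholder standing in for a real awaitable JSON-RPC call; the function names are illustrative, not Chainstack Compare’s actual code:

```python
import asyncio

async def fetch_block(number: int) -> dict:
    # Stand-in for an awaitable eth_getBlockByNumber JSON-RPC call;
    # a real version would await an async HTTP client here.
    await asyncio.sleep(0)
    return {"number": hex(number)}

async def fetch_range(start: int, count: int) -> list:
    # Schedule every request up front, then await them all together,
    # so network round-trips overlap instead of running back to back.
    tasks = [fetch_block(n) for n in range(start, start + count)]
    return await asyncio.gather(*tasks)

blocks = asyncio.run(fetch_range(19_000_000 - 126, 127))
print(len(blocks))  # → 127
```

The key point is that all 127 requests are in flight at once; total time is driven by the slowest response rather than the sum of all responses.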
Chainstack Compare pairs asynchronous programming with a ThreadPoolExecutor for multithreading. This combination enables the tool to initiate and manage multiple block data fetch operations simultaneously, minimizing wait times and maximizing the throughput of data processing. The tests exercise common data-retrieval methods, namely eth_getBlockByNumber, eth_call, and debug_traceBlockByNumber.

Read the following to understand the concept behind the asynchronous and multithreaded processing in Python that Chainstack Compare uses: Mastering multithreading in Python for Web3 requests: A comprehensive guide
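The multithreaded side can be sketched with Python’s standard ThreadPoolExecutor. The rpc_call stub below stands in for a real eth_getBlockByNumber wrapper so the example runs offline; it is a simplified illustration, not Chainstack Compare’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_blocks(rpc_call, latest: int, count: int, workers: int = 60):
    # Fan the per-block requests out across a thread pool so slow
    # network round-trips overlap instead of running sequentially.
    numbers = [hex(n) for n in range(latest - count + 1, latest + 1)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves block order in the returned list
        return list(pool.map(rpc_call, numbers))

# Offline stand-in for a real eth_getBlockByNumber wrapper
blocks = fetch_blocks(lambda num: {"number": num}, latest=19_000_000, count=127)
print(len(blocks))  # → 127
```

Threads work well here because fetching blocks is I/O-bound: each worker spends most of its time waiting on the network, so the GIL is not a practical limit.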
To highlight the effectiveness of Chainstack Compare’s methodology, we conducted benchmarks using Viem, a JavaScript Web3 library. We explored traditional JavaScript methods versus Python’s concurrency approach, which is foundational to Chainstack Compare.
Based on those results, we decided to use concurrency in Python, as it could yield better performance.
After exploring traditional JavaScript fetching methods with Viem, we turned our focus to the concurrency features of Python to assess their impact on performance. The results shed light on Python’s significant advantages in efficiently managing large-scale data fetching tasks.
Our benchmarks were specifically designed to fetch varying numbers of blocks, aiming to understand how the volume of data and the configuration of concurrent workers affect performance. For these tests, we utilized 60 workers to optimize task execution. Here’s what we discovered:
These benchmarks were conducted on an M2 MacBook Pro with a 12-core CPU and 16 GB of memory, using a Chainstack Ethereum Global endpoint.
Start for free and get your app to production levels immediately.
Our benchmark tests clearly show that two main factors impact the efficiency of data processing tasks: the volume of data being processed (measured in blocks) and how many workers are running concurrently.
The hardware used in these tests, specifically the CPU’s multi-core design and fast memory access, significantly reduces the overhead of managing and switching between multiple concurrent tasks, enhancing the throughput of block-fetching operations.
This finding highlights the need to consider the hardware itself when optimizing data processing tasks.
By tuning the number of concurrent workers and making the most of the available hardware, solutions like Chainstack Compare can illuminate how RPC nodes deliver data and how performance varies across providers.
The deployed version of Chainstack Compare runs on more basic hardware: currently, a machine with 2 vCPUs and up to 2 GB of RAM. It still highlights the performance difference between RPC nodes but displays less impressive BPS figures.
At this moment, Chainstack Compare is deployed on the following specs:
In evaluating the performance of an RPC node, the ability to rapidly fetch data is crucial but not the only consideration. This becomes particularly relevant in scenarios where a provider imposes stringent rate-limiting. To address this, Chainstack Compare adopts a comprehensive approach to testing RPC nodes that is both realistic and practical, focusing on several key aspects:
This approach allows Chainstack Compare to offer nuanced insights into RPC node performance, considering the complex interplay between software capabilities, hardware limitations, and provider restrictions.
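One practical example of handling provider restrictions is tolerating rate limits with retries. The wrapper below is a hypothetical helper sketching the idea, not Chainstack Compare’s actual code; it assumes the wrapped call raises RuntimeError when the provider answers with HTTP 429:

```python
import time

def call_with_backoff(rpc_call, retries: int = 5, base_delay: float = 0.5):
    # Retry a rate-limited call with exponential backoff:
    # wait base_delay, then 2x, 4x, ... between attempts.
    for attempt in range(retries):
        try:
            return rpc_call()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * 2 ** attempt)

# Stub that is "rate-limited" twice before succeeding
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # → ok
```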
Chainstack Compare offers a deep dive into RPC node performance that traditional metrics often overlook. Throughout this article, we’ve explored its unique approach to evaluating data fetching efficiency, a critical component for the smooth operation of DApps.
By integrating Python concurrency features, Chainstack Compare highlights the limitations of conventional methods and sets a new standard in node performance analysis. This tool is a testament to the importance of precision and adaptability in the blockchain domain, empowering developers with the insights needed to optimize DApps for the challenges of tomorrow.