How to store your Web3 DApp secrets: Guide to environment variables
As a DApp developer, it is essential to ensure that sensitive information such as private keys, access tokens, and node endpoint URLs is properly safeguarded to prevent unauthorized access. Typically, such information is called a “secret” or an “environment variable”, and there are many possible approaches to handling it.
This guide will explore the various methods for storing environment variable secrets and the best practices for implementing them in your DApp. So, by the end of the guide, you will have a solid understanding of how to securely store environment variable secrets for your Web3 DApp, whether it is in development, staging, or production setting.
And considering just how rampant poor security practices for environment variables are in the industry, this guide is not just nice to have, but essential reading for many Web3 BUIDLers out there, regardless of their level. That being said, let’s dig into the details of DApp secrets and environment variables.
What are environment variables?
Environment variables are values that can be passed to a computer's operating system or an application at runtime. These values can be used to configure the behavior of the operating system or application, and they are often used to store sensitive information such as passwords and private keys. Environment variables are typically set in a script or configuration file and can be accessed by the operating system or application using predefined methods.
In a blockchain context, a Web3 DApp may use an environment variable to specify the account’s private key that it should use to sign transactions, or the network to which it should be deployed. This allows Web3 developers to change the behavior of the DApp without having to make changes to its code base, and it prevents sensitive information from being hard-coded into the DApp's code, where it could potentially be accessed by unauthorized parties.
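For illustration, here is a minimal sketch of how a Node.js-based DApp might read such values at runtime; the variable names SIGNER_PRIVATE_KEY and TARGET_NETWORK are placeholders, not names mandated by any particular framework:
// Read the signer's private key and the target network from the environment
const privateKey = process.env.SIGNER_PRIVATE_KEY;
const network = process.env.TARGET_NETWORK || 'sepolia'; // fall back to a default if unset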
Avoid front-end storage for environment variables
When running an app that sends requests to an API and includes a front end, it can be easy and convenient to store the secret keys directly in the front end, but this also makes them very easy to expose and exploit. This can create a multitude of issues for both you and your users, ranging from data leaks and unauthorized access all the way to the dreaded distributed denial-of-service (DDoS) attack vector.
Example of API keys stored in the front end
Check out this simple web app we built and deployed to showcase such a scenario. The app uses the Etherscan API to retrieve the latest block number on the Ethereum network. Open the browser DevTools, go to the Network tab, and click the Latest block button to fetch the latest block on the Ethereum chain; you will see the request appear in the Network tab and can easily inspect the full API call. Don’t worry; there is no actual secret in this case, as the Etherscan API allows you to make a limited number of requests without an API key.
Hint
Right-click → Inspect → Network
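Had an API key been required, the anti-pattern would look roughly like the sketch below: the key sits in the front-end code and travels with every request, so anyone inspecting the Network tab can read it. The key value is, of course, a placeholder:
// DON'T do this in production front-end code: the key is visible to every visitor
const ETHERSCAN_API_KEY = 'YOUR_API_KEY'; // placeholder value
fetch(`https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey=${ETHERSCAN_API_KEY}`)
  .then((response) => response.json())
  .then((data) => console.log(data.result));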
What are the consequences of storing API keys in the front end?
For several reasons, it is important to avoid using exposed environment variables in a Web3 DApp. Firstly, exposing environment variables can potentially allow unauthorized parties to access sensitive information such as private keys and access tokens. This can lead to preventable security risks that compromise the overall security of your DApp and make it vulnerable to bad actors.
Secondly, exposing environment variables makes it easier for attackers to reverse engineer your DApp and potentially gain access to sensitive information or assets. Lastly, exposed environment variables can also help attackers impersonate your DApp and carry out malicious actions.
Overall, it is best to avoid using exposed environment variables. Instead, use secure methods to store and access sensitive information with your Web3 DApp. Let’s have a look at some of the most popular methods to do just that securely in the following paragraphs.
How to store environment variables securely?
There are several ways to store environment variables securely in a Web3 DApp. In this guide, we will explore a few methods based on use cases.
- Development, testing, and learning environment via an environment or configuration file. The first method involves storing environment variables in a separate configuration or environment file, which can be accessed by the DApp at runtime. This file can be encrypted and password-protected to prevent unauthorized access. Typically, this is done using the dotenv package and a .env file. This is a very good solution for testing or educational environments where you intend to push the app to a remote repository like GitHub, and no front end is involved.
- Custom solution for storing secrets in a secure database via a backend server. A way to store environment variables when using a front end is in a secure database via a backend server, which serves as a proxy. This method allows for fine-grained control over access to sensitive information. It can provide additional security measures such as encryption and access control, but it is also more complex to build and maintain.
- Professional/enterprise level via a secret manager tool. Lastly, there is an extensive list of tools available that can be used to manage and securely store environment variables for a Web3 DApp. Some of the more popular options are Dotenv Vault, Microsoft Azure Key Vault, AWS Secrets Manager, Google Cloud Secret Manager, HashiCorp Vault, and Doppler. These tools often provide features such as encryption and access control to ensure that sensitive information is properly safeguarded (a brief sketch of fetching a secret from one of these tools follows below).
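As a taste of what working with such a tool looks like, here is a minimal sketch of fetching a secret at runtime with AWS Secrets Manager via the AWS SDK for JavaScript v3; the region and secret name are placeholders, and the other tools listed above follow a similar pattern with their own SDKs:
// Fetch a secret at runtime instead of hard-coding it or shipping it in a file
const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

async function getNodeUrl() {
  const client = new SecretsManagerClient({ region: 'us-east-1' }); // placeholder region
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: 'my-dapp/node-url' }) // placeholder secret name
  );
  return response.SecretString;
}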
Overall, the best approach for securely storing environment variables in a Web3 DApp will depend on the specific needs and requirements of your project. It is recommended to carefully evaluate the different options and choose the best approach for your use case’s security and compliance requirements.
Local and development environments
The dotenv package allows you to use .env files to store secrets. Before platform and SaaS alternatives made it into the spotlight, developers kept all their keys and secrets in a .env file that is not committed to a public repo.
While the approach is still used today, it is primarily recommended for local and development environments only. This is because your secrets are just one “commit” away from being exposed, and anyone with access to the .env file basically has all the keys to your kingdom.
Using a .env file helps you mitigate the risk of exposing your API keys, but you still need to be careful when pushing code to a remote repository. To have peace of mind, make sure to include a .gitignore file in your local repository. A .gitignore file holds a list of directories and files that you want to “ignore” when pushing code to the version control platform, for example, GitHub.
Use a gitignore generator to make sure you cover all of the files and directories potentially holding sensitive data.
Example of a .gitignore file:
# Dependency directories
node_modules/
jspm_packages/
# dotenv environment variables files
.env
.env.test
.env.production
Typically, this approach is best suited for small teams, since it gets difficult to keep members of a large team in sync with just a .env file. You can also use the dotenv package to work with .env files easily.
All you have to do is add a require reference for its config method as early as possible in your code. Afterward, you can always fetch a particular secret with a process.env call of the key whose value you want to get.
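A minimal sketch of this pattern, using the CHAINSTACK_NODE_URL key from the steps below, looks like this:
require('dotenv').config(); // load variables from .env into process.env, as early as possible
const nodeUrl = process.env.CHAINSTACK_NODE_URL; // fetch a secret by its key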
How to use the dotenv package
Here’s what you need to do to replicate this for your environment:
- Download and install node.js if you don’t have it already.
- Navigate to your project’s root folder using the CLI.
- Install the dotenv package from your CLI via npm: npm install dotenv
- Create a .env file and enter your secrets in the appropriate format: CHAINSTACK_NODE_URL="https://nd-123-456-789.p2pify.com/API_KEY"
- Enter this as early as possible in your DApp’s JavaScript file to load the package once it runs: require('dotenv').config();
- Fetch your DApp secret like so: const secret = process.env.CHAINSTACK_NODE_URL;
- Confirm your secret is loading correctly in your script by logging the result in the console: console.log(secret); if you did not declare a variable for it, log the key directly with console.log(process.env.YOUR_KEY);
- Run your script and check if the result is correct: running node index.js should print YOUR_VALUE in the console.
Full script example
The best way to learn is through practice, so let's try to replicate the script in the video above step by step. Following is an example of what a .env file looks like, how you can create one and import it in your script, as well as use its content anywhere throughout your JavaScript code.
For this particular case, we will use two secret key-value pairs that closely resemble those you would typically find in a real-world Web3 DApp context, even if it is a rather simple one.
First, create a .env file in your project root and add your “secrets” to it:
ENDPOINT_URL='https://nd-123-456-789.p2pify.com/API_KEY'
ADDRESS='0x95222290DD7278Aa3Ddd389Cc1E1d165CC4BAfe5'
Then use the environment variables in JavaScript:
require('dotenv').config();
const endpoint = process.env.ENDPOINT_URL;
const address = process.env.ADDRESS;
console.log(`Environment variables in use: \n ${endpoint} \n ${address}`);
This repository is a good example of a simple project set up in a development environment that uses .env and .gitignore files to protect sensitive data. If you look into the .gitignore file, you will see a list of files and directories that are not pushed to the remote repository, and you will not find a .env file committed along with other files. To use this project, you need to create your own .env file after you clone the repository.
Approach overview
- Good for testing/small projects/educational content
- Not ideal for big teams
- Does not properly protect API keys
When deploying an app, the API keys will need to be stored in the (public) repo as a .env file, as is the case for Vercel and Heroku, for example. Alternatively, you can enter your production secrets within the platforms themselves for a sense of security, but once keys are pushed to a (public) repo, they are visible to anyone.
Secure front-end usage via custom proxy
When deploying an app with a front end, an option to protect your REST API keys or RPC endpoints is to handle the request using a back-end proxy server.
A proxy server is a computer that sits between a device and the internet. In this case, when you access a website or online service, the request goes through the proxy server. The proxy server then sends the request to the website or service on your behalf and passes the response back to you.
Such a concept is really useful for this use case because you can have your front end send a request to the back-end proxy, which then passes it to the actual API or endpoint. This way, the front end never communicates directly with the API and endpoints, so you don’t need to store the secrets there.
There are several ways to do this, so in this section, you will learn how to build a simple proxy server to protect REST API keys and RPC endpoints.
You will find two examples further:
- A simple web app that uses the Etherscan API to retrieve the latest block number
- Another simple web app that sends a request to an Ethereum endpoint to retrieve an account balance
Build a proxy server to protect REST API keys
For this example, we built a simple app that uses the Etherscan API to retrieve the latest block from the Ethereum network and then display it on the screen.
You can find the source code and how to test it in its GitHub repository here.
The REST API proxy server
In this example, we use the express.js framework to build a simple server that we can use as a proxy to communicate with the Etherscan API.
Express.js is a web application framework for node.js designed for building web applications and APIs. It provides a set of features and tools for building web applications, including routing, middleware, and template engines.
In the repository, the index.js file in the root directory holds the server’s code, and as you can see, it is very straightforward:
// This is the server file.
const express = require('express')
const cors = require('cors')
const rateLimit = require('express-rate-limit')
require('dotenv').config()
const PORT = process.env.PORT || 3000
const app = express()
// Rate limiting, limit the number of requests a user can send within a specific amount of time.
// With this setup, a user can only make 100 requests max every 10 minutes.
const limiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes in ms.
  max: 100 // 100 requests max.
})
app.use(limiter)
app.set('trust proxy', 1)
// Set static folder; this allows our server to pick up the HTML file in the src folder.
app.use(express.static('src'))
// Enable CORS before registering the routes so the headers are applied to them
app.use(cors())
// Routes
// This route looks into the index.js file in the routes folder and picks up the '/' route.
app.use('/api', require('./routes'))
app.listen(PORT, () => console.log(`Server running on port ${PORT}`))
This server runs on the port you specify in the .env file of the project, or port 3000 if you don’t specify it explicitly. It then creates a route to the index.js file inside the routes directory, where the URL is built to then send a GET request to the Etherscan API. Here's the route file:
const express = require('express')
const router = express.Router()
const needle = require('needle') // You could use 'node-fetch' too, but it might have some conflicts.
// Env variables, taken from the .env file.
const ETHERSCAN_API_BASE_URL = process.env.ETHERSCAN_API_BASE_URL
const ETHERSCAN_API_KEY_NAME = process.env.ETHERSCAN_API_KEY_NAME
const ETHERSCAN_API_KEY_VALUE = process.env.ETHERSCAN_API_KEY_VALUE
// Route from the server file.
router.get('/', async (req, res) => {
  try {
    // URLSearchParams allows us to build the URL
    const apiKey = new URLSearchParams({
      [ETHERSCAN_API_KEY_NAME]: ETHERSCAN_API_KEY_VALUE,
    })
    // Build the full API URL using URLSearchParams
    const fullUrl = `${ETHERSCAN_API_BASE_URL}&${apiKey}`
    // Send the request to the Etherscan API, and retrieve the JSON body response.
    const apiResponse = await needle('get', fullUrl)
    const data = apiResponse.body
    res.status(200).json(data)
  } catch (error) {
    res.status(500).json({
      error
    })
  }
})
module.exports = router
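For reference, the .env file this server reads from could look roughly like the following; the variable names match the ones used in the code above, while the values are placeholders (check the repository for the exact expected format):
# Server port (optional, defaults to 3000)
PORT=3000
# Etherscan API base URL, including the query for the latest block number
ETHERSCAN_API_BASE_URL='https://api.etherscan.io/api?module=proxy&action=eth_blockNumber'
ETHERSCAN_API_KEY_NAME='apikey'
ETHERSCAN_API_KEY_VALUE='YOUR_ETHERSCAN_API_KEY'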
This simple solution helps protect your API keys but is not bulletproof. A skilled bad actor could still find where the request is sent to, and although they could not extract your secret API keys, they could still flood the servers with requests. This could severely slow down the service you offer, drive up your costs if you pay per usage, and even prevent access to it entirely in a distributed denial-of-service (DDoS) attack.
To mitigate these issues, we included a rate limiter and enabled Cross-Origin Resource Sharing (CORS) in the server file.
The rate limiter is customizable and set up to allow only 100 requests every 10 minutes.
const limiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes in ms.
  max: 100 // 100 requests max.
})
app.use(limiter)
CORS, meanwhile, is a mechanism that allows a web server to serve resources to a web page from a different domain than the one the page originated from. It lets web pages access resources from APIs or other servers hosted on a different domain, and it lets the server specify which domains are allowed to make requests to it.
It is important for security because it helps prevent malicious websites from making unauthorized requests to servers on behalf of the user and helps to protect against cross-site scripting (XSS) attacks and other types of malicious activity.
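In the Express server above, for instance, the cors middleware accepts an options object where you can restrict the allowed origins instead of leaving it wide open; the domain used here is a placeholder:
// Only accept cross-origin requests from your own front end (placeholder domain)
app.use(cors({
  origin: 'https://your-dapp.example.com'
}))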
What is whitelisting
Whitelisting is a relatively common practice for protecting systems and networks: it is a security measure that maintains a list of approved applications or domains that are allowed to run on a system, network, or device while blocking all others.
CORS, which we mentioned earlier, is an example of this; it provides a way to specify the domains that are allowed to access restricted resources on a web page from an external domain outside of your own.
This approach provides a higher level of security than other security measures, such as blacklisting, which blocks known malicious applications or software. With whitelisting, even if an unknown or new threat arises, it cannot run on the system or network because it is not on the approved whitelist.
Implementing whitelisting is highly recommended when designing your application, and many providers offer such options.
Whitelisting limitations
While whitelisting is a valuable practice that should be included in your DApp design, it's not foolproof and can still be exploited. In the Web3 world, whitelisting is commonly implemented by allowing only specific domains or IP addresses to send requests to a server or RPC endpoint. This can add a layer of protection, but it shouldn't be the only thing you rely on to secure your RPC endpoint—particularly if it's exposed on the front end.
The main attacks whitelists are susceptible to are distributed denial of service (DDoS) and spoofing.
DDoS attacks
Distributed Denial of Service (DDoS) attacks overwhelm targeted systems or networks with massive amounts of traffic from numerous sources, leading to increased resource usage and potentially making them inaccessible to legitimate users.
You might wonder how this issue arises when domains aren't on the whitelist. When a request is sent to an endpoint, the server receives and responds to it regardless of your authorization status. If you have permission, the server will process your request and provide the relevant information. If access is denied, the server will return an error message. Unfortunately, this process can be exploited to consume server resources during a DDoS attack.
Consider the following response examples from whitelisted endpoints:
{"jsonrpc":"2.0","error":{"code":-32002,"message":"rejected due to project ID settings"}}
{"jsonrpc":"2.0","error":{"code":0,"message":"not authorized"},"id":null}
As you can see, the endpoint responds even though we are not allowed to use it, and during a DDoS attack, this might even cause the node to go out of sync and be unusable.
Spoofing attacks
In a spoofing attack, a bad actor pretends to be someone or something trustworthy to sneak into systems, steal private information, or mess up communication between people. The attacker changes or hides information like IP addresses, email addresses, or website links to trick users or systems into thinking they're interacting with a real, safe source.
In our specific case, IP spoofing and bypassing CORS policies are possible exploitations.
During an IP spoofing attack, the attacker changes the source IP address in the packets they send to make it seem like they are coming from a trusted source. This can be used to bypass network security measures, launch DDoS attacks, or deceive users into giving away sensitive information. In our case, this could mean getting around the IP whitelist: the attacker could flood your endpoint with requests, consuming your resources and driving up your costs, and a severe DDoS attack could even bring your service down.
Cross-origin resource sharing is a way to whitelist domains so that only the allowed ones can request restricted web resources. This approach is also vulnerable to attacks, as the CORS policy might be misconfigured or taken advantage of.
The following is a list of precautions that should be followed:
- Properly configure the Access-Control-Allow-Origin header. Ensure that sensitive information is not exposed by specifying the origin in the Access-Control-Allow-Origin header. This should be set to a trusted domain rather than using a wildcard (*) or null value.
- Only allow trusted sites. The origin specified in the Access-Control-Allow-Origin header should only be a site that is trusted. Avoid reflecting origins from cross-origin requests without proper validation, as this is easily exploitable.
- Avoid using null. Avoid using the Access-Control-Allow-Origin: null header, as cross-origin resource calls from internal documents and sandboxed requests can specify the null origin. CORS headers should be properly defined in respect of trusted origins for private and public servers.
- Avoid wildcards in internal networks. Avoid using wildcards in internal networks, as trusting network configuration alone to protect internal resources is not sufficient when internal browsers can access untrusted external domains.
- Use proper server-side security policies. Remember that CORS is not a substitute for server-side protection of sensitive data. An attacker can directly forge a request from any trusted origin. Therefore, web servers should continue to apply protections over sensitive data, such as authentication and session management, in addition to properly configured CORS.
By taking these steps, web developers and administrators can prevent common CORS-based attacks and better protect sensitive data.
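As a hedged sketch of how these precautions can translate into the Express and cors setup used earlier, the origin option can be a function that validates each request against an explicit allowlist instead of reflecting it or falling back to a wildcard; the domains below are placeholders:
// Allowlist of trusted origins (placeholder domains)
const allowedOrigins = ['https://your-dapp.example.com', 'https://admin.your-dapp.example.com']

app.use(cors({
  origin: (origin, callback) => {
    // Allow same-origin/no-Origin requests and explicitly trusted origins only
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true)
    } else {
      // Never reflect unknown origins, and never fall back to '*' or 'null'
      callback(new Error('Origin not allowed by CORS'))
    }
  }
}))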
To learn more, see Exploiting CORS – How to Pentest Cross-Origin Resource Sharing Vulnerabilities from freeCodeCamp.
Implement security measures
Designing a truly secure DApp is not easy, and you should take a comprehensive approach to security from the very beginning as it’s difficult to implement major changes once the project scales. This is why you will often see even big and famous projects with this kind of vulnerability.
To avoid this, you need to take a comprehensive approach to security, using the strategies we talked about and integrating with third-party tools like Cloudflare and NGINX reverse proxy services. These tools are especially helpful for big companies who want to make their DApps as safe as possible.
In addition to third-party tools, consider following the Secure Software Development Lifecycle and the OWASP Top 10 web application security risks.
Connect front end and proxy server
Now that we understand how the back end works, let’s see how it connects to the front end. In our small project, this is done by the script.js file in the src directory.
async function fetchBlock() {
  let displayBlock = document.getElementById("block-number")
  // Fetch the latest block number querying the Etherscan API
  const response = await fetch("/api")
  // Place the JSON response in a constant and log it
  const blockHex = await response.json()
  console.log(`The latest block in HEX is: ${blockHex.result}`)
  // Extract the HEX block value from the JSON
  const blockString = JSON.stringify(blockHex.result)
  //console.log(blockString)
  // Slice the HEX string to drop the surrounding quotes and the 0x prefix
  const slicedBlock = blockString.slice(3, -1)
  //console.log(slicedBlock)
  // Convert and log the decimal block number
  const blockDecimal = parseInt(slicedBlock, 16)
  console.log(`The latest block number is: ${blockDecimal}`)
  displayBlock.innerHTML = blockDecimal
}
As you can see, this script handles the action coming from the HTML. In this case, when the user presses the button, the script sends a request to our backend API, parses the response, and displays it in the HTML.
Check out the repository—you can clone it and run the app locally. This will give you a very good idea of how the logic works. If you inspect the source code, you will only see that the request is sent to the API, but you won’t find any secret.
To use this concept on a deployed app, you’ll have to deploy your back end separately and modify the script.js file to send the request to that address.
Build a proxy server to protect an RPC endpoint
The principle is the same for this scenario; the substantial difference is that, in this case, we have to build the body of the POST request to send to the endpoint.
The following function handles this part:
async function fetchBalance() {
  // Proxy server URL
  // localhost when run locally; you will need to add your server URL once deployed.
  const url = 'http://localhost:4000/balance';
  // Initialize the HTML element where the balance will be displayed in the front end
  let displayBalance = document.getElementById("balance")
  // The Ethereum address to query, picked up from the front end
  const address = document.getElementById("address").value
  console.log(`Address to query: ${address}`)
  // Send a POST request to the proxy server to query the balance
  fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ address })
  })
    .then(response => response.json())
    .then(data => {
      // Handle the response data
      console.log(`The balance is: ${data.balance} ETH`);
      // Slice the response to only show the first 6 numbers
      const slicedBalance = data.balance.slice(0, 7)
      // Display the balance in the front end
      displayBalance.innerHTML = `Balance: ${slicedBalance} ETH`
    })
    .catch(error => {
      // Handle any errors
      console.error(error);
    });
}
Note that we specify the URL to the proxy server in const url = 'http://localhost:4000/balance';. In this case, the server is running on the same machine, but once the server is deployed, you’ll have to replace this with its deployed address.
Then we build the POST request’s body. The response is then cleaned up a bit to display only the first six digits.
The same caveats apply to this approach, so implementing a rate limiter and CORS is again a good way to prevent abuse.
You can find the source code and how to test it in its GitHub repository here.
How to connect the front end with the back end
So far, we've discussed building the back end, but let's now focus on integrating it with the front end.
In the script.js file from the example showing how to protect an RPC endpoint, you'll notice a connection to the back end via a URL like this: http://localhost:4000/YOUR_ROUTE.
This is because the server was initially tested in a local environment.
To make this work on the internet, follow these steps:
- Create a separate repository for the server.
- Add the server files to the new repository.
- Deploy the server using a platform like Digital Ocean.
- Set the environment variables in the deployment platform.
- Replace http://localhost:4000/ in the front-end files with the URL of your deployed server.
By completing these steps, you'll successfully connect the front end to the back end, allowing your application to work online.
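For the last step, the change in script.js is a single line; the deployed server's domain below is a placeholder:
// Before, while testing locally:
// const url = 'http://localhost:4000/balance';
// After deployment, point the front end at your own server's URL:
const url = 'https://your-deployed-proxy.example.com/balance';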
Production, CI/CD, and enterprise application
Now that we have explored some custom ways, let’s see how you could protect your secrets at an enterprise level.
For a cloud-based deployment in production, it is recommended to use a secure service like an online password/secret manager to safely handle your API keys and endpoints. Such services also let you separate your environment variables by permission level, so your whole team can access its credentials from a single source.
For this guide, we will take a closer look at the Dotenv Vault secret manager, which is incredibly user-friendly. It is a perfect way for you to make the transition from development to production, both swiftly and easily, while still offering the exceptional security features you would need for a Web3 DApp in production.
Dotenv Vault sync anywhere
Dotenv Vault effectively utilizes the same approach as .git and GitHub when handling your DApp secrets. This means you can use it as a private repo for everything you normally store in a .env file.
Once you have set everything up in the Dotenv Vault interface, you can use the pull function to instantly sync it locally to any device you want via your CLI:
// CLI
// Connect to your Dotenv Vault project
npx dotenv-vault new INSERT_YOUR_VAULT_ID_HERE_OR_LEAVE_BLANK_TO_CREATE_NEW
// Log into your Dotenv Vault project
npx dotenv-vault login
// Open your Dotenv Vault project to introduce changes
npx dotenv-vault open
// Pull your Dotenv Vault project .env file to local storage
npx dotenv-vault pull
Even if you are just using Dotenv Vault for local file sync, you can also leverage its powerful feature set for your DApp. One such feature is the ability to have different values for each key depending on the environment you are currently working in.
Thanks to this, you can quickly and easily move from development to production environment variables; all you need to do is pull the file for the appropriate environment:
// CLI
// Pull the development .env file
npx dotenv-vault pull development
// Pull the staging .env file
npx dotenv-vault pull staging
// Pull the ci .env file
npx dotenv-vault pull ci
// Pull the production .env file
npx dotenv-vault pull production
While Dotenv Vault’s sync anywhere feature is great already, it also offers exceptionally easy integration with pretty much any platform. This means you can use it natively within your CI/CD workflow without any critical information hitting your HDD. And the best thing about it? Dotenv Vault uses first-party storage for all relevant data, so there is less risk of external partner leaks, a risk that the other alternatives carry.
With Dotenv Vault’s Integrate Everywhere™ approach, you can do encrypted fetching of your environment variables completely in memory. Thanks to this priceless feature, you can rest easy knowing that exposed hardcoded variables and any security risks that originate from them are pretty much a thing of the past.
To use this method, you follow the same procedure as you would during a remote-to-local .env file sync. Once you have the .env file locally, you will need to build it and then fetch its decryption key. With the key safely in your possession, it can be entered into GitHub Actions ⇒ Secrets, as in this example, or any other CI/CD platform like Heroku, CircleCI, Netlify, and Vercel, among many others.
The same goes for other cloud build platforms like AWS, Google Cloud Platform (GCP), Gatsby, and even those supporting the containerized application process like Docker and Kubernetes. And once you have entered your decryption key in the appropriate platform’s settings, all you need to do is make a require reference to the [dotenv-vault-core package](https://github.com/dotenv-org/dotenv-vault-core) as early in your code as possible:
// index.js
require('dotenv-vault-core').config();
const endpoint = process.env.ENDPOINT_URL;
const address = process.env.PUBLIC_KEY;
console.log('Environment variables in use:\n' + endpoint + '...\n' + address);
Overall, the workflow is quite accessible, even for the less tech-savvy, making it a perfect tool for a swift and adequate transition from development to production. Here’s how it works in a GitHub Actions CI/CD workflow:
Other alternatives
Now that you know how the Dotenv Vault workflow plays out, it is important to note there are other secret manager alternatives too. Some of them are built into the corresponding platform, while others are available for external use just as well.
Overall, there are plenty of options to choose from on the market, whether you're looking for a free(mium) tool to cover the basics or a paid one that offers more advanced capabilities. Among the popular examples of alternative online secret managers are Microsoft Azure Key Vault, AWS Secrets Manager, Google Cloud Secret Manager, HashiCorp Vault, and Doppler.
Conclusion
Now that you have made it this far, it should be quite clear that securely storing environment variables is essential for ensuring the seamless operation and integrity of any Web3 DApp. Overall, the most suitable approach for securely storing environment variables will depend on the specific requirements of your project.
By using secure methods such as configuration/environment files, secure databases via a backend server, or secret manager tools, you, as Web3 BUIDLers, can protect sensitive information and prevent unauthorized access. Additionally, following best practices such as encrypting and password-protecting environment variable files and implementing access control measures can further enhance the security of your DApp.
And by taking the time to store environment variables properly, you are effectively taking an important step towards building a secure and reliable Web3 landscape not just for yourself and your project but for everyone involved end-to-end. Isn’t that what we all want in the end?…