RPC server
RPC servers provide access to the Polkadot/Kusama relay chains and their parachains. Stakeworld runs multiple public archive RPC servers:
- Polkadot: dot-rpc.stakeworld.io
  - AssetHub: dot-rpc.stakeworld.io/assethub
  - BridgeHub: dot-rpc.stakeworld.io/bridgehub
  - Collectives: dot-rpc.stakeworld.io/collectives
- Kusama: ksm-rpc.stakeworld.io
  - AssetHub: ksm-rpc.stakeworld.io/assethub
  - BridgeHub: ksm-rpc.stakeworld.io/bridgehub
  - Encointer: ksm-rpc.stakeworld.io/encointer
- Westend: wnd-rpc.stakeworld.io
  - AssetHub: wnd-rpc.stakeworld.io/westmint
- Paseo: pas-rpc.stakeworld.io
  - AssetHub: pas-rpc.stakeworld.io/assethub
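As a quick sanity check you can connect to one of these endpoints with any WebSocket client and issue a JSON-RPC call. A minimal sketch using wscat (an arbitrary client choice, not part of this setup):

wscat -c wss://dot-rpc.stakeworld.io
> {"id":1,"jsonrpc":"2.0","method":"system_chain","params":[]}
< {"jsonrpc":"2.0","id":1,"result":"Polkadot"}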
Stakeworld RPC data
Requests in millions (updated 28 Sep 2024):
| Chain | 6 months | Per month | Per day |
|---|---|---|---|
| ksmcc3 | 8784 | 1464 | 48 |
| asset-hub-kusama | 68 | 11 | 0 |
| bridge-hub-kusama | 25 | 4 | 0 |
| coretime-kusama | 6 | 1 | 0 |
| encointer-kusama | 16 | 3 | 0 |
| people-kusama | 141 | 24 | 1 |
| polkadot | 7286 | 1214 | 40 |
| asset-hub-polkadot | 4935 | 822 | 27 |
| bridge-hub-polkadot | 40 | 7 | 0 |
| collectives_polkadot | 21 | 3 | 0 |
| coretime-polkadot | N/A | N/A | N/A |
| people-polkadot | N/A | N/A | N/A |
| paseo | 142 | 24 | 1 |
| asset-hub-paseo | 8 | 1 | 0 |
| westend2 | 162 | 27 | 1 |
| asset-hub-westend | 39 | 6 | 0 |
Setting up your own secure RPC server
To interact with the Polkadot, Kusama and parachain networks we need some way into the network. This can be achieved by setting up a node with an RPC server and allowing access to that RPC server via a secure WebSocket (wss) port. The default node setup already exposes a non-secure ws socket on port 9944 (which can optionally be changed with the --ws-port parameter), but for a more usable situation we need a secure WebSocket which is accessible through a public port.
Archive node vs pruned node
A pruned node only knows recent information about the network, not its full history. The most common actions can be done with a pruned node: viewing account balances, making transfers, setting session keys, staking, etc. An archive node has the full history (database) of the network and can be queried in all kinds of ways: transfers since the network started, historical balances, advanced queries about past events, etc.
An archive node requires a lot more disk space. For an archive node you need the options --state-pruning archive --blocks-pruning archive in your startup settings.
Inclusion in the Polkadot.js UI requires an archive node.
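As a minimal sketch (assuming the polkadot binary; exact flag names can differ between versions), an archive node could be started like this:

# start a kusama archive node keeping the full state and block history
polkadot --chain kusama --state-pruning archive --blocks-pruning archive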
Secure the RPC server
Via the node startup settings you can control what your RPC server exposes, how many connections it accepts, and from where.
- How many: set the maximum number of connections with --ws-max-connections, for example --ws-max-connections 100.
- From where: by default only localhost and polkadot.js are allowed to access the RPC server; you can change this with --rpc-cors. To allow access from everywhere use --rpc-cors all.
- What: limit the available methods with --rpc-methods; an easy way to set this to a safe mode is --rpc-methods Safe.
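Combined, a locked-down startup could look like the following sketch (the chain and values are only examples):

# allow at most 100 connections, from any origin, exposing only safe RPC methods
polkadot --chain kusama --ws-max-connections 100 --rpc-cors all --rpc-methods Safe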
Secure the ws port
The ws port should preferably be exposed to the outside as an ssl-secured wss port. The "maintain wss" page on the wiki already covers a lot of information about this, especially in relation to an nginx configuration. This page focuses more on apache2, but the principles are the same. The main idea is converting the non-secure ws port to a secure wss port by putting it behind an ssl-enabled proxy: from the outside one sees the ssl-enabled apache2/nginx/other proxy server, which forwards the request to the internal RPC node.
Using Apache2 for proxying
Apache2 is a little heavier than nginx but also has some more tweaking possibilities. You can run it in different modes: prefork, worker or event. We chose event since this seems best suited for high-load environments. The downside is that you can't use the default php module and need to enable it via php-fpm. The proxy_wstunnel module works out of the box.
apt install apache2
a2dismod mpm_prefork
a2enmod mpm_event proxy proxy_html proxy_http proxy_wstunnel rewrite ssl
If you want to enable php:
apt install php-fpm
a2enmod proxy_fcgi setenvif
Enabling ssl through letsencrypt
There are multiple options for getting an ssl certificate, one popular (and free) option being letsencrypt. Obtaining a letsencrypt certificate can be done through, for example, certbot or lego (which supports more DNS providers).
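For example, with certbot and its apache plugin (the domain is a placeholder):

apt install certbot python3-certbot-apache
certbot --apache -d rpc.example.com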
Add the proxy to the apache2 config
The mod_proxy_wstunnel module provides support for tunnelling WebSocket connections to a backend WebSocket server, upgrading the connection automatically. In an ssl-enabled virtualhost add:
SSLProxyEngine on
ProxyRequests off
ProxyPass / ws://localhost:9944
ProxyPassReverse / ws://localhost:9944
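Put together, a complete virtualhost could look like this sketch (domain and certificate paths are examples):

<VirtualHost *:443>
    ServerName rpc.example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/rpc.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/rpc.example.com/privkey.pem

    # forward all traffic to the local node's websocket port
    SSLProxyEngine on
    ProxyRequests off
    ProxyPass / ws://localhost:9944
    ProxyPassReverse / ws://localhost:9944
</VirtualHost>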
Older versions of mod_proxy_wstunnel do not upgrade the connection automatically and will need the following config added:
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://localhost:9944/$1 [P,L]
RewriteRule /(.*) http://localhost:9944/$1 [P,L]
Tweaking connections
The number of connections is limited by the node itself (--ws-max-connections) but also by the number of threads available on the proxy server. For apache2 this can be tweaked by editing /etc/apache2/mods-enabled/mpm_event.conf. We are using:
StartServers 4
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 128
ThreadsPerChild 128
MaxRequestWorkers 896
MaxConnectionsPerChild 0
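With these values Apache can run up to 7 worker processes of 128 threads each (7 × 128 = 896 = MaxRequestWorkers); if you raise MaxRequestWorkers, keep it a multiple of ThreadsPerChild.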
Rate limiting
Theoretically one client could use all connections, draining the resources of the server and making it inaccessible for others. This can be countered by rate limiting the connections, for example using mod_qos:
apt install libapache2-mod-qos
a2enmod qos
And edit /etc/apache2/mods-available/qos.conf
# allows max 50 connections from a single ip address:
QS_SrvMaxConnPerIP 50
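After enabling the module and editing the configuration, restart apache2 to apply the limit:

systemctl restart apache2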
Be careful when running behind a load balancer (for example cloudflare), because the load balancer will only use a few IPs and can thus trigger the rate limit; in this case it is better to use the rate-limiting options of the load balancer itself.
Load balancing & failover
With multiple servers it is possible to build a load-balancing or even a failover setup. This can range from simple round-robin DNS up to a more advanced (dedicated) load balancer or a content delivery network (CDN) like cloudflare.
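As a minimal sketch, round-robin DNS is just multiple A records for the same hostname (the hostname and addresses below are hypothetical); resolvers rotate between them:

; two servers answering for the same rpc hostname
rpc.example.com.  300  IN  A  192.0.2.10
rpc.example.com.  300  IN  A  192.0.2.20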
Stress testing
You can test basic usage by accessing your server through the polkadot.js UI as a custom endpoint. For example, the staking targets display is RPC intensive and can give you an indication of performance.
There are also more dedicated stress-testing solutions; we have forked the Dwellir repository for our own testing:
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install --lts
npm install -g yarn
yarn global add artillery
yarn global add artillery-engine-substrate
cd /opt
git clone git@github.com:stakeworld/stakeworld-rpc-artillery.git
cd /opt/stakeworld-rpc-artillery
yarn
./run.sh
In run.sh you can edit some variables, like how many connections per second, the wss node, etc. The following is a test run of 10 seconds with 10 connections per second and a maximum of 20 concurrent users:
config:
  target: "wss://ksm-rpc.stakeworld.io"
  processor: "./functions.js"
  phases:
    - duration: 10
      arrivalRate: 10
      maxVusers: 20
After this you get some info about the run and a report is created, which can be used for further diagnostics:
--------------------------------
Summary report @ 23:38:40(+0100)
--------------------------------
vusers.completed: .............................................................. 100
vusers.created: ................................................................ 100
vusers.created_by_name.balance: ................................................ 35
vusers.created_by_name.complex_call: ........................................... 33
vusers.created_by_name.headers_blocks: ......................................... 32
vusers.failed: ................................................................. 0
vusers.session_length:
min: ......................................................................... 231.7
max: ......................................................................... 656.1
median: ...................................................................... 361.5
p95: ......................................................................... 518.1
p99: ......................................................................... 645.6
ws.messages_sent: .............................................................. 163
ws.send_rate: .................................................................. 29/sec
Log file: reports/report.json
Report generated: reports/report.html