Manual Installation¶
This section describes how to manually install Hyperion and its environment. If you want more control of your installation, this is the way to go.
Warning
If you are running more than one node (Leap/Savanna), you can now configure the failover option directly in the connections.json file. Please refer to the section detailing the new failover parameters.
Info
Review the guidelines for configuring Hyperion for Provider Registration on Qry, a new decentralized ecosystem that provides access to a variety of data services and APIs. Follow the steps outlined in Config Provider steps
Attention
Recommended OS: Ubuntu 24.04
Dependencies¶
Below you can find the list of all Hyperion's dependencies:
- Elasticsearch 9.x
- Kibana 9.x
- RabbitMQ (v 4.x+)
- Redis
- MongoDB 8.x+
- Node.js v22+
- PM2
- NODEOS (Spring 1.2.2+ or Leap 5.0.3)
In the following steps you will install and configure each of them.
Note
The Hyperion Indexer requires Node.js and pm2 to be on the same machine. All other dependencies (Elasticsearch, RabbitMQ, Redis and EOSIO) can be installed on different machines, preferably on a high speed and low latency network. Keep in mind that indexing speed will vary greatly depending on this configuration.
Elasticsearch¶
Follow the detailed installation instructions on the official Elasticsearch documentation and return to this guide before running it.
Info
Elasticsearch is not started automatically after installation. We recommend running it with systemd.
Note
It is very important to know the Elasticsearch directory layout and to understand how the configuration works.
Configuration¶
1. Elasticsearch configuration¶
Edit the following lines on /etc/elasticsearch/elasticsearch.yml:
cluster.name: CLUSTER_NAME
bootstrap.memory_lock: true
The memory lock option will prevent any Elasticsearch heap memory from being swapped out.
Warning
Setting bootstrap.memory_lock: true will make Elasticsearch try to lock all the RAM configured for the JVM heap at startup (see the next step). This can cause the application to crash if you allocate more RAM than is available.
Note
A different approach is to disable swapping on your system.
Testing
After starting Elasticsearch, you can see whether this setting was applied successfully by checking the value of mlockall in the output from this request:
curl -X GET "localhost:9200/_nodes?filter_path=**.mlockall&pretty"
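If memory locking is active, the filtered response should look something like this (the node ID is a placeholder and will differ on your system):

```json
{
  "nodes" : {
    "<node_id>" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}
```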
2. Heap size configuration¶
For an optimized heap size, check how much RAM can be allocated to the JVM on your system while still using compressed object pointers. Run the following command:
java -Xms16g -Xmx16g -XX:+UseCompressedOops -XX:+PrintFlagsFinal -version | grep UseCompressedOops
Check that UseCompressedOops is true in the output, then change -Xms and -Xmx to the desired value.
Note
Elasticsearch includes a bundled version of OpenJDK from the JDK maintainers. You can find it at /usr/share/elasticsearch/jdk.
After that, change the heap size by editing the following lines in /etc/elasticsearch/jvm.options:
-Xms16g
-Xmx16g
Note
Xms and Xmx must have the same value.
Warning
Avoid allocating more than 31GB when setting your heap size, even if you have enough RAM.
3. Allow memory lock¶
Override systemd configuration by running sudo systemctl edit elasticsearch and add the following lines:
[Service]
LimitMEMLOCK=infinity
Run the following command to reload units:
sudo systemctl daemon-reload
4. Start Elasticsearch¶
Start Elasticsearch and check the logs:
sudo systemctl start elasticsearch.service
sudo less /var/log/elasticsearch/CLUSTER_NAME.log
Enable it to run at startup:
sudo systemctl enable elasticsearch.service
And finally, test the REST API:
curl -X GET "localhost:9200/?pretty"
Note
Don't forget to check if memory lock worked.
The expected result should be something like this:
{
"name": "ip-172-31-5-121",
"cluster_name": "CLUSTER_NAME",
"cluster_uuid": "FFl8DNcOQV-dVk3p1JDNMA",
"version": {
"number": "9.3.1",
"build_type": "deb",
"build_hash": "...",
"build_date": "...",
"build_snapshot": false,
"lucene_version": "10.2.1",
"minimum_wire_compatibility_version": "8.18.0",
"minimum_index_compatibility_version": "8.0.0"
},
"tagline": "You Know, for Search"
}
5. Set up minimal security¶
The Elasticsearch security features are disabled by default. To avoid security problems, we recommend enabling the security pack.
To do that, add the following line to the end of the /etc/elasticsearch/elasticsearch.yml file:
xpack.security.enabled: true
Restart Elasticsearch and set the passwords for the cluster:
sudo systemctl restart elasticsearch.service
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Keep track of these passwords; you will need them again soon.
Note
You can alternatively use the interactive parameter to manually define your passwords.
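For example, to define each password manually instead of auto-generating them:

```shell
# prompts for a password for each built-in user
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```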
Attention
The minimal security scenario is not sufficient for production mode clusters. Check the documentation for more information.
Kibana¶
Follow the detailed installation instructions on the official Kibana documentation. Return to this documentation before running it.
Info
Kibana is not started automatically after installation. We recommend running it with systemd.
Note
Like on Elasticsearch, it is very important to know the Kibana directory layout and to understand how the configuration works.
Configuration¶
1. Elasticsearch security¶
If you have enabled the security pack on Elasticsearch, you need to set the password in Kibana. Edit the following lines in the /etc/kibana/kibana.yml file:
elasticsearch.username: "kibana_system"
elasticsearch.password: "password"
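To avoid keeping the password in plain text, you can alternatively store it in the Kibana keystore (a sketch, assuming a default package install):

```shell
# create the keystore if it does not exist yet
sudo /usr/share/kibana/bin/kibana-keystore create
# store the kibana_system password securely, then remove
# elasticsearch.password from kibana.yml
sudo /usr/share/kibana/bin/kibana-keystore add elasticsearch.password
```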
2. Start Kibana¶
Start Kibana and check the logs:
sudo systemctl start kibana.service
sudo less /var/log/kibana/kibana.log
Enable it to run at startup:
sudo systemctl enable kibana.service
RabbitMQ¶
Attention
From Hyperion 4.x, RabbitMQ version 4.x+ is recommended. RabbitMQ 3.12+ is the minimum supported version.
Follow the detailed installation instructions on the official RabbitMQ documentation.
RabbitMQ should automatically start after installation. Check the documentation for more details on how to manage its service.
Configuration¶
1. Enable the WebUI¶
sudo rabbitmq-plugins enable rabbitmq_management
2. Add vhost¶
sudo rabbitmqctl add_vhost hyperion
3. Create a user and password¶
sudo rabbitmqctl add_user USER PASSWORD
4. Set the user as administrator¶
sudo rabbitmqctl set_user_tags USER administrator
5. Set the user permissions to the vhost¶
sudo rabbitmqctl set_permissions -p hyperion USER ".*" ".*" ".*"
6. Check access to the WebUI¶
Try to access RabbitMQ WebUI at http://localhost:15672 with the user and password you just created.
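You can also verify the setup from the command line (USER and PASSWORD are the credentials created above):

```shell
# check that the RabbitMQ node is up and responding
sudo rabbitmq-diagnostics ping
# query the management API with the new administrator user
curl -u USER:PASSWORD http://localhost:15672/api/overview
```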
Redis¶
sudo apt install redis-server
Redis will also start automatically after installation.
Configuration¶
1. Update Redis supervision method¶
Change the supervision configuration from supervised no to supervised systemd in /etc/redis/redis.conf.
Note
By default, Redis binds to the localhost address. You need to edit bind in the config file if you want to listen on other networks.
2. Restart Redis¶
sudo systemctl restart redis.service
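You can then confirm that Redis restarted correctly and is reachable:

```shell
# check the service state
sudo systemctl status redis.service --no-pager
# should reply PONG if Redis is accepting connections
redis-cli ping
```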
MongoDB¶
Attention
MongoDB is required starting from Hyperion 4.x. It is used for state queries (accounts, permissions, proposals, voters) and custom contract state indexing.
Follow the official MongoDB installation guide for Ubuntu.
Quick install for Ubuntu 24.04:
# Import the MongoDB public GPG key
curl -fsSL https://www.mongodb.org/static/pgp/server-8.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-8.0.gpg --dearmor
# Add the MongoDB repository
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg ] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/8.0 multiverse" | \
sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list
# Install MongoDB
sudo apt update
sudo apt install -y mongodb-org
Configuration¶
1. Start MongoDB¶
sudo systemctl start mongod
sudo systemctl enable mongod
2. Verify it's running¶
mongosh --eval 'db.runCommand({ ping: 1 })'
Note
By default, MongoDB runs without authentication. For production deployments, you should enable authentication and create a dedicated user. See the MongoDB Security Checklist.
Tip
The hyp-config connections init wizard will prompt you for MongoDB host, port, user, and password. If running locally with defaults, you can press ENTER to accept all defaults.
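As a sketch of enabling authentication for a production setup (the user name, password, and role shown are placeholders, not values Hyperion requires):

```shell
# create an administrative user before enabling authorization
mongosh admin --eval 'db.createUser({user: "hyperionAdmin", pwd: "CHANGE_ME", roles: [{role: "root", db: "admin"}]})'

# then enable authorization in /etc/mongod.conf:
#   security:
#     authorization: enabled
sudo systemctl restart mongod
```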
NodeJS¶
# installs fnm (Fast Node Manager)
curl -fsSL https://fnm.vercel.app/install | bash
# activate fnm
source ~/.bashrc
# download and install Node.js
fnm use --install-if-missing 22
# verifies the right Node.js version is in the environment
node -v # should print `v22.x.x`
# verifies the right npm version is in the environment
npm -v
Attention
Make sure to configure npm not to use sudo when installing global packages.
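One common way to do this is to point npm's global prefix at a user-owned directory (the directory name here is just a convention):

```shell
# create a user-owned directory for global packages
mkdir -p ~/.npm-global
npm config set prefix ~/.npm-global
# make globally installed binaries visible in your shell
echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

Note that when Node.js is installed via fnm, global packages already go to a user-writable location, so this step may not be necessary.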
PM2¶
npm install pm2@latest -g
Configuration¶
1. Configure for system startup¶
pm2 startup
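pm2 startup prints a command that you must copy and run with sudo to register the systemd unit. Once your processes are running, persist the process list so it is restored on reboot:

```shell
pm2 startup
# run the 'sudo env PATH=... pm2 startup ...' command that it prints,
# then save the current process list for automatic resurrection
pm2 save
```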
Antelope Node (EOSIO)¶
You need either Leap or Spring (Savanna consensus) to serve state history to Hyperion.
Leap (legacy)¶
wget https://github.com/AntelopeIO/leap/releases/download/v5.0.3/leap_5.0.3_amd64.deb
sudo apt install ./leap_5.0.3_amd64.deb
Spring (Savanna consensus)¶
Spring is the successor to Leap, featuring the Savanna consensus algorithm. Use Spring 1.2.2+ for production deployments.
wget https://github.com/AntelopeIO/spring/releases/download/v1.2.2/spring_1.2.2_amd64.deb
sudo apt install ./spring_1.2.2_amd64.deb
Info
Check the latest Spring releases at github.com/AntelopeIO/spring/releases
Configuration¶
Add the following configuration to the config.ini file:
state-history-dir = "state-history"
trace-history = true
chain-state-history = true
state-history-endpoint = 127.0.0.1:8080
plugin = eosio::chain_api_plugin
plugin = eosio::state_history_plugin
Spring config.ini restrictions
Spring v1.2.2+ enforces stricter separation between genesis parameters and runtime configuration.
Resource limit parameters (e.g., max-block-net-usage, max-block-cpu-usage-threshold-us) must be
defined in genesis.json, not config.ini. Placing them in config.ini will cause nodeos to crash
at startup with Unknown option.
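For illustration, such limits belong in the initial_configuration block of genesis.json, using underscores instead of hyphens (the values shown are the common Antelope defaults; adjust them for your chain):

```json
{
  "initial_configuration": {
    "max_block_net_usage": 1048576,
    "max_block_cpu_usage": 200000
  }
}
```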
Hyperion¶
If everything runs smoothly, it's time to install Hyperion!
To do that, simply run the following commands:
git clone https://github.com/eosrio/hyperion-history-api.git
cd hyperion-history-api
npm ci
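With the dependencies in place, the connection settings can be initialized with the interactive wizard mentioned in the MongoDB section (a sketch; run it from the hyperion-history-api directory):

```shell
# prompts for Elasticsearch, RabbitMQ, Redis, and MongoDB connection details
./hyp-config connections init
```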