Want to run a Fusion node on Linux to participate in staking? Here’s how.

The Fusion Foundation launched its highly anticipated mainnet on 30th June 2019 after six months of successful in-depth testing on their stagenet, allowing anyone holding a minimum of 5000 FSN to participate in staking. The details of the swap from ERC-20 FSN to native mainnet FSN are outlined here; the swap address can be found here. It's a very straightforward procedure. This guide was originally created for stagenet, but has been updated to mainnet. You can find the official mainnet guide here. I'll still keep updating mine with new information and additional details. If you generally want to follow the official guide, but use Google instead of Amazon, you might find my corresponding guide interesting.

Update 2019–10–17: The auto setup script (Fusion Node Manager) is now the recommended method to get a node up and running as well as to perform basic maintenance tasks like node updates. If you used the original guide, you have to migrate manually until I've implemented an automatic upgrade path that avoids a resync. Make sure you have a running backup node or no active tickets to avoid an offline penalty, then migrate like this:

sudo docker stop fusion
sudo rm -rf /var/lib/fusion/ /usr/local/bin/create_node.sh
sudo docker rm fusion
# This is a single command, Medium wraps it automatically!
bash -c "$(curl -fsSL https://raw.githubusercontent.com/Iruwen/efsn/QuickNodeSetup/QuickNodeSetup/fsnNode.sh)"

Alternatively, set up a new node on a different VPS and stop the original node.

The learning curve for setting up a Fusion node is a bit steep if you aren’t a tech person and have never worked with Linux, so I’m gonna do quite a bit of explaining here to get you started.
Please always double check that you're visiting the correct sites for all links in the article, for example by visiting the official site. Imagine my Medium account got hacked and taken over: the attacker could easily redirect you to some phishing site. There are browser extensions to help protect you from phishing attacks, like EtherAddressLookup or MetaCert's Cryptonite, but always be careful.

Please also read the Fusion Foundation’s articles and, if you want to, you can check out their GitHub repositories too. There’s also a lot of information in their knowledgebase.

Before you can setup a node, you need a Fusion wallet. FSN, the native base asset on the Fusion mainnet, will later be needed to buy the tickets required for staking, which works like a lottery. The minimum requirement is 5000 FSN, as that’s the price per ticket. So to stake with more than one ticket, you’ll need multiples of that, say 10000 FSN, 15000 FSN and so on. For ROI projections, see here or here for example.

Note that since Fusion is all about interoperability, you’ll also be able to just keep your ERC-20 tokens, so if you currently have FSN on IDEX for example you can safely keep trading there. In that case you just won’t be able to stake, since that requires native mainnet FSN. You can even move back from native FSN to ERC-20 FSN for a small fee.

Now if you don't already have one, please create a new empty wallet by visiting MyFusionWallet and choosing "New Wallet". The process should be self-explanatory. Make sure you store the public and private key, the keystore file (its name starts with UTC--…) and the corresponding password in a safe place, like KeePass (you should really use KeePass or some other password manager).

The keystore and its password will have to be uploaded to the staking server later. At this point, people usually ask if that’s not extremely insecure (and rightfully so). Well it would be, if this wasn’t Fusion. You can choose if you want to send FSN directly to the dedicated staking wallet, or use time-locked FSN instead (which is recommended). If you don’t know what that means, please read about the time-lock feature here. Your node will be the user in this context, while you remain the owner. The minimum time-lock duration is 30 days, which is the default lifetime of a ticket. To get a full month of staking, you’ll need a time-lock of at least 60 days though because you have to be able to buy a new ticket on the last day, so it’s actually 30+30 days. You can send your staking rewards back to your main wallet (which you can keep on a Ledger or Trezor for example) every now and then to secure them. If a hacker gets access to your staking wallet, you’ll only lose your future staking rewards until the time-lock period ends. Please note that time-locked FSN can still be moved around, so they’re not locked in place, but you’ll always regain full ownership at the end of the locking period.

For the same reason, it’s safe to use staking pools if you own less than the 5000 FSN required to buy a ticket. Please note that I’m not affiliated with any of these platforms, they’re just the ones I currently know about:

Since you're only sending time-locked FSN, they can't run away with your funds. The worst thing that could happen is that they refuse to pay out for a staking period. If you want to mitigate that risk as well, you can arrange an up-front payment of your projected staking rewards by putting a time-locked FSN swap offer on the market. At this point it probably gets confusing, but just note how excitingly powerful and unique Fusion's technology is. You can learn more about the last option here.

This can also have important positive tax implications! For example in Germany, staking is a taxable event. With a FSN price of $10 and a 2.5 FSN block reward, you’d have to pay income tax on $25 every time one of your tickets wins. Now imagine you stake in a pool with continuous payouts and FSN’s value increases to $100, then the tax you’ll have to pay goes x10 as well, as it’s always based on the current valuation of the asset at the time the income is generated. If you get an up-front payment instead, that’s your income. You don’t have to care about future price increases or the extremely complicated tracking and calculation of staking income.

What you’ll need next is a Linux system, obviously. 2 CPU cores and 4GB of RAM as well as 100GB of storage should be sufficient to run a node.
There's a plethora of ways to run a Linux system, but since the node has to be available 24/7, it's a good idea to get an affordable leased physical server (also called "bare metal") or a VPS (Virtual Private Server). The latter is usually the cheapest option by far.
Of course you can also run Linux on your personal PC or even notebook, but remember that you can't shut down the system then. If you do, you'll actually be punished, see further below. If you're normally running Windows or Mac OS, you can install Linux in a virtual machine (using VirtualBox, VMware or Hyper-V for example). Unfortunately Docker doesn't run natively on the Windows Subsystem for Linux (WSL).
Here’s a few examples for VPS providers (you’ll also see the term “cloud computing” or similar):

Hetzner
Vultr
Contabo
DigitalOcean
Google
Amazon

I’m from Germany, so I picked Hetzner which I’m using for other projects anyway, so it’s free for me. A CX31 instance (~10€) should be on the safe side. I wrote a guide on how to use it which you can find here.
I checked out Vultr too and really liked their management interface, so you can definitely give it a try. They even accept Bitcoin payments!
Contabo is super cheap (~4€), but be aware that “super cheap” may have implications like “super heavily used” or “not too reliable”. People get varying results running nodes there. Their offers include quite a bit of storage by default, which is a plus.
I also heard good things about DigitalOcean, especially about their user interface, so I added them to the list. I never tried them myself though.
What’s great about Google is that they offer a free trial, where you get credits worth $300 which you can use to configure and run a VPS meeting your needs. Since it’s a very versatile and professional platform, the entry is a bit less straightforward than with the other providers, so I wrote a short guide to get you started which you can find here.
I personally don’t like Amazon because of their pricing model and user interface, but that’s just me, and if you have a professional account with them, you can even claim credits worth $1500 (scroll down a bit) from the Foundation! There’s also a link to a setup guide.

Whatever you choose, make sure you order a Linux system. It doesn’t really matter which distribution you run, although Ubuntu is probably the most common choice and I’ll focus on it here (version 18.04 to be specific). The process is similar for all distributions, but commands might differ slightly, especially when installing packages. Ubuntu is based on Debian, so instructions for these two will be almost identical.

After you ordered your server, you have to be able to access it using an SSH/SCP client with either username and password or a public and private key pair (you know the concept of public and private keys from your wallets, it’s the same basic principle). Your provider probably offers some help on this, and I also wrote an extensive article about SSH which you can find here. You can use clients like KiTTY, PuTTY or MobaXterm to access a Linux system’s shell from Windows, and WinSCP or FileZilla to transfer files. If you’re working with Linux or Mac OS, chances are you already know how to use SSH on the shell. You can also use your smartphone and a client like Termius, which I cover in my Hetzner guide.
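
For example, connecting from a Linux or Mac OS terminal looks roughly like this; the IP address is just a placeholder, use your server's address and the username your provider gave you (Windows users enter the same details in their SSH client's GUI instead):

ssh root@203.0.113.10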

Note that pasting copied text, which you may want to do sometimes during this guide as the longer commands are hard to remember and type, doesn't work by pressing Ctrl + v in an SSH session for historical reasons. Use Shift + Insert instead, or right-click inside the terminal window.

Long commands might be wrapped for readability. This works by putting a backslash (\) at the end of each line; watch out for that when you're copying and pasting commands. If there's a backslash at the end of a line, it's always a single command wrapped over multiple lines. The shell interprets this correctly.
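
To give a trivial example, the following two lines form a single command; the backslash tells the shell that the command continues on the next line:

echo "this is" \
     "one single command"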

Filenames are case sensitive on Linux, as opposed to Windows, so check that you got the capitalization right in case of file/command not found errors.

A word on notation: you’ll see the terms “argument”, “option” and “parameter” in this guide. In the command tail -f -s 5 /var/log/syslog, everything after tail is an argument, -f and -s are options and the 5 is a parameter (so options and parameters are also arguments).

I've seen a few cases where commands containing quotation marks failed, possibly because they had been copied through some editor which turned them into typographic ones ("curly quotes"). Watch out for that, it might not be easy to spot on the shell. Use Notepad++ or some other proper text editor and you're good. Don't use Microsoft Word or the likes.
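
You can see the effect yourself: the shell strips proper straight quotes, but treats typographic ones as ordinary characters, so they end up as part of your arguments (here, in the output):

echo 'straight quotes are stripped by the shell'
echo ’curly quotes are passed along literally’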

Now you should have a running Linux system and be able to access its shell (the prompt where you enter commands). Here's where the actual work starts. I don't want people to copy & paste instructions blindly without understanding what they're doing, so I'll explain what's actually going on with each step. I also put a tl;dr version at the end of the article though which you can use as a copy & paste template, but if you're stuck please read the full instructions before asking. We'll all gladly help you on Telegram, but reading first reduces the amount of trivial questions we get, so we can focus on the complicated stuff. Reading the instructions thoroughly also makes it easier for you to figure out what's wrong on your own, in case something is different on your side for example. Usually you'll get an error message which literally tells you what's wrong if you read it thoroughly!

I’ll be prepending sudo to many commands here. sudo lets you execute commands as a different user, usually root. If you’re root already this wouldn’t be necessary, but it also doesn’t hurt. Note that on Linux, root is god. It’s way more powerful than the Administrator user on Windows. root doesn’t ever ask for permission. That means you don’t have to care about file permissions and such, but also that you can make your system disappear into nirvana with a single malformed command (you may have seen the infamous #rm -rf / before, please… don’t). On a dedicated VPS running only the node, working as root doesn’t really matter from a security perspective.
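
You can see what sudo does with a harmless command: whoami prints the name of the user you're currently acting as, and with sudo in front it should print root:

sudo whoami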

Every shell command will have ~# or > in front of it throughout this article, depending on which shell you’re in, so you know which lines are commands and which ones are output. This indicator doesn’t belong to the command, so don’t copy it or you’ll get a command not found error. I’ll leave it out for the short instructions at the end of the article though.

You can kill running processes by hitting Ctrl + c. If your system is under heavy load, waiting for I/O (input/output, meaning it's trying to read or write data somewhere) or with special processes, this might not work. Or you're dropped back to the shell, but the process in question keeps running invisibly. You can always open a new shell or screen session (see below) to continue. When in doubt, simply reboot the system to get to a clean state; there's absolutely nothing wrong with using that brute force method here.
Sometimes you'll also see the key combination Ctrl + z. That means "move the current foreground process to the background". I've been using Linux for over 15 years now and never used that once; there are specific cases where it makes sense, but it's mostly a relic.

I'll also use screen. Screen allows you to keep your shell active, even if your SSH client disconnects. There's a 99% chance you want to do that. You may have to install it first (apt-get install screen); otherwise just enter

~# screen -S fusion

-S gives the screen session an easy to remember name. You’ll now be in a new shell. You can check that it worked by executing

~# screen -ls
There is a screen on:
        12407.fusion     (12/31/2018 05:28:01 PM)     (Attached)

The output might look slightly different, but this means there’s a screen session running and attached. If your SSH connection drops and you have to reconnect, use

~# screen -dr

and you'll reattach. It'll pick up where you left off. If there are multiple sessions, you have to specify the one you want to attach to like this:

~# screen -dr fusion

You have to use its actual name of course. The -d option in both commands makes sure the session is detached before you’re trying to reattach, as you can only attach once. If you’re lazy, you can also use

~# screen -RR

which means “attempt to resume the youngest detached screen session you find”. Check out the manual page (man screen) for more options.

You can detach from an attached screen session by pressing Ctrl + a followed by the character d, then start another session. That way you can use multiple shells in a single SSH connection.
Scrolling back is a bit complicated in screen, see here how it works.

Other people prefer tmux, which is also a great tool. I’ll leave that to you.

From this point on you have two choices: run an auto setup script (the Fusion Node Manager), which automates the following steps, or enter the required commands manually. If you don’t care about the how and why, you can just run this command and have your node setup automatically:

~# bash -c "$(curl -fsSL https://raw.githubusercontent.com/Iruwen/efsn/QuickNodeSetup/QuickNodeSetup/fsnNode.sh)"

Warning: don’t ever blindly copy this command from a different site, from Telegram or whatever; always double check that it’s the correct address. The Github account is either https://github.com/FUSIONFoundation for the official Foundation’s repositories or https://github.com/Iruwen for mine. If someone subtly replaces the repository URL with a different one pointing to a malicious script, your funds are at risk! You can mitigate that to a degree by only and always keeping time-locked FSN in your staking wallet.

You can review the script code here. I forked and improved the rudimentary Foundation script by adding lots of checks and features; unfortunately my pull request to make it the official version, which also includes some fixes to the node itself, has been pending for months now without any communication, for whatever reason. Note that the script only supports Ubuntu 18.04 LTS at the moment. I'm constantly improving it, so if you want to see additional features, just ask and I'll see what I can do.

If you want to know what’s going on or need further customizations, read on from here and complete the setup the manual way. Note that I’m using /var/lib/fusion as the base directory for the setup throughout the rest of this guide, while the setup script relies on /home/$USER and a special configuration file to keep it compatible with the Foundation’s version, so you can’t just mix the two methods.

First update the distribution’s package sources and install Docker. Docker is an application container platform. A container is similar to a virtual machine, but more lightweight. It bundles everything required to run a node in an image and keeps it isolated from the rest of the system. On Ubuntu, update and installation is super easy:

~# sudo apt-get update
~# sudo apt-get install docker.io

On other distributions, additional steps might be required. Fortunately, the Docker documentation is very good. Look up the procedure for the distribution you’re running, it’s only a few steps in any case.
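
Once that's done, you can quickly verify that Docker is installed and its service is up and running:

~# sudo docker --version
~# sudo systemctl status docker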

The container only contains static data, so it needs some place where it can write things like databases. You have to create a directory for that. I like to do stuff the Linux way, so I'm using /var/lib/fusion. You're free to use any other location of course, but make sure you use it consistently in all steps and that the directory is on a volume/partition with sufficient space.

~# sudo mkdir /var/lib/fusion

This is where your keystore and corresponding password.txt (see below) have to be placed. The keystore’s filename (not content) looks like this:

UTC--2018-12-29T00-19-08.227Z--0x0000000000000000000000000000000000000000

You downloaded it when you created your Fusion wallet. If you have to recover the keystore for your wallet, you can do so by downloading the MyCrypto desktop app. Login with your private key and you’ll find an option to generate a new keystore (this works because Fusion and Ethereum are compatible). Don’t rename the file, or the node won’t recognize it. Best don’t mess with the file at all, simply upload it to the server using a method of your choice, to the directory you just created. The password for the keystore has to be saved in a file called password.txt. It must contain nothing besides the password. You can create it locally and upload it the same way you did with the keystore, or do this:

~# sudo apt-get install nano
~# sudo nano /var/lib/fusion/password.txt

nano is a commandline text editor. vi is probably already installed, but nano is easier to use. You can start typing right away and only have to press Ctrl + x to exit and save. Your directory should now look like this:

~# sudo tree /var/lib/fusion/
/var/lib/fusion/
├── password.txt
└── UTC--2018-12-29T00-19-08.227Z--0x0000000000000000000000000000000000000000
0 directories, 2 files

tree simply visualizes a directory structure (might also not be installed by default on your distribution).
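
In case you created password.txt locally as well, you can upload both files in one go with scp. Run it on your local Linux or Mac OS machine, not on the server; the IP address is a placeholder and the keystore filename has to be adjusted to yours:

scp password.txt UTC--2018-12-29T00-19-08.227Z--0x0000000000000000000000000000000000000000 root@203.0.113.10:/var/lib/fusion/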

It’s time to create the Docker container now. Put that command in a script, so you don’t always have to type it again. You can create it wherever you want, in your home directory or in /usr/local/bin for example.

~# sudo nano /usr/local/bin/create_node.sh

Paste the following lines into your editor:

#!/bin/sh
docker stop fusion >/dev/null 2>&1
docker rm fusion >/dev/null 2>&1
docker rmi fusionnetwork/minerandlocalgateway >/dev/null 2>&1
docker create --name fusion -t --restart unless-stopped \
-p 40408:40408/tcp -p 40408:40408/udp -p 127.0.0.1:9001:9001/tcp \
-v /var/lib/fusion:/fusion-node \
   fusionnetwork/minerandlocalgateway \
-u '0x0000000000000000000000000000000000000000' \
-e 'SuperAwesomeFusionNode' -a

This probably needs some explaining:

1) #!/bin/sh is called a shebang and tells the system that this is a script which should be interpreted by the sh shell.
2) docker stop stops an existing container, if there is one.
3) docker rm removes that container, so you can reuse its name.
4) docker rmi removes the image the container was created from, to make sure you definitely use the most recent one.

>/dev/null 2>&1 generally sends a command’s output to nowhere, because you really don’t have to care if any errors occur during this preparation.

5) docker create creates a new container by downloading the image containing all the stuff your node needs to work and configuring it initially.
6) --name assigns a fixed name to this container. This makes it easier to stop, update and start it later. If you use something different here you'll have to change it everywhere throughout this guide, so better stick with it for now.
7) -t attaches a so called pseudo TTY to the container, making it more intuitive for you to control.
8) --restart unless-stopped restarts the container if it exits (e.g. when it crashes). You can also try on-failure instead of unless-stopped, but on-failure depends on a proper non-zero exit code so I'm playing it safe here.
9) -p publishes a port, so it's reachable from the outside (meaning outside the container, not necessarily the internet). The node uses port 40408 for both TCP and UDP to communicate with other nodes and port 9001 TCP for its gateway functionality. The latter should not be reachable from the internet.
10) -v /var/lib/fusion:/fusion-node mounts the directory /var/lib/fusion into the container as a directory called /fusion-node. Remember that the container only contains static data and isolates the node from the rest of the system? You’re breaking that up here, so the node can read from and write to your disk. Change this if you used a different directory.
11) fusionnetwork/minerandlocalgateway is the online repository you’re pulling the image from. It’ll automatically use the latest version.
12) -u '0x0000…' tells the node which wallet to unlock. Obviously, this has to be changed. It's the same address that appears in your keystore's filename.
13) -e 'SuperAwesomeFusionNode' specifies the node's publicly visible name. It will be shown in the Network Monitor and can contain UTF-8 characters, so you can be creative here.
14) -a enables the auto buy ticket feature, so you can participate in staking.

Options to Docker always have to go before the image being used to create the container (fusionnetwork/minerandlocalgateway in this case), or they would be used as arguments to something running inside the container:
docker create [OPTIONS] IMAGE [COMMAND] [ARG…]
I really recommend internalizing this kind of notation, it'll help you a lot when you're working with shell commands. Almost every command's usage instructions will follow it.
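
You can see that usage line for yourself by asking Docker for help on the subcommand:

~# docker create --help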

Save the file and make it executable:

~# sudo chmod +x /usr/local/bin/create_node.sh

You can now run the script like this:

~# sudo /usr/local/bin/create_node.sh

You should then see output similar to this:

Unable to find image 'fusionnetwork/minerandlocalgateway:latest' locally
latest: Pulling from fusionnetwork/minerandlocalgateway
cd784148e348: Pull complete
aae2b9957249: Pull complete
135bacd7005e: Pull complete
2f0dd2c3b0f8: Pull complete
69e47c3d05cd: Pull complete
Digest: sha256:31d6602c2a09b2b298563d8c99167edb163d5e0439b2d0802897736a9035c0c0
Status: Downloaded newer image for fusionnetwork/minerandlocalgateway:latest
ad762cc9b7ae458b952880efd878d016f2d8f7cdc25e21326ae03f51673c5648

This means Docker pulled the latest Fusion image from the repository and created a new container from it. You can check that it worked like this:

~# sudo docker ps -a

This should show a fresh container named fusion.

The create_node.sh script is idempotent, meaning you can run it multiple times and always get the same result without leaving unneeded data behind. You'll also use it to update the node in the future!

You are now ready to actually start the node by starting its container:

~# sudo docker start -a fusion

The -a option means attach, otherwise the container would be started in the background and you couldn’t see what the node outputs to the shell. You’ll see output similar to this if everything worked correctly:

flags: --datadir /fusion-node/data --password /fusion-node/password.txt --mine --ethstats SuperAwesomeFusionNode:[email protected] --unlock 0x0000000000000000000000000000000000000000 --autobt --rpc --ws --rpcaddr 0.0.0.0 --rpccorsdomain 0.0.0.0  --wsapi "eth,net,fsn,fsntx" --rpcapi "eth,net,fsn,fsntx" --wsaddr 0.0.0.0 --wsport 9001 --wsorigins=* --rpcport 9000
INFO [03-28|13:20:16.935] Maximum peer count                       ETH=25 LES=0 total=25
INFO [03-28|13:20:16.937] Starting peer-to-peer node               instance=Efsn/v1.8.16005-stable-0b8a15fa/linux-amd64/go1.12.4
INFO [03-28|13:20:16.937] Allocated cache and file handles         database=/fusion-node/data/efsn/chaindata cache=768 handles=1024
INFO [03-28|13:20:16.954] Writing default main-net genesis block
INFO [03-28|13:20:17.863] Persisted trie from memory database      nodes=18616 size=3.35mB time=182.031614ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [03-28|13:20:17.864] Initialised chain configuration          config="{ChainID: 88666 Homestead: 0 DAO: 0 DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: <nil> Engine: datong}"
INFO [03-28|13:20:17.864] Initialising Ethereum protocol           versions="[63 62]" network=55
INFO [03-28|13:20:17.864] Loaded most recent local header          number=0 hash=fd2311…0e69a1 td=1 age=4w1d18h
INFO [03-28|13:20:17.864] Loaded most recent local full block      number=0 hash=fd2311…0e69a1 td=1 age=4w1d18h
INFO [03-28|13:20:17.864] Loaded most recent local fast block      number=0 hash=fd2311…0e69a1 td=1 age=4w1d18h
INFO [03-28|13:20:17.865] Regenerated local transaction journal    transactions=0 accounts=0
INFO [03-28|13:20:17.865] Starting P2P networking
INFO [03-28|13:20:19.975] UDP listener up                          self=enode://[email protected][::]:40408
INFO [03-28|13:20:19.976] Stats daemon started
INFO [03-28|13:20:19.977] RLPx listener up                         self=enode://[email protected][::]:40408
INFO [03-28|13:20:19.982] IPC endpoint opened                      url=/fusion-node/data/efsn.ipc
INFO [03-28|13:20:19.983] HTTP endpoint opened                     url=http://0.0.0.0:9000        cors=0.0.0.0 vhosts=localhost
INFO [03-28|13:20:19.984] WebSocket endpoint opened                url=ws://[::]:9001
INFO [03-28|13:20:20.043] Unlocked account                         address=0x0000000000000000000000000000000000000000
INFO [03-28|13:20:20.043] Transaction pool price threshold updated price=1000000000
INFO [03-28|13:20:20.043] Transaction pool price threshold updated price=1000000000
INFO [03-28|13:20:20.043] Etherbase automatically configured       address=0x0000000000000000000000000000000000000000

This means the node launched and registered with the Network Monitor, started the peer to peer (P2P) communication with other nodes on the network and unlocked your wallet (account).

If you see this error when the container starts, don’t worry yet:

cp: can't stat '/fusion-node/UTC*': No such file or directory

This doesn’t come from the node itself but from the script that starts it, which first tries to copy the keystore to the directory where the node expects it:

cp /fusion-node/UTC* /fusion-node/data/keystore/

If the keystore is in /fusion-node/data/keystore/ already, it’ll still work.

Sometimes your node might crash, like any software. The container will automatically restart in that case, but it won’t reattach to your shell on its own, so it will look like it’s gone for good because you don’t see its output anymore. You’ll face the same problem after you open a new SSH session (and didn’t use screen). The container will still be active in the background though, which you can check by entering docker ps. To see what it’s doing again, use this command:

~# docker logs fusion --tail=25 -f

This means “show the container’s output, but only the last 25 lines, and then follow up on everything that’s coming after”.

Your container will automatically restart when your node crashes, but not if your system goes down or if you reboot intentionally, for example because you installed kernel updates or had to rescale your VPS or resize your storage volume. To make it start automatically at boot time, you have to create a new systemd service. systemd is the system and service manager which takes care of starting, stopping and monitoring almost all processes running on your system; it’s sitting at the very heart of it.

~# ps 1
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:03 /sbin/init

PID 1 is the one to rule them all, the first userspace program with the lowest process ID, started directly after the kernel image finished booting.

First create a new systemd service file:

~# sudo nano /etc/systemd/system/fusion.service

Then paste the following lines into it:

[Unit]
Description=Fusion Node
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
ExecStart=/usr/bin/docker start fusion
ExecStop=/usr/bin/docker stop fusion
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

I won’t make it too detailed here, if you want to know what each line means, check out the systemd service documentation.

Save the file, then run the following commands to load and enable the new service definition:

~# systemctl daemon-reload
~# systemctl enable fusion

This doesn’t actually start the container, it only loads the new config and tells systemd to do so automatically at boot time. Theoretically you can also use systemd’s commands to start and stop it right now:

~# systemctl start fusion
~# systemctl stop fusion

You can also stick with the docker start/stop commands you already know though. systemd is really powerful, but we're intentionally using the simplest possible approach here where it doesn't really care about our container after it was started, because Docker already handles restarts after crashes and such. Use docker ps and docker logs as previously explained to see if the node actually started after a reboot and to see its output.
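
If you want to check how systemd sees the service at any time, systemctl status shows its state together with the last few log lines:

~# systemctl status fusion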

A few words about that -p option you used when you created the container above: Docker exposes a few ports by default, according to the container’s configuration, but it doesn’t publish them. This means they’re not reachable from the internet via your public IP address. From the docs:

The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.

Now -p 40408:40408/tcp -p 40408:40408/udp basically means “map the container’s port 40408 to the system’s port 40408, and do it for both TCP and UDP”. These ports are used for discovery (UDP, finding other nodes) and general communication (TCP, syncing the blockchain and other stuff).
-p 127.0.0.1:9001:9001/tcp on the other hand means “map the container’s port 9001 to the system’s port 9001, but only on the so called loopback interface, so it’s not reachable from the internet, and for TCP only”. That port is used to access the node’s Websocket API.

Do not add any other ports here, and don’t change these rules without good reasons. That would be dangerous and could lead to a loss of funds!

You can check your published ports like this (I left out some of the output for better readability):

~# docker ps
CONTAINER ID ... PORTS
43d07fa39f5d ... 9000/tcp, 9000-9001/udp, 40407/tcp, 40407/udp, 127.0.0.1:9001->9001/tcp, 0.0.0.0:40408->40408/tcp, 0.0.0.0:40408->40408/udp

You see all exposed ports (not reachable from the outside), as well as your published ports 40408 and 9001 (reachable from the outside, recognizable by the -> indicator).

To see if it actually worked, you can go to this site and enter your node’s public IP address and port 40408. This only works for TCP ports, because UDP is connectionless and doesn’t really “listen”. Your public IP address is the one you used to connect to your server via SSH, you can get it from your provider’s management interface or by running this command:

~# curl -s https://api.ipify.org

The port check should return “open”, otherwise something is still wrong. If you do the same for port 9001 and it’s open, you messed up somehow. In that case, please stop the container and check your configuration.
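
You can also double check locally that port 9001 is only bound to the loopback interface; the output should show 127.0.0.1:9001, not 0.0.0.0:9001 or :::9001:

~# netstat -nlp | grep ':9001.*LISTEN'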

Docker automatically adds a firewall exception when publishing ports, so if a port isn’t reachable, it’s often an external issue. Run the following commands to check that everything is fine internally first, the output should look similar:

~# netstat -nlp | grep '40408.*LISTEN'
tcp6    0    0    :::40408    :::*    LISTEN    9158/docker-proxy
~# iptables -L | grep 40408
ACCEPT    tcp    --    anywhere    172.17.0.2    tcp dpt:40408
ACCEPT    udp    --    anywhere    172.17.0.2    udp dpt:40408

Google Cloud servers for example have an additional security layer which is enabled by default. Check out their documentation on how to open ports, or look here for easier instructions. Other providers might have similar mechanisms in place.
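
On Google Cloud, a matching firewall rule for the P2P port can be created with the gcloud command line tool. Treat the following as a sketch with an arbitrary rule name, check their documentation for the details, and run it from Cloud Shell or wherever you have the gcloud SDK installed:

gcloud compute firewall-rules create fusion-p2p --allow tcp:40408,udp:40408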

Your node will now try to find other nodes (peers, sometimes you might also see the term dials), so it can sync the latest blockchain data and participate in the network. After some time, you should start seeing output similar to this:

INFO [04-05|15:22:33.296] Imported new chain segment blocks=1 txs=1 mgas=0.022 elapsed=2.045s mgasps=0.011 number=45100 hash=4e8a7b…4ad158 cache=0.00B difficulty=2451 miner=0x0000000000000000000000000000000000000000 end.root=b1f3d7…73f0b9 parentHash=b37ceb…c95689 time=1554477738 cache=0.00B

This means your node is working properly and started importing blocks. Congratulations, if you made it to this point, you already have a running Fusion node!

You’ll also have to watch and maintain your node though, like checking its resource usage or installing updates. Remember the name you gave the docker container with the docker create command? That’ll come in handy now. To see information about your running containers, enter

~# sudo docker ps

You'll see the ID and name of the container you created earlier, among other things. If you hadn't specified a name, Docker would create a random one which you'd have to look up every time. You'll see that it's simply fusion now. To get some info about what your node is doing inside the container, enter

~# sudo docker stats

This tells you how much it stresses out your CPU, how much RAM it’s using and the amount of network traffic and disk I/O it generates.

To stop the container, you can simply use

~# sudo docker stop fusion

It’ll take a few moments before the container actually stops. This is slower but preferable to the docker kill command you’ll also sometimes see, because everything that’s running in the container (the node in this case) has a fair chance to exit to a clean state that way. The node constantly writes database files to disk for example, which might be interrupted by killing it relentlessly. You can use docker ps again to check if the container is really stopped; it should return an empty list in that case.

To update the node, simply run the create_node.sh script again:

~# sudo /usr/local/bin/create_node.sh

Then you can start the container:

~# sudo docker start -a fusion

Please note that a simple docker pull like this is not sufficient:

~# sudo docker pull fusionnetwork/minerandlocalgateway

This only pulls a new base image, but doesn’t overwrite the container you already created from it. Always run create_node.sh to update the node.

After you’ve experimented with Docker for some time, there may be a bit of unused data remaining on your disk. To clean that up, you can run

~# sudo docker system prune

This only removes dangling (orphaned) images, not dynamic data like the synced blockchain, so you don’t have to start syncing from scratch.

If you want to see what the node did while you haven’t been watching, when it crashed for example, you can use

~# sudo docker logs fusion

This will potentially throw a very, very long wall of text at you (stop it with Ctrl + c). To make it more usable, you can limit its output like this:

~# sudo docker logs fusion --since=5m
~# sudo docker logs fusion --tail=100

The first command shows what happened in the last five minutes, the second one returns the last 100 lines. By adding -f you can follow the most recent output, which will then continuously be printed to the console.

Update the Linux system itself like this, the procedure is specific to Ubuntu (and Debian) though and will vary for other distributions:

~# apt-get update
~# apt-get dist-upgrade
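
On Ubuntu you can check afterwards whether one of the updates (a new kernel for example) requires a reboot; the following prints a message if the marker file exists and nothing otherwise:

~# [ -f /var/run/reboot-required ] && cat /var/run/reboot-required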

Sometimes it might be necessary to purge the synced blockchain data, for example if the database becomes corrupted for some reason. In this case, stop the container as described above, and then simply remove the chaindata directory altogether:

~# sudo rm -rf /var/lib/fusion/data/efsn/chaindata/

If you ever see the efsn removedb command and wonder what that does: exactly the same, only more complicated.

Similarly, you can purge all information about other nodes you’ve seen:

~# sudo rm -rf /var/lib/fusion/data/efsn/nodes/

That should generally not be necessary though.

Afterwards the container can simply be started again; the node will take care of everything by itself and recreate the required directories.

Note that if you remove the /var/lib/fusion/data/efsn/ directory altogether, your node ID (the unique public key it’s identified by) and thus enode URL will change, because that is derived from a file called nodekey which is also stored in that tree. This is especially relevant if you added nodes as static or trusted peers (see part 2). You can do that on purpose to start with a completely fresh and “unseen” node though.

Always be very careful when you’re removing directories as root; remember that you can purge the whole system that way.

A final note: Docker comes with a very good help system. You can easily get extensive help for all of its commands at any time by looking at its man pages (manual pages), which are one standard way of documenting things in Linux. To show a list of all existing commands, do this first:

~# docker help

You’ll see a pretty long list now, some of the commands you already know, most you’ll probably never need. If you want to know what exactly a command does and how it’s used, you can read its man page like this:

~# man docker create

man pages exist for almost every Linux command, not only Docker.
You can return back to the shell at any time by pressing q.

In part 2 we’ll change some advanced settings, compile from the source code, see if we can run more than one node on a single server and generally take a closer look at its inner workings and the way it communicates in a truly decentralized network.

Here's an Ubuntu-specific tl;dr version skipping all the explanations. Please read the whole article before asking on Telegram if this doesn't work for you. I'm leaving out the ~# here and also added some options disabling interactive prompts, so you can more easily copy and paste the commands. The lines starting with # are comments, so they will be ignored if you accidentally paste them to the shell. They describe what you have to do manually in between. Some commands are wrapped for readability by putting a backslash (\) at the end of each line.

To setup the node:

sudo apt-get update
sudo apt-get -y install docker.io nano
sudo mkdir -p /var/lib/fusion
# copy the UTC-- keystore to /var/lib/fusion
sudo nano /var/lib/fusion/password.txt
# enter the keystore password and save the file
sudo nano /usr/local/bin/create_node.sh
# paste the following lines into the file:
#!/bin/sh
docker stop fusion >/dev/null 2>&1
docker rm fusion >/dev/null 2>&1
docker rmi fusionnetwork/minerandlocalgateway >/dev/null 2>&1
docker create --name fusion -t --restart unless-stopped \
-p 40408:40408/tcp -p 40408:40408/udp -p 127.0.0.1:9001:9001/tcp \
-v /var/lib/fusion:/fusion-node \
   fusionnetwork/minerandlocalgateway \
-u '0x0000000000000000000000000000000000000000' \
-e 'SuperAwesomeFusionNode' -a
# adjust -u and -e to your needs and save the file
sudo chmod +x /usr/local/bin/create_node.sh
sudo /usr/local/bin/create_node.sh
sudo docker start -a fusion

To make it start at boot time:

sudo nano /etc/systemd/system/fusion.service
# paste the following lines into the file:
[Unit]
Description=Fusion Node
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
ExecStart=/usr/bin/docker start fusion
ExecStop=/usr/bin/docker stop fusion
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
# save the file and execute the following commands
sudo systemctl daemon-reload
sudo systemctl enable fusion

To update the node:

sudo nano /usr/local/bin/create_node.sh
# make sure that the file contains exactly the same
# commands as shown in the setup instructions above
# you can press Ctrl + k to remove lines / clear it
sudo /usr/local/bin/create_node.sh
sudo docker start -a fusion

If you need additional help, please join the official Telegram groups:
https://t.me/FUSIONFoundation for general Fusion discussions
https://t.me/FsnDevCommunity for development and node info

I’m also running a German-speaking group: https://t.me/FUSION_German

Become part of a friendly and helpful community, we’ll happily guide you!

Also follow these announcement channels for updates:
https://t.me/fusionannouncements
https://t.me/fusiondevelopersannouncement

My Telegram username is @Iruwen, you can find me in the above groups as well as in CoinMarketCap’s great international community where we discuss general crypto topics. I’ll gladly try to help you out with technical questions of all sorts.

If you think this was helpful in any way, feel free to buy me a beer!

FSN: 0x0afAB9b6dA9FBb79f3260F71E4a17d4AF9AC1020
ETH: 0x0afAB9b6dA9FBb79f3260F71E4a17d4AF9AC1020
BTC: 16yAtsdjzEaQbH8ucK1nbtkqrpo791EZ7a