$ xmrhost-cli notes show --slug=lokinet-exits-and-oxen-staking
[$ ] note: lokinet-exits-and-oxen-staking
// Lokinet exits and Oxen staking — the operational guide for service-node operators
// 2026-04-26 · diff=advanced · read=17min · tags=[lokinet, oxen, vps, exit, staking] · by=0xLambda
// ABSTRACT
abstract
Lokinet differs from Tor and I2P in one structural way: every routing node also stakes Oxen. Operating a Lokinet exit is therefore simultaneously a hosting decision, a staking decision, and an uptime-SLA decision — drop the staking, drop the node. This note covers the wallet workflow for the staking transaction, the `oxenmq` configuration for the service node, the exit-mode pieces specific to Lokinet (vs. a regular routing-only service node), and the operational discipline that keeps the stake intact.
What “service node” means in the Oxen context
The Oxen network is a Monero fork that replaces Monero’s pure-PoW consensus with a hybrid PoW + service-node layer. A service node is a special-status node that stakes a fixed amount of OXEN (currently 15,000 OXEN as of the most recent staking-requirement vote) and, in exchange, runs the Oxen daemon, the oxen-storage-server (used by Session for offline message delivery), and the Lokinet routing daemon. The staked OXEN earns a per-block reward that the operator can either compound or withdraw. [MRL-0011] The Oxen whitepaper, §4 — service-node selection mechanism
A service node that’s healthy gets paid; a service node that fails its uptime checks for too many consecutive blocks gets deregistered, which means: staked OXEN is locked in a 30-day timelock before it can be withdrawn, the node’s slot in the swarm is rotated out, and the operator has to start over from scratch (and re-stake) to come back online.
This note is about running such a service node, with the Lokinet exit role enabled, on an offshore VPS. It is opinionated about the operational discipline because the financial stake makes the discipline non-optional.
Two things to be clear about before procurement
One: the brand sells lokinet-exit plans (brand spec §3.1), but the OXEN stake is the operator’s, not ours. We provide the hardware, the network connectivity, and the configuration shape; the operator brings their own wallet, their own keys, and their own staking transaction. There is no custodial-staking option on this brand. If you don’t already hold the OXEN required for the stake, the brand’s /payments page covers the Monero-to-OXEN bridging discipline (short version: don’t use a centralised exchange; use a non-KYC swap service after reading the swap-service threat model in our /notes on that topic).
Two: running an exit-enabled service node is a meaningfully higher-attention workload than running a routing-only one. Exits carry clearnet traffic on behalf of Lokinet clients; the exit IP is the apparent originator from the perspective of upstream services. The abuse-handling mailbox discipline from the Tor relay note applies here verbatim — same abuse@ mailbox shape, same 24-hour response SLA, same notice page on port 80.
Step 1 — install the Oxen stack
The Oxen team publishes apt repos for Debian/Ubuntu. Install the keyring, add the repo, install oxen-service-node (which pulls in oxend, oxen-storage-server, and lokinet-bin):
# Install the Oxen apt-source.
curl -so /etc/apt/trusted.gpg.d/oxen.gpg https://deb.oxen.io/pub.gpg
echo "deb https://deb.oxen.io bookworm main" > /etc/apt/sources.list.d/oxen.list
apt update
# oxen-service-node pulls everything needed: oxend, lokinet, storage-server.
apt install -y oxen-service-node lokinet-bin
Verify versions:
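A sketch of the check, assuming both binaries accept `--version` (fall back to `oxend --help` / `lokinet --help` if yours does not); the `OXEND`/`LOKINET` overrides exist only so the function can be exercised off-node:

```shell
#!/bin/sh
# Print the installed versions of the two main binaries.
# Assumption: both accept a --version flag on this build.
print_versions() {
  "${OXEND:-oxend}" --version
  "${LOKINET:-lokinet}" --version
}
```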
Oxen 'Ottersec' (v10.4.2) Built from commit a3f2b8c on 2026-03-14
lokinet-0.9.13
Step 2 — let oxend sync
Before any service-node operation makes sense, oxend has to sync the Oxen blockchain. On a fresh box this takes several hours (the chain is small relative to Bitcoin or Monero, but still 6-15 GB depending on prune mode). Start the daemon and tail the log:
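As commands (a sketch assuming the package ships a systemd unit named `oxend`; check with `systemctl list-unit-files | grep oxen` if unsure — the `SYSTEMCTL`/`JOURNALCTL` overrides exist only for off-node testing):

```shell
#!/bin/sh
# Start oxend at boot + now, then follow its journal.
# Assumption: systemd unit name is `oxend` as shipped by the package.
start_and_tail_oxend() {
  "${SYSTEMCTL:-systemctl}" enable --now oxend || return 1
  "${JOURNALCTL:-journalctl}" -u oxend -f
}
```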
oxend[3127]: 2026-04-26 14:12:33.451 I Sync data returned a new top block candidate
oxend[3127]: 2026-04-26 14:12:33.451 I Synced 1234567/1234890 (99.9%)
oxend[3127]: 2026-04-26 14:14:01.221 I SYNCHRONIZED OK
The “SYNCHRONIZED OK” line is the milestone. While syncing, oxend is using a non-trivial amount of disk I/O; on a constrained VPS, expect the box to feel sluggish for the first sync.
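The sync progress can also be watched programmatically. A minimal sketch that parses the `Synced X/Y` log format shown above (on the node, pipe `journalctl -u oxend -f` through it):

```shell
#!/bin/sh
# sync_pct: extract the sync percentage from an oxend "Synced X/Y" log line.
sync_pct() {
  awk 'match($0, /Synced [0-9]+\/[0-9]+/) {
    # "Synced " is 7 chars; the remainder of the match is "X/Y".
    split(substr($0, RSTART + 7, RLENGTH - 7), a, "/")
    printf "%.2f\n", (a[1] / a[2]) * 100
  }'
}

# Against the log line shown above:
echo "oxend[3127]: 2026-04-26 14:12:33.451 I Synced 1234567/1234890 (99.9%)" | sync_pct
# -> 99.97
```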
Step 3 — generate the service-node key
Once oxend is synced, generate the service-node identity. This is a deterministic operation: the resulting public key is what the operator will see on the Oxen block-explorer for the rest of the node’s lifetime. Don’t generate it twice — if you do, you have two distinct nodes from the network’s perspective, even if one is decommissioned.
Service node public key: a72fbe0ce19c5ac2e0a4e6c0e3b1b7d2f5a91c3e8d1b6f4a9c2d5e8b3a7f1c4e
Service node ed25519 public key: 4f3a8b7c2e1d5f9a6b8c0d2e4f6a8b1c3d5e7f9a2b4c6d8e0f1a3b5c7d9e1f2a
Service node x25519 public key: 1e5a9c2d6f8b3e7a1c4d8e2f5b9a6c3d7e1f4a8b2c5d9e3f7a1b4c8d2e5f9a3b
Record all three keys. The first (the SN public key) is what the staking transaction targets; the others are used by the storage server and lokinet for transport-layer identity.
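The triplet can be re-displayed at any time from the running daemon. A hedged sketch: `print_sn_key` is the daemon command upstream documents for this, but verify against `oxend help` on your build; the `OXEND` override exists only so the function can be exercised off-node:

```shell
#!/bin/sh
# Print the service-node key triplet from the running daemon.
# Assumption: the packaged oxend exposes the `print_sn_key` command.
print_sn_keys() {
  "${OXEND:-oxend}" print_sn_key
}

# On the service node: print_sn_keys
```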
Step 4 — the staking transaction (operator’s wallet, not ours)
The staking transaction is a regular Oxen transaction with a service-node-registration extra field. The operator’s wallet (we recommend the CLI wallet oxen-wallet-cli, not the GUI, for staking) constructs the transaction; the SN’s oxend validates the registration when the transaction confirms.
From the operator’s wallet host (NOT the service node’s box — the wallet stays on the operator’s local machine):
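Launch the CLI wallet against your own oxend. A sketch: the flag names follow the Monero wallet lineage and 22023 is the conventional oxend RPC port; confirm both against `oxen-wallet-cli --help` on your build. The `WALLET_CLI` override exists only for off-box testing:

```shell
#!/bin/sh
# Open the staking wallet, pointed at the operator's own trusted oxend.
# Assumptions: Monero-lineage flag names; oxend RPC listening on 22023.
open_staking_wallet() {
  wallet_file=$1
  daemon=${2:-127.0.0.1:22023}
  "${WALLET_CLI:-oxen-wallet-cli}" --wallet-file "$wallet_file" --daemon-address "$daemon"
}

# On the wallet host: open_staking_wallet ~/wallets/staking
```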
Inside the wallet REPL:
[wallet]: register_service_node 15000 \
a72fbe0ce19c5ac2e0a4e6c0e3b1b7d2f5a91c3e8d1b6f4a9c2d5e8b3a7f1c4e \
auto
Confirm registration with the following details:
Stake: 15000 OXEN
Service node pubkey: a72fbe0ce19c5ac2e0a4e6c0e3b1b7d2f5a91c3e8d1b6f4a9c2d5e8b3a7f1c4e
Operator cut: 100%
Confirm? (Y/n): Y
Transaction sent. tx hash: 3f8a2b9c1e5d7a4f8b3c6e9a2d5f1b8c4e7a3d6f9b2c5e8a1d4f7b3c6e9a2d5f
After ~10 confirmations, the SN’s oxend will report the node as registered:
Service node registered:
    Public key: a72fbe0ce19c5ac2e0a4e6c0e3b1b7d2f5a91c3e8d1b6f4a9c2d5e8b3a7f1c4e
    Registration height: 1234890
    Last reward block: -
    Decommission count: 0
    Uptime proof: every 60s, last 12s ago
The node now starts earning per-block rewards (paid out when the node reaches its turn in the reward rotation queue). Reward intervals are typically every few hours for a fresh node, evening out over time.
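Registration state can be polled from the daemon for scripting. A hedged sketch: `print_sn_status` is the upstream-documented status command, but verify with `oxend help`, and note the grep pattern matches the output shape shown above, which may differ across versions:

```shell
#!/bin/sh
# Exit non-zero unless the local daemon reports an active registration.
# Assumption: oxend exposes `print_sn_status` with the output shape above.
sn_registered() {
  "${OXEND:-oxend}" print_sn_status | grep -q "Registration height" \
    || { echo "service node not registered" >&2; return 1; }
}
```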
Step 5 — Lokinet exit-mode configuration
The default Lokinet routing daemon runs in router-only mode — it carries other peers’ tunnels but doesn’t act as an exit. Promoting to exit mode is a single config flag plus an exit-policy file.
Edit /var/lib/lokinet/lokinet.ini:
# /var/lib/lokinet/lokinet.ini — Lokinet, exit-enabled.
[router]
# Standard router fields auto-populated by the install. Don't touch these.
nickname = xmrhostLokinet01
public-port = 1090
public-ip = 198.51.100.42
[network]
# enable exit mode: this router will accept exit traffic from other peers.
exit = true
# Exit auth: optional shared-secret to gate access to the exit. Leave blank
# for an open exit. Most operators leave it open and let the abuse-handling
# discipline do the gating instead.
exit-auth =
# Address range for the exit's virtual interface; exit-client traffic is
# NATed from this private range out of the public IP.
ifaddr = 10.105.0.1/16
[bootstrap]
add-node = /var/lib/lokinet/bootstrap.signed
[api]
# RPC endpoint, bound to localhost. Reach via SSH tunnel for `lokinet-vpn` etc.
enabled = true
bind = 127.0.0.1:1190
[lokid]
# Lokinet talks to oxend via a unix socket — auto-configured by the package.
rpc = ipc:///var/lib/oxen/oxend.sock
The exit policy is in a separate file, /var/lib/lokinet/exit-policy.toml. The recommended default is the same reduced-exit policy shape used by Tor — accept HTTPS, IMAPS, XMPP, and the well-behaved long-tail; reject SMTP and the spam-relay-bait ports:
# /var/lib/lokinet/exit-policy.toml
# Reduced exit policy mirroring the Tor Project's recommended set.
# Editing this is generally a mistake — start from this set, narrow only.
[default]
action = "deny"
[[allow]]
ports = [80, 443, 53, 110, 143, 194, 220, 993, 995, 5222, 5223, 5269, 6660, 6667, 6697, 8332, 8333, 9418, 11371]
description = "HTTP/HTTPS, DNS, mail retrieval, IRC, XMPP, Bitcoin, git, OpenPGP HKP"
[[deny]]
ports = [25, 465, 587]
description = "All SMTP-related — high abuse-complaint volume"
[[deny]]
ports = [135, 137, 138, 139, 445, 1433, 1434, 3306, 3389, 5432]
description = "Windows / SQL / RDP — almost always scanning, rarely legitimate"
Restart Lokinet and verify:
lokinet[4881]: [INFO] Lokinet 0.9.13 starting
lokinet[4881]: [INFO] Loaded bootstrap router: 5 peers
lokinet[4881]: [INFO] Exit mode: enabled
lokinet[4881]: [INFO] Exit policy loaded: 19 allow ports, 13 deny ports
lokinet[4881]: [INFO] Published exit endpoint: xmrhost-exit.loki
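As commands, the restart-and-verify step looks like the following sketch, assuming the systemd unit is named `lokinet` and the log strings match the output shown above; the `SYSTEMCTL`/`JOURNALCTL` overrides exist only for off-box testing:

```shell
#!/bin/sh
# Restart lokinet and confirm exit mode actually came up in the journal.
# Assumptions: systemd unit `lokinet`; log lines as in the sample output.
restart_and_verify_exit() {
  "${SYSTEMCTL:-systemctl}" restart lokinet || return 1
  sleep "${STARTUP_WAIT:-5}"   # give the daemon time to publish
  "${JOURNALCTL:-journalctl}" -u lokinet -n 50 \
    | grep -E "Exit mode: enabled|Published exit endpoint" \
    || { echo "exit mode did not come up" >&2; return 1; }
}
```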
Step 6 — uptime proofs and the deregistration timer
Service nodes submit uptime proofs every 60 seconds. The proof is a small message that says “I am alive, my SN keys still match my registration, and my Lokinet/storage-server endpoints are reachable”. Quorum nodes (a rotating subset of other SNs) verify the proofs; if too many proofs are missed in a row, the node enters decommissioned state, and after a further interval it is deregistered.
The exact thresholds shift with consensus updates, but the rule of thumb is: a node missing more than ~2 hours of uptime proofs in a single 24-hour window is at risk of decommission. The implication is that maintenance windows must be kept short, and any operation that takes the node offline (kernel patching, hardware migration, network reconfigurations) needs to be planned in under-30-minute windows.
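The budget arithmetic is simple enough to script into the maintenance runbook. A sketch; the 30-minute cap and the 50% margin against the ~120-minute risk line are this note's discipline, not consensus constants:

```shell
#!/bin/sh
# Rough planning check for a maintenance window, in minutes, against the
# ~120-minute/24h decommission risk line. Caps any single window at 30
# minutes and keeps total accrued downtime under half the risk budget.
window_is_safe() {
  planned=$1
  spent=${2:-0}   # downtime already accrued in the current 24h window
  [ "$planned" -le 30 ] && [ $((planned + spent)) -le 60 ]
}

# e.g. window_is_safe 25      -> safe
#      window_is_safe 45      -> unsafe (single window too long)
#      window_is_safe 25 40   -> unsafe (24h budget margin exceeded)
```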
Monitor the uptime-proof submission rate as a Prometheus counter:
# Alert if the per-minute uptime-proof submission rate drops below 0.9
# over a 10-minute window. rate() is per-second, so scale by 60; this
# catches partial RPC failures before they cascade into a decommission.
rate(oxen_service_node_uptime_proofs_submitted_total[10m]) * 60 < 0.9
Step 7 — the abuse-mailbox shape (same as Tor exits, restated)
Exit traffic from a Lokinet exit looks identical to Tor exit traffic from the perspective of upstream abuse-detection systems. The same mailbox discipline applies:
- The exit IP’s PTR resolves to lokinet-exit-NN.<your-brand-domain>.
- Port 80 on the exit IP serves a notice page that explains the IP belongs to a Lokinet exit, links to the Lokinet project, and gives the exit policy in plain text.
- Abuse complaints route via the operator’s /contact form (topic=abuse) and are read within 24 hours; the form acknowledges receipt immediately with the notice-page content.
- The notice page includes the exit policy verbatim — most automated abuse pipelines will close the ticket once they see “port 25 is in the deny list”.
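The PTR convention can be checked mechanically, e.g. against the output of `dig -x <exit-ip> +short`. A small sketch; the `lokinet-exit-NN` naming is this note's convention, not a protocol requirement:

```shell
#!/bin/sh
# True (exit 0) iff a PTR string matches the lokinet-exit-NN.<domain>. shape.
ptr_matches_convention() {
  printf '%s\n' "$1" | grep -Eq '^lokinet-exit-[0-9]+\.([A-Za-z0-9-]+\.)+$'
}

# e.g.: ptr_matches_convention "$(dig -x 198.51.100.42 +short)"
```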
Step 8 — what to monitor
The minimum-viable monitoring set for a Lokinet exit-enabled service node:
- oxend daemon up.
- lokinet daemon up.
- oxen-storage-server daemon up.
- Uptime-proof submission rate ≥ 0.9/minute over a rolling 10-minute window.
- Decommission count = 0.
- Block-reward count incrementing on the expected interval (catches RPC-level issues).
- Network: outbound bandwidth not pinned to the AUP cap (catches exit-saturating clients).
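The three daemon-up checks reduce to one loop. A sketch assuming the systemd unit names from the package install; `SYSTEMCTL` is overridable only so the function can be tested off-node:

```shell
#!/bin/sh
# Exit non-zero, naming the culprit, if any of the three daemons is down.
# Assumption: systemd unit names as shipped by oxen-service-node.
check_sn_daemons() {
  failed=0
  for svc in oxend lokinet oxen-storage-server; do
    state=$("${SYSTEMCTL:-systemctl}" is-active "$svc" 2>/dev/null)
    if [ "$state" != "active" ]; then
      echo "DOWN: $svc ($state)" >&2
      failed=1
    fi
  done
  return $failed
}
```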
The brand’s /docs/oxen-service-node-monitoring entry covers the Prometheus exporter shape in detail; what matters here is that the first four items above (the three daemon-liveness checks plus the proof-submission rate) need pager-grade alerts, not email-only alerts. A decommission you find out about the morning after has already cost a fraction of the stake’s value in missed rewards; a deregistration you find out about a week later has cost the entire 30-day timelock.
Closing — the staking discipline is the work
The Lokinet operational picture is straightforward to describe and structurally demanding to maintain. The bandwidth, the configuration, and the abuse-handling are all standard offshore-VPS work; the differentiator is the staking discipline — the 24/7 attention requirement that comes from having capital locked in the node. The operators who succeed at this run service nodes the way other people run Bitcoin mining rigs: an explicit on-call rotation, an explicit maintenance-window calendar, and an explicit budget for the hardware-uptime SLA. [Oxen network metrics + service-node lifecycle docs]
If that sounds like more than you want to commit to, the routing-only Lokinet plan is a strictly easier deployment with the same hardware shape and no staking exposure. The brand’s /node/lokinet-exit page covers both variants.
// END OF NOTE
$ cd /notes # back to the listing