$ pwd
I2P Node 1 — I2P routing-node hosting (Iceland, Romania, Monero)
// NAME
i2p-1 — Non-exit I2P router or floodfill node with monitoring and bandwidth ledger.
// SYNOPSIS
xmrhost-cli provision --plan=i2p-1 --region=<is|ro>
// SPEC
$ xmrhost-cli spec --plan=i2p-1
// NOTES
- Floodfill-eligible (≥ 128 KBps shared bandwidth)
- Reseeder participation opt-in
- No exit-tunnel role — non-exit only at this tier
// REGIONS
$ xmrhost-cli regions --plan=i2p-1
// ORDER
Order I2P Node 1
// no-kyc crypto billing (xmr recommended; btc / ln / ltc / eth / usdt accepted) — why-monero covers the rationale, payments the flow.
// DESCRIPTION
I2P routing-node hosting
i2pd preinstalled, tuned for floodfill eligibility (≥ 128 KBps shared bandwidth) with a bandwidth-ledger dashboard. Non-exit only at this tier — the network needs honest floodfill operators more than it needs another opt-in exit.
// pre-configured defaults
- i2pd built from upstream, statically linked, run as unprivileged i2pd user
- Floodfill-eligible by default; the role itself is opt-in (toggleable via /etc/i2pd/i2pd.conf)
- Reseeder participation opt-in (default: off — opt in via the console)
- Bandwidth ledger persisted to disk; rolling 30-day burndown chart
- Java I2P legacy template available as an alternative OS
- No exit-tunnel role — explicit by config, not by accident
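A minimal sketch of what the relevant /etc/i2pd/i2pd.conf stanza could look like, using the stock i2pd option names (floodfill, bandwidth, share, notransit) — the values shown are illustrative, not the shipped defaults:

```conf
# /etc/i2pd/i2pd.conf — illustrative fragment, not the file as shipped
floodfill = true      # flip to true to opt in to the floodfill role
bandwidth = 2048      # shared bandwidth cap in KBps; >= 128 keeps floodfill eligibility
share = 100           # percentage of the cap offered to transit traffic
notransit = false     # transit stays on — a routing node that carries nothing is useless
```

After editing, restart the i2pd service and confirm the router console reports the floodfill flag.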
I2P's threat model is closer to a Tor middle relay than a Tor exit. Running a stable, well-bandwidthed floodfill is one of the most useful single contributions you can make to the network — and the Romania / Iceland jurisdictions are friendlier to that role than most of the EU heartland.
// PROVISIONING
after you click order
$ xmrhost-cli provision --plan=i2p-1 --region=is
[ok] reserving capacity in region=is
[ok] node allocated: i2p-1-is-30
[ok] applying hardened-by-default profile (sshd, fail2ban, unattended-upgrades)
[ok] starting i2pd, registering as floodfill candidate
[ok] handoff key sealed → view via the console at /console
provisioned in 47s. ssh access via onion-auth or wireguard, your choice.
// you receive the onion-auth key + initial sshd config in the same handoff. no email-shipped credentials. nothing is logged to the operator side.
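One way to wire the onion-auth option into a client, assuming a local Tor SOCKS proxy on 127.0.0.1:9050 and OpenBSD netcat for the proxy hop — the host alias, onion hostname, and key path are placeholders, not values the provisioner emits:

```conf
# ~/.ssh/config — hypothetical entry for reaching the node over its onion address
Host i2p-1-is-30
    HostName replace-with-your-handoff.onion       # taken from the sealed handoff at /console
    User root                                      # key-only; PermitRootLogin prohibit-password
    IdentityFile ~/.ssh/i2p-1-is-30_ed25519
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p   # OpenBSD netcat; socat works too
```

The wireguard path skips the ProxyCommand entirely and points HostName at the tunnel-internal address instead.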
// HARDENING BASELINE — WHAT SHIPS BY DEFAULT
$ cat /etc/xmrhost/baseline.d/*
Every I2P Node 1 ships with the xmrhost hardening baseline applied on the first boot — no opt-in flag, no add-on, no separate purchase. The baseline is the same across the catalog (vps / dedicated / gpu / tor / i2p / lokinet); category-specific extras are listed below the common section. Detailed per-control runbooks live in /docs; the cross-cutting overview is at /hardening.
- KERNEL. KSPP-baseline sysctls applied (kernel.kptr_restrict=2, kernel.yama.ptrace_scope=1, kernel.unprivileged_bpf_disabled=1, vm.unprivileged_userfaultfd=0, net.ipv4.tcp_syncookies=1, +12 more), unprivileged user-namespace creation gated, kexec disabled at runtime. Full list and rationale: /docs/kernel-hardening-checklist.
- SSHD. PasswordAuthentication no, ChallengeResponseAuthentication no, KbdInteractiveAuthentication no, PermitRootLogin prohibit-password, MaxAuthTries 3, Ed25519-only host keys (RSA host keys removed), legacy KEX / cipher / MAC families disabled. fail2ban preconfigured with the sshd-default ruleset. Runbook: /docs/harden-sshd; key migration: /docs/ssh-key-migration.
- AUDIT. auditd enabled with the laurel-compatible default ruleset (auth, identity, network-config, time-change, mount, perm-mod). unattended-upgrades on for main/security only — feature releases stay operator-controlled. systemd-journald persistent storage with SystemMaxUse=512M.
- NETWORK. Egress-default-permit (the box reaches the internet), ingress-default-deny (only sshd + the customer's declared services). Outbound port 25 (SMTP) closed by default; customers operating a real MTA request the lift via /contact with reverse DNS pointing to a domain they control. Dual-stack IPv4 + IPv6 (/64 routed). RIPE-allocated PI space in Iceland and Romania.
- MONITORING. node_exporter (Prometheus textfile exporter) listening on 127.0.0.1:9100 — the operator's monitoring scrapes via wireguard from the management VLAN, never from the public internet. Customers wanting their own metrics tap add a second exporter on a private interface.
- I2P PRECONFIG. i2pd ≥ 2.50 from the upstream apt repo, configured as a router by default with floodfill opt-in via /etc/i2pd/i2pd.conf, monitoring port bound to loopback for the eepsite controller. Runbook: /docs/setup-i2p-floodfill.
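A quick way to spot-check the KERNEL bullet above — a sketch, not an official tool. It reads from a static sample snapshot so it runs anywhere; on a live box, swap the snapshot lookup for `sysctl -n "$key"` (the "+12 more" values are in /docs/kernel-hardening-checklist):

```shell
#!/bin/sh
# Spot-check a few of the baseline sysctls against their expected values.
snapshot="kernel.kptr_restrict=2
kernel.yama.ptrace_scope=1
net.ipv4.tcp_syncookies=1"
fail=0
for kv in kernel.kptr_restrict=2 kernel.yama.ptrace_scope=1 net.ipv4.tcp_syncookies=1; do
  key=${kv%=*}; want=${kv#*=}
  # live box: got=$(sysctl -n "$key")
  got=$(printf '%s\n' "$snapshot" | sed -n "s/^$key=//p")
  [ "$got" = "$want" ] || { echo "MISMATCH $key: got '$got', want $want"; fail=1; }
done
[ "$fail" -eq 0 ] && echo "baseline sysctls OK"
```

Any MISMATCH line after first boot is worth a support ticket before you put traffic on the node.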
// the baseline is editorial-stable — when the operator changes a default, the change is logged in /notes with the rationale and the migration notes for boxes already in service. /hardening is the canonical pillar; /docs is the procedural manual.
// RECOMMENDED PLAYBOOKS
$ grep -l 'i2p-1' /usr/share/doc/xmrhost/playbook/
- /playbook/tor-relay — operate a tor middle/exit relay or obfs4 bridge with bgp-stable uplinks and a sane abuse posture
- /playbook/vpn — self-hosted wireguard / openvpn endpoint — your trust boundary is the vps, not a third-party provider
- /playbook/forum — discourse / lemmy / phpbb at offshore latency — narrow-takedown jurisprudence and no first-strike termination
// FAQ
$ faq -p i2p-1
Q.What's the difference between a router, a floodfill, and an eepsite?
A.A router is a normal I2P participant that proxies its own and (optionally) other users' traffic. A floodfill is a well-connected router that participates in the DHT for the network's address book — only well-resourced routers should be floodfills (high uptime, stable bandwidth). An eepsite is a hidden-service equivalent — a TCP service tunneled through I2P. The i2p-1 tier ships configured as a router by default; floodfill is opt-in via /etc/i2pd/i2pd.conf. Walkthrough: /docs/setup-i2p-floodfill and /notes/i2p-floodfill-vs-router.
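For the eepsite side of that answer, a minimal server-tunnel sketch in i2pd's tunnels.conf format — the section name, backing port, and key filename are illustrative:

```conf
# /etc/i2pd/tunnels.conf — hypothetical eepsite entry
[myeepsite]
type = http
host = 127.0.0.1      # the local web server the tunnel fronts
port = 8080           # its listening port; nothing here is reachable from clearnet
keys = myeepsite.dat  # destination keypair, generated by i2pd on first start
```

The resulting .b32.i2p address is derived from the keypair, so back up the .dat file if the address matters to you.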
Q.Should I run a floodfill?
A.Only if the box has high uptime (95%+ monthly), 200+ Mbps reserved bandwidth, and sysadmin attention available. Floodfills carry the network's DHT — running an unreliable floodfill degrades the network's addressbook reliability. /notes/i2p-floodfill-vs-router covers the trade-off honestly; the operator's recommendation for a first I2P deployment is a regular router, then promote to floodfill once the box has demonstrated 30 days of stable uptime.
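The arithmetic behind the 95% bar in that answer, as a one-liner — it just converts the uptime percentage into tolerated downtime hours over a 30-day month:

```shell
# 30 days * 24 h = 720 h; 5% of that is the allowed downtime budget.
awk 'BEGIN { hours = 30*24; printf "95%% uptime over 30 days = up to %.1f h of downtime\n", hours*0.05 }'
```

36 hours sounds generous until you remember a floodfill that flaps daily degrades the DHT even while technically inside the budget — hence the 30-days-stable-first recommendation.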
Q.Do you accept I2P abuse reports?
A.Per the AUP (/legal/aup) — same routing as for any other workload. The operator does not have access to the contents of routed I2P traffic (the I2P transport is end-to-end encrypted and the local node is participating, not de-anonymising). Complaints are handled at the AUP layer; CSAM is the explicit exclusion.
Q.Is there a Tor + I2P bridge / cross-network relay?
A.Not preconfigured — the i2p-* tier is I2P-only. Customers wanting to bridge networks (Tor + I2P) typically run two separate boxes with explicit boundary controls. The /vs/i2p-vs-tor-vs-lokinet comparison covers when each makes sense; the i2pd documentation (https://i2pd.readthedocs.io) covers cross-network configuration for advanced operators.
Q.Where is the I2P node hosted?
A.Iceland (Reykjavik, RIPE) or Romania (Bucharest, RIPE). Both are inside the I2P network's geographic distribution (the network is dominated by EU + US routers, so an additional well-connected EU node is welcome rather than redundant). Jurisdictional posture: /location/is and /location/ro.
Q.Do I need to pay in Monero?
A.No — same payment rails as the rest of the catalog (XMR recommended, BTC / Lightning / LTC / ETH / USDT accepted via OxaPay). For an operator running an I2P node, paying in XMR is the consistent threat-model choice; the chain-analytics rationale at /why-monero applies.
// ORDER
$ xmrhost-cli order --plan=i2p-1
// no-kyc crypto billing (xmr recommended; btc / ln / ltc / eth / usdt accepted) — why-monero covers the rationale, payments the flow.
// BEFORE YOU ORDER — RELEVANT GUIDES
$ ls /guide
- /guide/buy-vps-with-monero — step-by-step XMR checkout walkthrough.
- /guide/buy-vps-with-bitcoin — BTC / Lightning flow + chain-analytics caveat.
- /guide/how-to-host-a-website-anonymously — three-tier threat-model guide.
- /guide/best-offshore-vps-2026 — evaluation methodology + plan-to-use-case mapping.