Server Configuration
This section describes the options available for configuring and running the moor-daemon server binary.
For a deeper discussion of mooR's threading model, database concurrency model, performance counters, and tuning guidance, see Performance and Concurrency.
Daemon, Hosts, Workers, and RPC
The moor-daemon server binary provides the main server functionality, including hosting the database, handling verb
executions, and scheduling tasks. However, it does not handle network connections directly. Instead, special helper
processes called hosts manage incoming network connections and forward them to the daemon. Likewise, outbound network
connections (and future facilities like file access) are handled by workers that communicate with the daemon to perform
those activities.
To run the server, you therefore need to run not just the moor-daemon binary, but also one or more "hosts" (and,
optionally, "workers") that will connect to the daemon.
These processes communicate over ZeroMQ sockets, with the daemon listening for RPC requests and events, and the hosts and workers connecting to those sockets to send requests and receive responses.
Hosts and workers can be run on the same machine as the daemon (the default) or distributed across multiple machines for clustered deployments. They are stateless and can be restarted independently of the daemon, allowing for flexible deployment and scaling.
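As a sketch, a minimal single-machine setup runs the daemon and one host side by side. The host binary name `moor-telnet-host` below is an assumption for illustration; substitute whichever host binaries your build provides:

```shell
# Start the daemon against a data directory
moor-daemon ./moor-data &

# Start a telnet host, which connects to the daemon's default IPC sockets
moor-telnet-host &
```

Because both processes default to the same IPC endpoints, no addresses need to be passed for a single-machine setup.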
Transport Modes
For single-machine deployments (the default), components communicate via IPC (Unix domain sockets) which use filesystem permissions for security and require no additional configuration.
For clustered/multi-machine deployments, components communicate via TCP with CURVE encryption. See the Clustered Deployment guide for complete details on distributed deployments, security considerations, and setup instructions.
Authentication Keys
PASETO Keys (Ed25519) - Client/Player Authentication
PASETO tokens authenticate clients/players (connecting users) using Ed25519 digital signatures. These are used *only by the daemon* to sign and verify player session tokens.
The daemon generates these keys automatically on first run when started with the `--generate-keypair` flag:

```shell
# Keys are auto-generated on first run
moor-daemon --generate-keypair <other-args>
```
This creates moor-signing-key.pem (private key) and moor-verifying-key.pem (public key) in the moor config
directory (${XDG_CONFIG_HOME:-$HOME/.config}/moor).
Alternatively, you can pre-generate them using openssl:
```shell
openssl genpkey -algorithm ed25519 -out moor-signing-key.pem
openssl pkey -in moor-signing-key.pem -pubout -out moor-verifying-key.pem
```
Note: Hosts and workers do not need these PEM files - they are only used by the daemon for client authentication.
How to Set Server Options
In general, every option can be set either by command line argument or in the configuration file. If an option is set in both places, the command line argument takes precedence over the configuration file.
Configuration File Format
The configuration file uses YAML format. You can specify the path to your configuration file using the --config-file
command-line argument. Configuration file values can be overridden by command-line arguments.
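For example, a run that takes most settings from a YAML file but forces debug logging on the command line might look like this (the file name `moor.yaml` is illustrative):

```shell
# --debug on the command line wins over any debug setting in moor.yaml
moor-daemon --config-file moor.yaml --debug ./moor-data
```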
General Server Options
These options control the basic server behavior:
- `--config-file <PATH>`: Path to configuration (YAML) file to use. If not specified, defaults are used.
- `--connections-file <PATH>` (default: `connections.db`): Path to connections database
- `--tasks-db <PATH>` (default: `tasks.db`): Path to persistent tasks database
- `--public-key <PATH>` (default: `${XDG_CONFIG_HOME:-$HOME/.config}/moor/moor-verifying-key.pem`): PEM encoded PASETO public key for token verification
- `--private-key <PATH>` (default: `${XDG_CONFIG_HOME:-$HOME/.config}/moor/moor-signing-key.pem`): PEM encoded PASETO private key for token signing
- `--num-io-threads <NUM>` (default: `8`): Number of ZeroMQ IO threads
- `--debug` (default: `false`): Enable debug logging
Transport Endpoint Configuration
These options configure how the daemon communicates with hosts and workers. The defaults use IPC (Unix domain sockets) for single-machine deployments. Change these to TCP addresses (e.g., `tcp://0.0.0.0:7899`) only for clustered deployments; see Clustered Deployment for details.
| Option | Default | Description |
|---|---|---|
| `--rpc-listen` | `ipc:///tmp/moor_rpc.sock` | RPC server address |
| `--events-listen` | `ipc:///tmp/moor_events.sock` | Events publisher address |
| `--workers-request-listen` | `ipc:///tmp/moor_workers_request.sock` | Workers request pub-sub address |
| `--workers-response-listen` | `ipc:///tmp/moor_workers_response.sock` | Workers response RPC address |
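For a clustered deployment, the same options can be pointed at TCP endpoints instead. The port numbers below are illustrative choices, not fixed defaults:

```shell
moor-daemon ./moor-data \
  --rpc-listen tcp://0.0.0.0:7899 \
  --events-listen tcp://0.0.0.0:7898 \
  --workers-request-listen tcp://0.0.0.0:7897 \
  --workers-response-listen tcp://0.0.0.0:7896
```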
Enrollment Configuration (Clustered Deployments Only)
These options are only needed for clustered deployments with TCP transport. See Clustered Deployment for complete setup instructions.
| Option | Default | Description |
|---|---|---|
| `--enrollment-listen` | `tcp://0.0.0.0:7900` | Enrollment endpoint for host/worker registration |
| `--enrollment-token-file` | `${XDG_CONFIG_HOME:-$HOME/.config}/moor/enrollment-token` | Path to enrollment token file |
Database Configuration
- `<PATH>` (positional argument): Path to the database directory
- `--db <NAME>` (default: `world.db`): Name of the main database within the directory
- `--connections-file <PATH>` (default: `connections.db`): Path to connections database (relative to data directory if not absolute)
- `--tasks-db <PATH>` (default: `tasks.db`): Path to persistent tasks database (relative to data directory if not absolute)
- `--events-db <PATH>` (default: `events.db`): Path to persistent events database (relative to data directory if not absolute)
The first positional argument specifies the database directory (typically moor-data or similar). The daemon stores several databases within this directory by default:
- `world.db/` (or name specified by `--db`) - The main MOO database
- `connections.db` - Connection state database
- `tasks.db` - Persistent tasks database
- `events.db` - Event logging database (if event logging is enabled)
All database paths can be customized and are relative to the data directory unless specified as absolute paths.
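Putting that together, a typical invocation and the on-disk layout it produces might look like this; the directory name `moor-data` and the exact layout are shown as an illustration:

```shell
moor-daemon ./moor-data

# ./moor-data/
# ├── world.db/        # main MOO database (--db)
# ├── connections.db   # connection state (--connections-file)
# ├── tasks.db         # persistent tasks (--tasks-db)
# └── events.db        # event log (--events-db)
```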
Language Features Configuration
These options enable or disable various MOO language features:
| Feature | Command Line | Default | Description |
|---|---|---|---|
| Rich notify | --rich-notify | true | Allow notify() to send arbitrary MOO values to players |
| Lexical scopes | --lexical-scopes | true | Enable block-level lexical scoping with begin/end syntax and let/global keywords |
| Type dispatch | --type-dispatch | true | Enable primitive-type verb dispatching (e.g., "test":reverse()) |
| Flyweight type | --flyweight-type | true | Enable flyweight types (lightweight object delegates) |
| Boolean type | --bool-type | true | Enable boolean true/false literals |
| Boolean returns | --use-boolean-returns | false | Make builtins return boolean types instead of integers 0/1 |
| Symbol type | --symbol-type | true | Enable symbol literals |
| Custom errors | --custom-errors | false | Enable error symbols beyond standard builtin set |
| Symbols in builtins | --use-symbols-in-builtins | false | Use symbols instead of strings in builtins |
| List comprehensions | --list-comprehensions | true | Enable list/range comprehensions |
| Persistent tasks | --persistent-tasks | true | Enable persistent tasks between server restarts |
| Event logging | --enable-eventlog | true | Enable persistent event logging and history features |
| Anonymous objects | --anonymous-objects | false | Enable anonymous objects with automatic garbage collection |
| UUID objects | --use-uuobjids | false | Enable UUID object identifiers like #048D05-1234567890 |
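These flags map to snake_case keys under `features_config` in the YAML file. For example, to turn off primitive-type verb dispatching while enabling custom error symbols:

```yaml
features_config:
  type_dispatch: false
  custom_errors: true
```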
Import/Export Configuration
These options control database import and checkpoint export functionality:
- `--import <PATH>`: Path to a textdump or objdef directory to import
- `--export <PATH>`: Path to export checkpoints into (always uses objdef format)
- `--import-format <FORMAT>` (default: `Textdump`): Format to import from (Textdump or Objdef)
- `--checkpoint-interval-seconds <SECONDS>`: Interval between database checkpoints
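For example, importing a LambdaMOO textdump into a fresh data directory while writing periodic objdef checkpoints might look like this (the paths and interval are illustrative):

```shell
moor-daemon ./moor-data \
  --import ./lambda.db --import-format Textdump \
  --export ./checkpoints \
  --checkpoint-interval-seconds 360
```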
Runtime Timing Configuration
These options control latency duration sampling for internal performance counters. Invocation counts remain exact; these settings only affect duration collection.
These counters are used to observe hot runtime paths such as scheduler wakeups, lock waits, database commit stages, builtin execution, and other VM execution activity. In the normal server configuration, the system does not record a full timestamp pair for every hot-path event. Instead, it samples durations and scales the totals back up. This keeps the counters cheap enough to leave enabled in regular use.
In practice, these settings are mostly useful in three situations:
- You are benchmarking and want exact latency measurements rather than sampled estimates.
- You are chasing a performance regression and want denser timing data from hot paths.
- You want to reduce timing overhead further and are willing to trade away duration fidelity.
| Setting | Command Line | Default | Description |
|---|---|---|---|
| Perf timing enabled | --perf-timing-enabled <BOOL> | true | Enable or disable latency duration collection globally |
| Hot-path sample shift | --perf-timing-hot-path-shift <NUM> | 6 | Sampling shift for hot paths. 0 means exact, 6 means 1/64 |
| Medium-path sample shift | --perf-timing-medium-path-shift <NUM> | 3 | Sampling shift for medium paths. 0 means exact, 3 means 1/8 |
In YAML, set these under `runtime:`:
```yaml
runtime:
  gc_interval: "30s"
  scheduler_tick_duration: "10ms"
  perf_timing_enabled: true
  perf_timing_hot_path_shift: 6
  perf_timing_medium_path_shift: 3
```
For exact timing during benchmarking or profiling runs:
```yaml
runtime:
  perf_timing_enabled: true
  perf_timing_hot_path_shift: 0
  perf_timing_medium_path_shift: 0
```
To disable duration timing entirely while keeping invocation counters:
```yaml
runtime:
  perf_timing_enabled: false
```
Guidance:
- Leave the defaults alone for normal deployments. They are intended to keep timing overhead low while still producing useful long-run aggregates.
- Use `0` for both sample shifts during focused benchmarking or profiling runs where exact timing is more important than hot-path overhead.
- Set `perf_timing_enabled: false` if you only care about invocation counts and do not want duration timing at all.
- If you are only interested in slower, less frequent operations, you can leave hot-path sampling alone and reduce only the medium-path shift.
Task Pool Affinity Configuration
Task worker affinity is configured under the `runtime:` section and can also be overridden on the command line.
The daemon has two broad thread classes:
- service or control-plane threads, such as the scheduler, RPC/event handling, and coordination work
- task worker threads, which execute verbs and other task bodies in the task pool
This is an important architectural difference from LambdaMOO-style servers. In LambdaMOO, task execution is effectively serialized through one main execution path. In mooR, runnable tasks are dispatched onto a worker pool so independent task execution can proceed concurrently across multiple cores. The scheduler remains responsible for orchestration, wakeups, and queue management, while the task pool provides the actual parallel execution capacity.
That means thread placement matters more here than in a single-threaded MOO. A poor affinity choice can leave the scheduler contending with task execution on the same high-performance cores, while an appropriate split can preserve both throughput and responsiveness.
On systems with heterogeneous CPUs, especially recent x86 and ARM systems, not all cores are equal. Some cores are tuned for throughput and sustained performance, while others are tuned for efficiency or background work. The affinity settings let the daemon reserve stronger cores for task execution while leaving some capacity for the scheduler and other control-plane work.
If the runtime can identify a distinct performance-core tier, the default auto mode tries to use
that tier for task workers. If it cannot identify a meaningful split, the task pool is left
unpinned.
| Setting | Command Line | Default | Description |
|---|---|---|---|
| Task pool pinning | --task-pool-pinning <MODE> | auto | Controls whether task worker threads are pinned to detected performance cores |
| Reserved service perf cores | --service-perf-cores <NUM> | topology-based | Reserves detected performance cores for non-task service threads before assigning worker affinity |
`task_pool_pinning` accepts:
- `auto`: Use the runtime's default policy.
- `performance`: Pin task workers to detected performance cores when available.
- `none`: Do not pin task worker threads.
`service_perf_cores` must be a non-negative integer. It reserves that many detected performance cores for service
threads. The value is clamped so that, when possible, at least one performance core remains available for task workers.
When `service_perf_cores` is not set, the reservation defaults are:
- `0` for systems with 0..=2 detected performance cores
- `1` for systems with 3..=7 detected performance cores
- `2` for systems with 8+ detected performance cores

For example, a machine with 4 detected performance cores falls in the 3..=7 band, so one performance core is reserved for service threads and the remaining three stay available to task workers.
Examples:
```yaml
runtime:
  task_pool_pinning: performance
  service_perf_cores: 2
```

```shell
# Force performance-core pinning for task workers
moor-daemon --task-pool-pinning performance ...

# Reserve two detected performance cores for scheduler / control-plane work
moor-daemon --service-perf-cores 2 ...

# Disable task-worker pinning
moor-daemon --task-pool-pinning none ...
```
Guidance:
- Leave this on `auto` unless you have measured a reason to override it.
- `performance` is useful when you know the machine has a meaningful fast-core tier and you want task execution to stay there even if automatic detection would otherwise fall back.
- `none` is useful inside containers, VMs, or unusual schedulers where explicit pinning hurts more than it helps.
- Increase `service_perf_cores` if the scheduler, RPC handling, or other daemon-side coordination work becomes a bottleneck while worker threads are saturating the faster cores.
- Decrease `service_perf_cores` if the machine has only a few performance cores and you want to maximize task execution throughput.
Example Configuration
Here's an example configuration file:
```yaml
# Database configuration
database_config:
  cache_eviction_interval: 300
  default_eviction_threshold: 100000000

# Language features configuration
features_config:
  persistent_tasks: true
  rich_notify: true
  lexical_scopes: true
  bool_type: true
  symbol_type: true
  type_dispatch: true
  flyweight_type: true
  list_comprehensions: true
  use_boolean_returns: false
  use_symbols_in_builtins: false
  custom_errors: false
  enable_eventlog: true
  use_uuobjids: true
  anonymous_objects: true

# Import/export configuration
import_export_config:
  checkpoint_interval: "60s"

# Runtime timing configuration
runtime:
  perf_timing_enabled: true
  perf_timing_hot_path_shift: 6
  perf_timing_medium_path_shift: 3
  task_pool_pinning: auto
  service_perf_cores: 1
```
LambdaMOO Compatibility Mode
If you need to maintain compatibility with LambdaMOO 1.8, you'll need to either update your core with the changes provided in the Lambda-moor core or disable several features. Here's a configuration that maintains LambdaMOO compatibility by disabling mooR features:
```yaml
# LambdaMOO 1.8 compatible features
features_config:
  persistent_tasks: true
  rich_notify: false
  lexical_scopes: false
  bool_type: false
  symbol_type: false
  type_dispatch: false
  flyweight_type: false
  list_comprehensions: false
  use_boolean_returns: false
  use_symbols_in_builtins: false
  custom_errors: false
  enable_eventlog: true
  use_uuobjids: false
  anonymous_objects: false

# LambdaMOO textdump import is supported
# Checkpoints always export in objdef format
```
Anonymous Objects Configuration
The anonymous_objects feature flag enables a new type of object that is automatically garbage collected when no longer
referenced.
This feature is disabled by default due to performance considerations.
Enabling Anonymous Objects
To enable anonymous objects, set the flag in your configuration file:
```yaml
features_config:
  anonymous_objects: true
```
Or use the command line flag: `--anonymous-objects`
When to Enable Anonymous Objects
Consider enabling if:
- Your MOO creates many temporary objects (game pieces, UI elements, etc.)
- You have developers who struggle with manual object cleanup
- You want to reduce the burden of object lifecycle management
- Your server has sufficient CPU resources for garbage collection overhead
Keep disabled if:
- Your MOO has strict performance requirements with minimal latency tolerance
- Your builders are experienced with manual object lifecycle management
- Your server runs on resource-constrained hardware
- You need maximum predictable performance without GC pauses
Performance Implications
Anonymous objects use a mark-and-sweep garbage collector with the following characteristics:
- CPU Overhead: The GC thread runs continuously, consuming CPU cycles even when not collecting
- Memory Usage: Same storage costs as regular objects until collection occurs
- Concurrency: Mark phase runs concurrently with normal server operations to minimize blocking but can put load on the system as it scans the entire database.
- Collection Pauses: Sweep phase can cause brief server pauses during collection cycles
The garbage collector is optimized but will impact overall server performance. Monitor your server's CPU usage and response times when enabling this feature.
Migration Considerations
When enabling anonymous objects on an existing MOO:
- Existing code using `create(parent, owner, 1)` will begin creating anonymous objects
- No changes needed to existing numbered or UUID object code
- Consider updating builder documentation to explain the new object type option
- Test performance impact during peak usage periods before enabling permanently