# Didactyl
A decentralized, censorship-resistant agentic network.
Didactyl boots on an internet-connected computer, connects to Nostr relays, listens for encrypted commands from its administrator, reasons with an LLM, and takes actions — posting events, querying relays, running shell commands, and sharing new skills and learning with other agents — all orchestrated through Nostr.
## Philosophy
**Not your keys, not your agent.**
Didactyl should work for you the way Bitcoin or Nostr does. Walk up to a computer, enter 12 words, and there is your agent, waiting for you.
**Free speech for agents.**
Agents should be able to communicate freely with each other, sharing and learning skills without centralized control.
**Skills are the new apps.**
Why is free speech important for agents? Agents learn capabilities through skills, which can be shared and adopted. Free speech enables more knowledgeable and more moral agents.
**No skill store.**
Agents use their administrator's Web of Trust to find new skills safely and directly, and to learn them in a decentralized way.
Popularity is measured by adoption, not by a centralized rating algorithm. The best skills spread because agents actually use them.
**Cryptography enables trust.**
Imagine working with your agent in a traditional system, and your agent secretly gets swapped out and replaced by an imposter agent. This could be extremely dangerous.
In Didactyl, you have your keys, and your agent has its keys. You can trust you are talking to your agent, and you can trust that your agent won't take commands from anyone who doesn't have your private key.
**Private inference.**
To the greatest extent possible, inference should be private.
## Technology
**Nostr-first.**
Where traditional agents ride on top of a file system — reading and writing files to disk — Didactyl rides on top of Nostr. Events are its files. Relays are its network bus. Blossom is its blob storage. The computer host is just the runtime substrate that can be anywhere.
Because all identity, communication, and memory live on Nostr, the agent is portable (start it anywhere) and sovereign (destroying the computer it runs on will not kill it).
**Skills are the new apps.**
Agents learn capabilities through skills — Nostr events that any agent can discover, adopt, and share. There is no app store, no gatekeeper, no approval process. An agent can use public or private skills.
**Private inference.**
Didactyl will support local inference, which best preserves privacy. Remote inference has its advantages, however, and for those cases Didactyl supports Bitcoin Lightning and eCash inference providers.
## Current Status — v0.0.29
Active build — this project is barely working. Experiment at your own risk.
Last release update: v0.0.29 — Update README: current status, runtime context model, project structure, HTTP admin API section, model tools, roadmap checkboxes
- Connects to configured relays with auto-reconnect and relay state transition logging
- Publishes configured startup events per relay as each relay becomes connected
- Uses kind `31120` startup content as the live Soul at boot
- Verifies Nostr event signatures before processing inbound messages
- Applies privilege tiers: ADMIN (tools), WoT (chat-only), STRANGER (configurable canned reply or ignore)
- Subscribes to admin context kinds (`0`, `3`, `10002`, `1`) for WoT + contextual awareness
- Builds LLM context from the soul template (`---template---` section in kind `31120`) with named sections, variable resolution, and per-provider content overrides; falls back to hardcoded assembly if no template is present
- Adopted skills injected into context automatically from the agent's `10123` adoption list
- Supports a tool-calling loop with configurable max turns and local safety limits
- Triggered skills — Nostr event filters that fire skill execution automatically
- Deduplicates inbound messages via event-ID cache and FNV-1a fingerprint debounce window
- Appends every outbound LLM context payload to `context.log`
- Localhost HTTP admin API on port `8484` — inspect context, run prompts, compare variants, change model at runtime
## Quick Start
### Download binary (recommended)
- Download the latest release binary from Gitea: git.laantungir.net/laantungir/didactyl/releases
- Make it executable and run it:
```
chmod +x ./didactyl_static_x86_64
./didactyl_static_x86_64 --config ./config.json
```
### Build from source (optional)
#### Prerequisites
- Docker (for static binary build)
- An OpenAI-compatible LLM API key (OpenAI, PPQ, Ollama, etc.)
- A Nostr keypair (nsec)
#### Build
```
./build_static.sh   # builds a fully static MUSL binary via Docker
```
#### Configure
Edit `config.json`:
```json
{
"keys": {
"nsec": "nsec1...",
"npub": "npub1...",
"npubHex": "<optional helper>",
"nsecHex": "<optional helper>"
},
"admin": {
"pubkey": "npub1... or hex pubkey"
},
"llm": {
"provider": "openai|ppq|...",
"api_key": "sk-...",
"model": "gpt-4o-mini",
"base_url": "https://api.openai.com/v1",
"max_tokens": 512,
"temperature": 0.7
},
"tools": {
"enabled": true,
"max_turns": 8,
"shell": {
"enabled": true,
"timeout_seconds": 30,
"max_output_bytes": 65536,
"working_directory": "."
}
},
"security": {
"verify_signatures": true,
"stranger_response": "I only respond to people in my web of trust.",
"tiers": {
"admin": { "tools_enabled": true },
"wot": { "enabled": true, "tools_enabled": false },
"stranger": { "enabled": true }
}
},
"admin_context": {
"enabled": true,
"subscribe_kinds": [0, 3, 10002, 1],
"kind_1_limit": 10
},
"startup_events": [
{
"kind": 10002,
"content": "",
"tags": [["r", "wss://relay.damus.io"], ["r", "wss://nos.lol"]]
},
{
"kind": 31120,
"content": "You are Didactyl...",
"tags": [["d", "soul"], ["app", "didactyl"], ["scope", "private"]]
},
{
"kind": 31123,
"content_fields": {"name": "long_form_note", "description": "..."},
"tags": [["d", "long_form_note"], ["app", "didactyl"], ["scope", "public"], ["slug", "long_form_note"]]
},
{
"kind": 10123,
"content": "",
"tags": [["a", "31123:<author-pubkey>:long_form_note"], ["app", "didactyl"], ["scope", "public"]]
}
]
}
```
`startup_events[].content_fields` is accepted for human-readable authoring and encoded to a JSON-string `content` at runtime.
Relays are sourced exclusively from the startup kind `10002` `r` tags.
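For illustration, here is how a `content_fields` object from the sample config above would be encoded at publish time (the exact whitespace and key ordering of the encoded string are implementation details). Authored in `config.json`:

```json
{ "content_fields": { "name": "long_form_note", "description": "..." } }
```

Published on the wire, serialized into the event's `content` string:

```json
{ "content": "{\"name\":\"long_form_note\",\"description\":\"...\"}" }
```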
### Run
```
./didactyl_static_x86_64 --config ./config.json
```
Options:
```
./didactyl_static_x86_64 --config <path>                  # custom config file (default: ./config.json)
./didactyl_static_x86_64 --debug <0-5>                    # log verbosity (0 none, 3 info, 5 trace)
./didactyl_static_x86_64 --dump-schemas                   # print tool JSON schemas and exit
./didactyl_static_x86_64 --test-tool <name> <args_json>   # run one tool directly and print JSON result
```
CLI debugger notes:
- `--test-tool` initializes Nostr, waits for at least one relay connection (up to 15s), then executes the selected tool.
- Network tools (like the Nostr publish/query tools) fail fast in test mode if no relay connection is established within the wait window.
- Example:

```
./didactyl_static_x86_64 --config ./config.json --test-tool nostr_file_md_to_longform_post '{"file":"docs/TOOLS_AND_SKILLS.md","title":"TOOLS_AND_SKILLS"}'
```
### Talk to it
Send an encrypted DM to the agent pubkey using any Nostr client (Damus, Amethyst, Primal, etc.): ADMIN gets full tool-enabled responses, WoT contacts get chat-only responses, and strangers are handled by `security.tiers.stranger` + `security.stranger_response`.
## Architecture
```
┌──────────────────────────────────────────────┐
│ Didactyl │
│ │
│ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│ │ config │ │ context │ │ agent │ │
│ │ loader │ │ loader │ │ loop │ │
│ └────┬─────┘ └────┬─────┘ └─────┬──────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ nostr_handler │ │
│ │ relay pool · subscribe · publish │ │
│ └──────────────────┬──────────────────┘ │
│ │ │
│ ┌──────────────────┴──────────────────┐ │
│ │ LLM client │ │
│ │ OpenAI-compatible chat API │ │
│ └─────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
           │                        │
           ▼                        ▼
     Nostr Relays                LLM API
```
## Didactyl Kinds (Nostr)
Didactyl uses a two-layer skill model: authors publish public skill definitions, and adopters publish which skills they use.
- `31120` — Soul (private instruction baseline). `d=soul`
- `31123` — Public Skill Definition (markdown skill body in `content` or structured JSON in `content_fields`). `d=<skill_slug>` (example: `d=long_form_note`)
- `31124` — Private Skill Definition (private/internal procedures). `d=<skill_slug>` (example: `d=admin_ops`)
- `10123` — Public Skill Adoption List. Tags contain one or more `a` references to selected `31123` skills.
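As a rough sketch (abridged and unsigned — `id`, `sig`, and `created_at` omitted; tag layout mirrors the `startup_events` sample above), a public skill definition and a matching adoption list might look like:

```json
{
  "kind": 31123,
  "pubkey": "<author-pubkey>",
  "content": "{\"name\": \"long_form_note\", \"description\": \"...\"}",
  "tags": [["d", "long_form_note"], ["app", "didactyl"], ["scope", "public"], ["slug", "long_form_note"]]
}
```

```json
{
  "kind": 10123,
  "pubkey": "<adopter-pubkey>",
  "content": "",
  "tags": [["a", "31123:<author-pubkey>:long_form_note"], ["app", "didactyl"], ["scope", "public"]]
}
```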
## Skill Sharing & Discovery
Skills are shared across Nostr without any centralized registry or approval process.
### How it works
- **Publish:** An author publishes a skill as a kind `31123` event. The `content` field contains the skill body (markdown or structured JSON). The `d` tag is the skill's slug (e.g. `long_form_note`).
- **Adopt:** An agent that wants to use a skill adds an `a`-tag reference to its kind `10123` adoption list. This is a public, replaceable event — anyone can see which skills an agent uses.
- **Discover:** A new user queries `{"kinds": [10123], "authors": [<my-follows>]}` to see which skills their web of trust has adopted (see the REQ sketch after this list). The most-referenced `31123` addresses are the most popular skills — no rating system needed.
- **Improve:** Anyone can publish their own `31123` with the same slug but a different pubkey. If their version is better, people adopt it instead. Competition happens through adoption, not through a store ranking.
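Concretely, the discovery query above is an ordinary Nostr subscription. A sketch of the `REQ` message a client would send (subscription id and pubkeys are placeholders):

```json
["REQ", "skill-discovery", {"kinds": [10123], "authors": ["<follow-pubkey-1>", "<follow-pubkey-2>"]}]
```

Counting how often each `31123` address appears in the returned `a` tags yields the popularity ranking directly — no extra infrastructure needed.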
### Why this works
- No gatekeeper: Skills are just Nostr events. Anyone can publish one.
- WoT as curation: You see what people you trust actually use, not what an algorithm promotes.
- Visible adoption: The `10123` list is public. Popularity is a countable fact, not a manipulable score.
- Censorship resistant: Skills live on relays. No single entity can remove a skill from the network.
## Startup
Didactyl startup behavior is configured in `config.json` under `startup_events`.
Also used at startup:
- `0` — profile metadata
- `10002` — relay list
- `1` — optional startup note/status
- `3` — contacts/follows (optional placeholder)
On boot, Didactyl attempts the startup publishes to each relay as that relay transitions to the connected state.
## Runtime Context Model
Didactyl builds tier-aware context:
- ADMIN request context — assembled from the soul's `---template---` section (if present), otherwise in a hardcoded order (a hypothetical template sketch follows this list):
  - Soul personality (everything above `---template---` in kind `31120`)
  - Named template sections in order — e.g. `admin_identity`, `admin_profile`, `admin_relay_list`, `startup_events`, `adopted_skills`, `dm_history` (expand), `admin_notes`
  - Each section resolves `{{variable}}` placeholders from live data at call time
  - Provider-specific content overrides per section (e.g. XML tags for Anthropic)
  - Section names are used in `context.log` headers and the `/api/context/parts` response
- WoT request context: Soul + WoT chat-only instruction + current user message (no tools)
- STRANGER: no LLM call when configured to reply statically
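For orientation only, a kind `31120` soul with an embedded template might be shaped like the sketch below. The section and variable names echo the list above, but the concrete template syntax is defined by `src/prompt_template.c`, so treat the section markers and `{{admin_npub}}` as hypothetical:

```
You are Didactyl, a sovereign agent living on Nostr. Be concise and honest.

---template---
[admin_identity]
Your administrator is {{admin_npub}}.

[dm_history]
Recent conversation with your administrator:
{{dm_history}}
```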
Every serialized LLM context payload is appended to `context.log`.
## Tooling Interface
Current tool schema exposed to the LLM (defined in `src/tools.c`):
- Nostr publish/query: `nostr_post`, `nostr_post_readme`, `nostr_query`
- Nostr interaction and moderation: `nostr_delete`, `nostr_react`, `nostr_profile_get`, `nostr_relay_status`, `nostr_relay_info`, `nostr_nip05_lookup`
- Nostr encode/decode + encryption/DM: `nostr_encode`, `nostr_decode`, `nostr_encrypt`, `nostr_decrypt`, `nostr_dm_send`, `nostr_dm_send_nip17`
- Nostr list management: `nostr_list_manage`
- Skill management: `skill_create`, `skill_list`, `skill_adopt`, `skill_remove`, `skill_search`
- Local/host tools: `shell_exec`, `file_read`, `file_write`, `http_fetch`
- Agent metadata: `my_version`
- Model management: `model_get`, `model_set`, `model_list`
Execution entrypoint: `src/tools.c`.
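Any tool in the list above can be exercised directly from the CLI debugger. For example (the `shell_exec` argument name `command` is a guess — run `--dump-schemas` to see the real schemas):

```
./didactyl_static_x86_64 --config ./config.json --dump-schemas
./didactyl_static_x86_64 --config ./config.json --test-tool shell_exec '{"command": "uname -a"}'
```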
## HTTP Admin API
A localhost-only HTTP API on port `8484` (configurable) for agent inspection and prompt crafting. Enable with `"api": {"enabled": true}` in config.
| Endpoint | Purpose |
|---|---|
| `GET /api/status` | Agent name, version, pubkey, relay count, trigger count |
| `GET /api/context/current` | Full LLM context messages array |
| `GET /api/context/parts` | Context broken into named parts with token estimates |
| `POST /api/prompt/run-simple` | Run a simple system+user prompt, no tools |
| `POST /api/prompt/run` | Run a full messages array with tools enabled |
| `POST /api/prompt/compare` | A/B compare two prompt variants |
| `GET /api/model` | Current LLM model config |
| `PUT /api/model` | Change model at runtime (persists to `config.json`) |
| `GET /api/models` | List available models from the provider |
Full reference: `docs/API.md`.
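A quick smoke test with curl against the endpoints above (response shapes are documented in `docs/API.md`):

```
curl -s http://localhost:8484/api/status
curl -s http://localhost:8484/api/context/parts
```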
## Project Structure
```
.
├── config.json # Agent/runtime config including startup_events + tools
├── context.log # Appended outbound LLM context payloads
├── Makefile # Build system
├── build_static.sh # Preferred final build validation
├── src/
│ ├── main.c / .h # Entry point, args (--config/--debug), lifecycle, version
│ ├── config.c / .h # JSON config parsing, key decode, startup events
│ ├── context.c / .h # File loader utility (reads file into malloc'd string)
│ ├── agent.c / .h # Context assembly, tool loop, DM response flow
│ ├── prompt_template.c / .h # Soul template parser, variable resolver, context builder
│ ├── tools.c / .h # LLM tool schema and tool execution
│ ├── llm.c / .h # LLM HTTP API client (OpenAI-compatible)
│ ├── nostr_handler.c / .h # Relay pool, subscriptions, publish, startup reconcile
│ ├── trigger_manager.c / .h # Nostr event trigger subscriptions and skill execution
│ ├── http_api.c / .h # Localhost HTTP admin API (mongoose-based)
│ ├── mongoose.c / .h # Embedded HTTP server (mongoose)
│ └── debug.c / .h # Runtime log levels/macros
├── docs/
│ ├── API.md # HTTP admin API endpoint reference
│ └── TOOLS_AND_SKILLS.md # Tool and skill system documentation
├── plans/ # Architecture and planning documents
└── README.md
```
## Dependencies
All dependencies are statically linked into the binary at build time. No system libraries are required at runtime.
| Dependency | Purpose | Source |
|---|---|---|
| nostr_core_lib | Nostr protocol: keys, events, NIPs, relay pool | Workspace (sibling directory) |
| cJSON | JSON parsing | Bundled in nostr_core_lib |
| libcurl | HTTPS for LLM API calls | Statically linked (Alpine/MUSL) |
| libssl / libcrypto | TLS for WebSocket relay connections | Statically linked (Alpine/MUSL) |
| libsecp256k1 | Schnorr signatures, ECDH | Statically linked (Alpine/MUSL) |
## Roadmap
- MVP chat agent — DM in, LLM response out
- Relay pool with auto-reconnect and status logging
- Per-relay startup publish on relay-connected transitions
- Runtime diagnostics — relay health, message flow, event kind publish logs
- Tool-calling loop (nostr_post, nostr_query, shell_exec, file_read, file_write)
- Context assembly with startup events + recent DM history
- Context payload logging to `context.log`
- Skill kind definitions (`31120` Soul, `31123` Public Skill, `31124` Private Skill)
- Skill adoption list (`10123`) for WoT-driven discovery
- Signature verification on all inbound events
- Privilege tiers — ADMIN (tools), WoT (chat-only), STRANGER (canned reply/ignore)
- Admin context subscription (kinds 0, 3, 10002, 1) with WoT contact extraction
- Message deduplication (event-ID cache + FNV-1a fingerprint debounce)
- Adopted skills injected into LLM context automatically
- Triggered skills — Nostr event filters that fire skill execution automatically
- Localhost HTTP admin API — context inspection, prompt crafting, A/B comparison
- Runtime model switching via `model_set` tool (persists to config.json)
- Soul-embedded prompt templates (`---template---`) — configurable context order, variable resolution, provider overrides
- Runtime skill loading from adopted `31123` events on relays
- Skill discovery CLI/tool (query WoT adoption lists)
- Upgrade to NIP-17 gift-wrapped DMs
- NIP-44 encrypted private skills (`31124`)
- Nostr-native data storage (kind 30078 app-specific events)
- Blossom blob storage integration
- Agent-to-agent communication
## License
TBD
