SLV v0.10.0 Released — From CLI to AI Agent: Deploy and Operate Solana Validators and RPCs Through Natural Language

2026.02.25
ELSOUL LABO B.V. (Headquarters: Amsterdam, Netherlands; CEO: Fumitake Kawasaki) and Validators DAO have released SLV v0.10.0. With this release, setting up and operating Solana validators and RPCs can now be completed entirely through natural language conversation, without specialized command-line expertise.
Previously, the initial setup of a Solana validator required CLI proficiency, manual configuration file editing, and memorization of procedures — a process that typically took hours to days. With SLV v0.10.0, deployment is completed through a brief conversation with an AI agent. This release structurally lowers the barrier to entry for the Solana network.

Who Benefits, and How

For those starting a new Solana validator — Setting up a Solana validator has traditionally been a highly technical undertaking. Which commands to run in which order, which configuration values are appropriate, which version to use — a single misjudgment in any of these can prevent stable participation in the network. With SLV v0.10.0, an AI agent handles these decisions accurately. Simply describe what you need, and the agent automatically assembles the required steps, confirms with you, and executes.
For existing validator operators — Solana is currently in a phase of frequent version changes and rollbacks as the network upgrades to v3. The operational burden is significant, with each cycle demanding time-consuming procedure verification and execution. SLV v0.10.0 enables upgrades, downgrades, restarts, and identity switches — the day-to-day operational tasks — to be completed entirely through natural language conversation.
For the Solana ecosystem as a whole — The quality of the Solana network depends directly on the operational quality of each validator. When barriers to entry remain high, operator diversity is limited, constraining the network's overall decentralization and resilience. Lowering the barrier to entry while maintaining operational quality is essential for the healthy growth of the Solana ecosystem.

From CLI to AI Agent — What Changed Technically

Until now, SLV has been available as a CLI tool. v0.10.0 fully preserves that CLI foundation while adding a new layer that enables AI agents to operate it with precision.
```text
You: Deploy a mainnet Jito validator on 203.0.113.10
Agent: I'll set up a mainnet Jito validator. Let me walk you through...
```
Operators no longer need to memorize commands or manually edit configuration files. The AI agent selects the appropriate procedures, proposes configuration values, verifies through a dry run, and then proceeds to execution.
Critically, this is not a system that hands Solana operations over to a generic AI.

Why This Only Works with SLV's AI Agent Skills

Attempting to automate Solana validator operations with a generic AI alone does not produce stable results. Validator operations involve numerous hard-to-document nuances — version-specific prerequisites, network configuration differences, and rollback decisions during incidents. When an AI operates without this knowledge, ambiguous procedures are executed, risking degraded validator performance and lower network quality.
The AI Agent Skills provided in SLV v0.10.0 systematize the real-world operational knowledge accumulated through SLV's development and operation into a form that AI agents can reference accurately. They cover the complete mapping between CLI commands and Ansible playbooks, recommended versions, safe operational practices, and common pitfalls.
Because validator operations demand trust, the precision of the foundation that an AI agent references is critically important. SLV provides that foundation.

Three Production-Ready Skills

SLV v0.10.0 introduces three AI Agent Skills:
slv-validator — A skill for deploying and managing mainnet and testnet validators, supporting Jito, Agave, and Firedancer configurations.
slv-rpc — A skill specialized for deploying and managing RPC nodes, covering Standard, Index, and Geyser gRPC configurations.
slv-grpc-geyser — A skill for deploying and managing gRPC Geyser streaming, supporting Yellowstone and Richat.
Each skill includes SKILL.md with comprehensive operational knowledge, AGENT.md defining interactive deployment flows, an automated prerequisite installation script, and sample inventory files.
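As a rough sketch, a minimal Ansible inventory for a single validator host might look like the following. The host name and variables are illustrative assumptions, not SLV's actual inventory schema; the sample files shipped with each skill are authoritative.

```yaml
# Hypothetical minimal inventory sketch.
# The field names (ansible_host, ansible_user) are standard Ansible,
# but the grouping and values are assumptions, not SLV's shipped layout.
all:
  hosts:
    validator-1:
      ansible_host: 203.0.113.10   # target server's IP address
      ansible_user: ubuntu         # SSH user; adjust to your environment
```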
Skills are composed of plain Markdown and Ansible, with no lock-in to any specific AI agent. They work with OpenClaw, Claude Code, Codex, Cursor, Windsurf, and any other AI coding agent. You can also run the Ansible playbooks directly without an AI agent.

Full Firedancer Support

v0.10.0 significantly expands Firedancer support. It now officially supports firedancer-agave and firedancer-jito validator types, with parameterized config templates (hugepages, ports, identity), service management via firedancer.service, and hugetlbfs cleanup for Firedancer deployments. As Firedancer gains attention as Solana's next-generation validator client, full support from SLV makes it easier for more operators to adopt Firedancer.

Safety by Design — Dry-Run First

When an AI agent performs operations, SLV always proposes a dry run (--check mode) first. Changes are reviewed before execution, and the operator approves before anything is applied.
Note that AI agents behave differently depending on the prompts and instructions they receive. While SLV's skills provide an accurate operational foundation for the AI agent, the final responsibility for execution decisions and their outcomes rests with the operator. This is no different from traditional CLI operations — the form of the tool changes, but the ownership of operational responsibility does not.

WBSO Approved for Five Consecutive Years — Where Research Meets Implementation

ELSOUL LABO has been approved under the WBSO (Wet Bevordering Speur- en Ontwikkelingswerk), the Dutch government's R&D support program, for five consecutive years since 2022. Among the research projects approved for 2026 is "Research and Development on Automation of Validator Placement and Operational Orchestration" — and SLV v0.10.0's AI Agent Skills represent the direct implementation of this research theme.
ELSOUL LABO's research and development is not separated from real-world implementation and operations. Research hypotheses take shape as implementations, are validated under operational constraints, and the challenges discovered feed back into the next cycle of research. SLV v0.10.0 was born from this cycle.

Looking Ahead

SLV will continue to pursue greater precision and more advanced automation through integration with MCP (Model Context Protocol).
AI agent-driven validator operations are just getting started. While the current release already enables everything from initial deployment to day-to-day operations to be completed via natural language, MCP integration will unlock even more advanced automation. For example, automated failover — a complex, multi-step procedure where failure is not an option — can be executed with greater precision by AI agents through MCP. Monitoring-based decision making, integrated orchestration across multiple nodes, and other capabilities that further elevate operational reliability lie ahead.
What SLV provides is the trusted foundation that supports this evolution. Not vague AI adoption, but AI agent collaboration backed by precise operational knowledge. SLV will continue to evolve as the foundation that structurally supports Solana's operational quality and creates an environment where anyone can participate on the same terms.