Distributed config collection. One agent per site, every device covered.
The rConfig Vector Agent is a single Go binary that runs as a systemd or Windows service at each site. It polls your network devices over SSH, SNMP, and HTTP, then ships the configs back to rConfig over outbound-only TLS. No firewall pinholes. No VPN.
A first-class component of rConfig, the open-source, enterprise-grade network configuration management platform.
- Go 1.24
- Linux · Windows
- SSH · SNMP · HTTP
- rConfig V8+
What is a network configuration collector agent?
A network configuration collector agent is a small piece of software you run at each site to poll local network devices, capture their configurations, and ship the results back to a central NCM server. The rConfig Vector Agent does this over outbound TLS, so no inbound firewall rules are needed.
One central server cannot reach every device on your network. It never could.
Real networks are segmented, regional, and political. A device in Frankfurt cannot be polled from New York without a firewall exception, a jump host, or a long-running VPN. A device in a tenant VRF cannot be reached from the management VRF at all. A device in an OT segment is supposed to be unreachable from IT by design.
Traditional network configuration management runs from one host. That host accumulates credentials, firewall rules, and blast radius. It also becomes the bottleneck: every SSH poll, every SNMP walk, every diff ingest queues behind a single collector.
rconfigvector, the rConfig Vector Agent, is a single Go binary you deploy next to the devices. It fetches jobs from rConfig over an outbound-only API, talks to devices on the local segment, and ships results back through the same secure channel. No inbound firewall rules. No credentials leaving the site. No central bottleneck.
Built for real networks, not lab networks
Six engineering decisions that turn a tiny Go binary into a restart-safe, size-bounded, outbound-only collector for distributed network configuration management.
- 01
Dynamic worker pool, Go-native
Workers scale up and down from the live settings snapshot. No restart, no deploy. Each agent scales up to 1,000 devices per instance, with a watchdog per worker so a hung SSH session never takes the fleet down with it.
- 02
SSH, SNMP, Uptime, HTTP: all real
Full interactive-shell SSH with enable mode, pager handling, and legacy device quirks (Avaya Ctrl+Y, HP ProCurve "press any key", Cisco ETM MACs). The same protocol surface your core collector supports.
- 03
SQLite-backed durable queues
Jobs and logs persist to local SQLite in WAL mode. Agent restarts, network blips, and server side outages do not drop work. Everything replays from disk when connectivity returns.
- 04
Outbound-only, key-authenticated
The agent dials out to rConfig over TLS with an API key and optional strict SSL verification. No inbound ports. No credentials shipped to the core server. Your infosec team will actually approve it.
- 05
Size-bounded log sync, 413-aware
Log batches are built by byte size, not just count. The client splits oversized batches, quarantines pathological records, and stores bounded previews of remote error bodies. No memory blow-ups on a bad HTML error page.
- 06
Graceful shutdown, no lost jobs
SIGTERM closes the quit channel, workers finish the job in hand, the main goroutine blocks on sync.WaitGroup, and only then do DB handles close. Restart the service and you lose nothing.
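The shutdown pattern described above can be sketched in a few lines of Go. This is a simplified stand-in, not the agent's actual source: `processJobs`, the channel names, and the pool size are illustrative, but the mechanics are the same — workers finish the job in hand, the `sync.WaitGroup` blocks until every worker has exited, and only then does teardown proceed.

```go
package main

import (
	"fmt"
	"sync"
)

// processJobs drains the jobs channel with nWorkers goroutines.
// Closing quit (as a SIGTERM handler would) tells workers to stop
// picking up new work; each worker finishes its current job, then
// the WaitGroup releases and teardown can safely proceed.
func processJobs(jobs chan int, quit chan struct{}, nWorkers int) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	done := 0
	for i := 0; i < nWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-quit:
					return // graceful: take no new jobs after quit closes
				case id, ok := <-jobs:
					if !ok {
						return // queue drained
					}
					_ = id // a real worker would run the SSH/SNMP poll here
					mu.Lock()
					done++
					mu.Unlock()
				}
			}
		}()
	}
	wg.Wait() // block until every worker has exited — then close DB handles
	return done
}

func main() {
	jobs := make(chan int, 10)
	for i := 0; i < 10; i++ {
		jobs <- i
	}
	close(jobs) // all queued work; workers drain the queue, then exit
	quit := make(chan struct{})
	n := processJobs(jobs, quit, 4)
	fmt.Println("jobs completed:", n)
}
```

Because the `select` only returns between jobs, a restart can never interrupt a job mid-flight; anything still queued replays from the SQLite queue on the next start.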
Who actually runs the Vector Agent
Every scenario here is a real team running the agent today: enterprises, MSPs, and OT-side network operators. Pick the one that sounds most like you.
- Multi-site enterprise
“I manage devices in 40 branch offices behind separate firewalls.”
Drop a Vector Agent at each site. Each agent polls its local devices and syncs back over outbound-only TLS. Zero inbound firewall rules, zero site-to-site VPN required.
Read the docs: multi-site deployment
- MSP
“I am collecting configs from 300 customers on separate networks.”
One agent per tenant, keyed to one rConfig server. Configs stay isolated per customer, credentials never leave the customer site, and billing is straightforward because every job is tagged to its agent.
Read the docs: MSP scenario
- OT / air gap
“I need to pull configs from an air-gapped OT network.”
Run the agent inside the OT segment with a one-way outbound rule to rConfig. SSH polls stay on the OT side; only signed, structured job results cross the boundary. No IT-side credentials inside OT.
Read the docs: air gap and OT
- Horizontal scale
“I want to offload collection load from my central rConfig server.”
Point heavy-polling device groups at a dedicated Vector Agent. The core server only receives structured job results. No SSH wait time, no TCP state, no credential handling in the hot path.
Read the docs: horizontal scale
- Windows estate
“My Windows server room cannot run Linux. I still need config backups.”
Ship the Windows build. Same binary semantics, same API contract, runs as a Windows service. Network engineers on Windows-only estates get the same distributed collection story as their Linux peers.
Read the docs: Windows service
- Audit and zero trust
“I need zero trust proof of what ran, on which device, by whom.”
Every job carries a ULID, timing data, and a structured log trail that syncs back to rConfig with bounded size limits. Compliance auditors get an evidence chain; engineers get a deterministic replay.
Read the docs: audit trail
How the agent talks to rConfig, and why you want it that way
A split architecture: rConfig holds the intent, the agents hold the reach. The data path is outbound-only TLS with durable local queues on the agent side.
- Vector Agent · Linux · Paris · 10.42.0.0/16 · SSH / SNMP / HTTP
- Vector Agent · Linux · Dublin · 10.77.0.0/16 · SSH / SNMP / HTTP
- Vector Agent · Windows · New York · 10.10.0.0/16 · SSH / SNMP / HTTP
Jobs flow outbound only
Agents poll rConfig for work on a configurable ticker. The rConfig server never initiates a connection. Firewall rules stay one-directional.
Credentials stay local
Connection parameters are delivered per job over TLS, used once, and never persisted on disk outside the SQLite queue. rConfig holds the source of truth.
Results and logs flow through one channel
Successful job output uploads to api/agentsync/jobs/push. Structured logs replay through api/agentsync/logs/ingest with size-bounded batching and 413-aware retries.
Every site scales independently
Agents are stateless except for their local queue. Add a site, drop in a binary, bind an API key. No shared DB, no shared cache, no coordination surface.
What it looks like running in production
The Vector Agent is boring on purpose. systemd starts it, it syncs, it runs, it does not surprise you at 3am. Measured on a reference site collector.
$ systemctl status rconfigvector
● rconfigvector.service - rConfig Vector Agent 1.1.0
Loaded: loaded (/etc/systemd/system/rconfigvector.service; enabled)
Active: active (running) since Wed 2026-04-24 10:12:04 UTC; 12d ago
Main PID: 42881 (rconfigvector)
Tasks: 18 (limit: 4915)
Memory: 62.4M
CPU: 2min 41.103s
CGroup: /system.slice/rconfigvector.service
└─42881 /var/www/html/rconfigvector
Apr 24 10:12:04 par-collector-01 rconfigvector[42881]: ✓ API status check
Apr 24 10:12:04 par-collector-01 rconfigvector[42881]: ✓ Agent settings syncer started
Apr 24 10:12:04 par-collector-01 rconfigvector[42881]: ✓ Log syncer started
Apr 24 10:12:05 par-collector-01 rconfigvector[42881]: worker_count=8 queue_depth=42

$ sqlite3 /var/www/html/vectoragent/logs/logs.db \
"SELECT level, message, sent_at FROM logs ORDER BY id DESC LIMIT 5;"
INFO Worker 3 processing job ID 41207 2026-04-24T10:14:02Z
INFO Data sent to Vector Server for 41207 2026-04-24T10:14:02Z
INFO Worker 5 processing job ID 41208 2026-04-24T10:14:03Z
WARN 413 from logs/ingest, splitting batch (n=88) 2026-04-24T10:14:03Z
INFO Recovered batch: 2 sub-batches accepted, 0 quarantined 2026-04-24T10:14:04Z

Runtime characteristics
Measured on a 4 vCPU / 4 GB Linux collector, Go 1.24 build, 8 workers, 25 jobs/min sustained. Local queue uses SQLite WAL.
- Cold-start memory
- 18 MB
- Linux, idle worker pool
- Steady-state memory
- 62 MB
- 8 workers × 25 jobs/min
- Binary size
- 14 MB
- linux/amd64, stripped
- Avg SSH fetch
- 241 ms
- Cisco IOS show run
- Queue recovery
- < 2 s
- after agent restart
- Log batch size
- 64 KB
- default LOG_MAX_BATCH_BYTES
- TLS handshake
- 1 RTT
- reused http.Transport
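The size-bounded batching behind the LOG_MAX_BATCH_BYTES figure above can be sketched as follows. This is a simplified stand-in for the agent's actual batcher, with an illustrative `batchBySize` function: batches are cut by accumulated byte size rather than record count, and any single record larger than the cap is set aside as quarantined instead of being shipped — which is what keeps a pathological record (say, a giant HTML error page) from blowing the payload.

```go
package main

import "fmt"

// batchBySize groups records into batches whose total payload stays
// under maxBytes. A record that alone exceeds maxBytes can never be
// sent within the limit, so it is quarantined rather than batched.
func batchBySize(records []string, maxBytes int) (batches [][]string, quarantined []string) {
	var cur []string
	size := 0
	for _, r := range records {
		if len(r) > maxBytes {
			quarantined = append(quarantined, r) // pathological record: set aside
			continue
		}
		if size+len(r) > maxBytes && len(cur) > 0 {
			batches = append(batches, cur) // cut the batch before it overflows
			cur, size = nil, 0
		}
		cur = append(cur, r)
		size += len(r)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches, quarantined
}

func main() {
	recs := []string{"aaaa", "bbbb", "cccc", "ddddddddddddddddddddd"}
	b, q := batchBySize(recs, 10)
	fmt.Printf("batches=%d quarantined=%d\n", len(b), len(q))
}
```

A 413 response from logs/ingest triggers the same logic recursively: re-batch the rejected payload at a smaller cap and retry the sub-batches.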
How the Vector Agent compares to other remote collectors
All of these approaches have shipped in production networks. The comparison is about which one to reach for when you need distributed, restart-safe, outbound-only config collection.
| Capability | Vector Agent | Direct SSH from core | RANCID / Oxidized | Vendor collector proxy | Custom cron scripts |
|---|---|---|---|---|---|
| Outbound-only firewall profile | Yes | No | No | Partial | No |
| Per-site credential isolation | Yes | No | No | Partial | No |
| Durable local queue | Yes | No | No | Partial | No |
| Dynamic worker scaling | Yes | No | No | No | No |
| SSH · SNMP · Uptime · HTTP | Yes | Yes | SSH only | Varies | Varies |
| Interactive-shell SSH quirks (Avaya, HP, Cisco) | Yes | Partial | Partial | Partial | No |
| Windows supported | Yes | No | No | Partial | Yes |
| Structured, size-bounded log sync | Yes | No | No | Partial | No |
| Commercial support available | Yes | No | No | Yes | No |
If you run a flat, single-subnet lab, a central collector is probably fine. If your network is segmented, multi-site, or multi-tenant, a distributed agent is the only approach that scales without adding firewall rules forever.
From nothing to polling devices: three steps
Expand any step to see the exact commands.
Built for rConfig. Backed by 14+ years of NCM engineering.
The Vector Agent is the distributed arm of rConfig, the network configuration management platform trusted by enterprises, MSPs, and government networks. rConfig owns the workflows: scheduling, diffs, compliance, change alerts, RBAC, the API surface. The agent owns the reach.
Ship a central rConfig server. Ship Vector Agents to every site, tenant, or regulated segment that needs local collection. Scaling your coverage stops being a firewall ticket and starts being a package install. That is the model the platform was designed for, and it is the model that keeps working as your estate grows into thousands of devices across dozens of sites.
Need help sizing a deployment, hardening an OT-side install, or planning an MSP rollout? The rConfig professional services team does this every week.
- Back office
rConfig Vector
The NCM control plane: schedules, diffs, compliance, RBAC, and the API surface every other layer talks to.
- Data plane
rConfig Vector Agent
The distributed Go collector you're reading about right now. Pulls jobs from Vector, talks to local devices, ships results back over outbound TLS.
- Front office
rConfig Vector Prism
The white-label customer portal. Tag-scoped, MFA-mandatory, fully branded: what your end customers actually log in to.
Maintained by the rConfig engineering team. Last updated: 2026-04-25.
Frequently asked questions
Ship config collection to every site, segment, and tenant
The Vector Agent is a single Go binary that runs as a systemd or Windows service. Deploy it anywhere your devices live, and let rConfig handle the rest.