Pair Infrahub with rConfig. Back up every device your data layer describes. Prove compliance from the validated graph you already trust.
[Diagram: Infrahub graph (source) → rConfig device record → fans out to backup archive, diff view and compliance report]
rConfig reads your Infrahub schema over the GraphQL API, backs up the running configuration of every device your team has modelled, diffs every change, and produces NIS2 or DORA evidence without a custom GraphQL plus Python pipeline to maintain, without a Nornir job to babysit, and without a second device list living somewhere outside the data layer your automation already trusts.
- Infrahub 1.0 and later
- GraphQL primary, REST where useful
- Schema flexible device model
Infrahub gives your automation a validated data layer. It was never built to back up the devices that data describes.
If you run Infrahub, you already know why teams chose it over rigid CMDBs and brittle IPAMs. A schema first data platform built on Neo4j. A user defined model where the team decides what a device is, what an interface looks like, and how the relationships work. Branches, proposed changes and peer review for the data itself, not just the templates around it. OpsMill describes the product as a knowledge graph for infrastructure, and that framing is honest. Infrahub holds intent, validated relationships, and the data your automation reads from.
What Infrahub was never built to do is collect the running configuration from the devices your schema describes. That isn’t a limitation, it’s a scope decision. OpsMill has been clear that Infrahub is a data management platform, not a configuration management tool. The team has stayed disciplined about the boundary. Configuration backup is a separate discipline.
That leaves a gap that shows up the moment an auditor asks what the firewall config looked like in March, or when a change lands at 2am and nobody can work out what was there before. rConfig fills that gap. It consumes your Infrahub data over the GraphQL API, treats whatever schema your team has defined as the authoritative description of the device estate, and captures the running configuration of every device on the schedule you choose.
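To make that concrete, here is a minimal sketch of the kind of read-only GraphQL call involved. Everything in it is an assumption for illustration: the node type `InfraDevice`, the URL, the token, and the `X-INFRAHUB-KEY` auth header, which follows the convention Infrahub deployments commonly use. Verify all of it against your own schema and instance.

```python
# Minimal sketch: read device nodes from Infrahub over GraphQL.
# Assumptions (verify against your schema and deployment):
#   - a node type named InfraDevice with a `name` attribute
#   - token passed in the X-INFRAHUB-KEY header
import requests

INFRAHUB_URL = "https://infrahub.example.com"   # hypothetical
API_TOKEN = "changeme"                          # read-scoped token

QUERY = """
query {
  InfraDevice {
    edges {
      node {
        id
        name { value }
      }
    }
  }
}
"""

resp = requests.post(
    f"{INFRAHUB_URL}/graphql",                  # default branch
    json={"query": QUERY},
    headers={"X-INFRAHUB-KEY": API_TOKEN},
    timeout=30,
)
resp.raise_for_status()
for edge in resp.json()["data"]["InfraDevice"]["edges"]:
    node = edge["node"]
    print(node["id"], node["name"]["value"])
```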
- Running configuration is not data Infrahub holds. The schema is for intent, relationships and validated state, not for the line by line text of what a Cisco or Juniper device is currently running.
- A custom GraphQL query piped into a Python backup script works until your schema evolves underneath it, and someone has to remember to update the query.
- Nornir plus NAPALM driven by Infrahub data is a legitimate stack if you have the NetDevOps capacity to keep it running.
- An auditor does not want a graph diff. They want a configuration archive that proves what was on the firewall last Friday, with a timestamp and a user.
rConfig is the downstream consumer that respects your data layer. We read from Infrahub, we never write to it, and we produce the configuration evidence the data layer was never trying to produce.
Infrahub plus rConfig, in three moves.
Sync, capture, prove. Three capabilities that turn a modelled Infrahub estate into an audit ready one. Every device, every configuration, every change.
Sync
Infrahub nodes become rConfig devices automatically, scoped by whichever node type your team modelled. Add a device to the schema, it lands in rConfig on the next sync. Retire one in Infrahub, it's flagged in rConfig for review before any backup job breaks.
Capture
Every running configuration, every startup configuration, every change, stored and diffable. 200+ vendors out of the box. No plugin to install on Infrahub. No schema modifications. No GraphQL pipeline to babysit.
Prove
Compliance policies run against every device Infrahub describes. NIS2, DORA, PCI-DSS, CIS benchmarks, and anything your security team writes. Evidence exports in minutes.
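To give a feel for what a policy encodes, here is a conceptual sketch of a compliance check. This is not rConfig's policy syntax; the rules, sample config and function are illustrative only.

```python
# Conceptual sketch of a compliance check (not rConfig's policy syntax):
# flag any captured config missing a required hardening line.
import re

REQUIRED = [
    r"^service password-encryption",   # example CIS-style rule for IOS
    r"^no ip http server",
]

def check_config(config_text: str) -> list[str]:
    """Return the rules this configuration fails."""
    lines = config_text.splitlines()
    failures = []
    for rule in REQUIRED:
        if not any(re.match(rule, line.strip()) for line in lines):
            failures.append(rule)
    return failures

sample = "hostname edge-fw-01\nservice password-encryption\n"
print(check_config(sample))   # -> ['^no ip http server']
```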
Infrahub describes the network. rConfig captures what’s running on it. Between the two, your automation has data and your auditor has evidence.
How rConfig syncs with Infrahub, step by step.
Five screens. The one extra step compared with our other integrations is schema mapping, because Infrahub does not assume what a device looks like and neither do we.
- Step 1: Authorise. Drop an Infrahub API token into rConfig's Integrations screen. Enter the Infrahub URL and an optional branch (defaults to main). The Test Connection and Test Credentials buttons confirm reachability and scope in seconds.
- Step 2: Identify. Tell rConfig which Infrahub node types represent the devices you want to back up. Most schemas use Device, NetworkDevice, InfraDevice or something the platform team chose during onboarding. rConfig reads schema introspection and lists the node types that look like devices. You confirm.
- Step 3: Scope. Filter the nodes you want to bring across. Use any attribute or relationship in your schema. Filter by site, role, manufacturer, status, or whichever dimension the team modelled. Most teams start with a handful of test devices on a feature branch and widen from there.
- Step 4: Map. Translate Infrahub attributes into rConfig vendors, templates and credentials using tag based mapping. Set it up once, applied on every sync thereafter. Default mappings cover the common shapes, including the OpsMill schema library starting kits and the patterns most teams settle on after their first migration from NetBox or Nautobot.
- Step 5: Sync. Run it now, schedule it (hourly, daily, weekly), or trigger from the CLI with php artisan rconfig:integration-infrahub. Idempotent: reruns produce identical state, failures resume cleanly on the next cycle. A sketch of the scoped query behind steps 2 and 3 follows this list.
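The sketch below illustrates the scoped, branch-pinned read that steps 2 and 3 describe. The node type, the filter key (site__name__value, modelled on Infrahub's attribute filter style), the branch name and the endpoint path are all assumptions; adjust them to your schema and deployment.

```python
# Sketch of the scoped, branch-pinned read behind steps 2 and 3.
# All names below are assumptions: adjust node type, filter and branch.
import requests

INFRAHUB_URL = "https://infrahub.example.com"
API_TOKEN = "changeme"
BRANCH = "rconfig-pilot"            # feature branch from step 3

QUERY = """
query ($site: String!) {
  InfraDevice(site__name__value: $site) {
    edges { node { id name { value } } }
  }
}
"""

resp = requests.post(
    f"{INFRAHUB_URL}/graphql/{BRANCH}",          # branch-scoped endpoint
    json={"query": QUERY, "variables": {"site": "dub01"}},
    headers={"X-INFRAHUB-KEY": API_TOKEN},
    timeout=30,
)
resp.raise_for_status()
devices = resp.json()["data"]["InfraDevice"]["edges"]
print(f"{len(devices)} devices in scope on branch {BRANCH}")
```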
The integration is one way. rConfig never writes to Infrahub. The graph stays clean. Once devices are in rConfig, backup, diff and compliance run automatically. Infrahub continues as the data layer your automation reads from.
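Why reruns are safe is easy to see in miniature: records are upserted keyed by a stable Infrahub node id, so syncing twice converges on the same state. The sketch below is illustrative only, not rConfig's internal sync code.

```python
# Sketch of idempotent sync: devices are upserted keyed by their
# Infrahub node id, so reruns converge on identical state.
# (Illustrative only; rConfig's internal sync is not this code.)

def upsert_device(store: dict, node: dict) -> None:
    """Insert or update a device record keyed by Infrahub node id."""
    store[node["id"]] = {
        "name": node["name"]["value"],
        "source": "infrahub",
    }

store: dict = {}
node = {"id": "17f1a2", "name": {"value": "edge-fw-01"}}
upsert_device(store, node)
upsert_device(store, node)   # rerun: identical state, no duplicate
assert len(store) == 1
```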
Built for how Infrahub teams actually work.
The jobs the network team, the security team and the auditor each need from a configuration management layer. No features bolted on for show.
Multi vendor configuration backup
Cisco, Fortinet, Palo Alto, Check Point, Juniper, Arista, Huawei, Nokia, MikroTik, HPE and 190 more. Hourly, daily, or on demand.
Change detection with a readable diff
See exactly what changed, line by line, since the last known good version. Filter out noise like timestamps and session identifiers. A sketch of the idea follows this list.
NIS2, DORA and CIS compliance reporting
Write policies once, run them against every device Infrahub describes. Export evidence your auditor can actually read.
One click configuration restore
Push a known good configuration back to any device in under 90 seconds, with approvals and a full audit trail.
Bulk configuration deployment
Apply the same template across every device that matches a filter, in a single job with preview and rollback.
Full audit trail, exportable on demand
Who changed what, when, from where. The report your auditor asks for takes minutes.
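The change detection above hinges on filtering volatile lines before diffing. A minimal sketch of that idea, using Python's difflib and two made-up volatile patterns, looks like this (illustrative, not rConfig's diff engine):

```python
# Sketch of noise-filtered config diffing: drop volatile lines
# before diffing so only real change shows.
import difflib
import re

VOLATILE = [
    re.compile(r"^! Last configuration change"),   # IOS timestamp comment
    re.compile(r"^ntp clock-period"),
]

def stable_lines(config: str) -> list[str]:
    return [
        line for line in config.splitlines()
        if not any(p.match(line) for p in VOLATILE)
    ]

def config_diff(old: str, new: str) -> str:
    return "\n".join(
        difflib.unified_diff(
            stable_lines(old), stable_lines(new),
            fromfile="last-known-good", tofile="current", lineterm="",
        )
    )

old = "! Last configuration change at 02:11\nhostname core-01\n"
new = "! Last configuration change at 03:40\nhostname core-01\nip ssh version 2\n"
print(config_diff(old, new))   # timestamp churn is gone; only the ssh line shows
```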
Deploys downstream of Infrahub. Consumes the GraphQL API. Stays out of the graph.
rConfig does not install an Infrahub plugin. It does not modify your schema. It does not register custom node types or push data back into the graph. All interaction happens over the GraphQL API through a token that you scope yourself, against the branch of your choosing (the default is main, but pinning to a release branch or a long lived integration branch is supported). That matters because Infrahub’s data lineage discipline is part of what makes the platform trustworthy, and rConfig is built to leave that lineage exactly as you found it. Infrahub’s own source code is available at github.com/opsmill/infrahub under the Apache 2.0 licence.
What rConfig does is what Infrahub was deliberately not built to do: capture configuration from the devices your schema describes, track how it changes, and prove what it looked like when someone needs to know. Self hosted on prem, VM or bare metal, or in your private cloud. Compatible with Infrahub 1.0 and later, Community or Enterprise. Your network team is running it inside 30 minutes on V8 Pro or Vector.
Your data layer. Your operational reality. Your audit trail.
From Infrahub schema to compliance ready archive, in a single sprint.
A European managed services provider runs Infrahub for around 3,800 devices across 47 customer environments. The platform team had spent the better part of a year migrating off a fragile NetBox plus Excel inventory and into an Infrahub schema with Generators producing per customer configuration intent. The migration was the easy part. The auditor turned up six weeks before NIS2 came into force and asked for a configuration archive going back twelve months, per device, with diffs. The team had Infrahub branches showing intent changes and they had Ansible playbooks pushing rendered Artifacts. What they did not have was an answer to “what was actually running on the firewall on 14 March”. rConfig was pointed at their Infrahub Device node type on a feature branch, scoped to a single customer environment, and synced the same afternoon. Within the sprint they had backup, diff and a CIS compliance policy running against every device Infrahub described. The audit went through. The platform team stayed focused on schema work.
Infrahub gave the platform team the data layer. rConfig gave the auditor the evidence. Neither team had to compromise on how they work.
NIS2 and DORA evidence, anchored to the data layer your automation already trusts.
If you’re in scope for NIS2 or DORA, regulators want two things. They want to know the network was configured to match intent, and they want proof of what was actually running on every device when something changed. Infrahub holds the validated intent and the relationships your engineers signed off on through proposed changes. That covers half the audit. It does not cover the other half.
rConfig’s archive holds the operational configuration history for the same device estate, captured directly from the running devices, on a schedule that matches your audit window. Together you get the full picture: intent in Infrahub, reality in rConfig, drift detection policies flagging the gap. When the auditor asks what the ACL on the core router looked like on 14 March, the evidence is already in rConfig’s archive and the intent that produced it is still in the Infrahub branch where it was approved.
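The drift check itself is conceptually simple: compare the intent rendered from Infrahub with the configuration rConfig captured, and flag lines present in one but not the other. A toy sketch, with both inputs as hypothetical strings:

```python
# Toy drift check: intent rendered from Infrahub versus the running
# configuration captured in the archive. Inputs here are hypothetical.

def drift(intended: str, running: str) -> dict[str, set[str]]:
    want = set(intended.splitlines())
    have = set(running.splitlines())
    return {
        "missing_from_device": want - have,
        "unexpected_on_device": have - want,
    }

intended = "hostname core-01\nip ssh version 2\n"
running = "hostname core-01\n"          # captured 14 March, say
print(drift(intended, running))          # flags the missing ssh line
```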
rConfig compared to a custom GraphQL backup script, Nornir plus NAPALM, or extending the Infrahub schema yourself.
Teams with NetDevOps maturity can absolutely build configuration backup themselves on top of Infrahub. The data is right there in the GraphQL API. There are three common DIY paths, and each one is a legitimate engineering choice given the right team.
The GraphQL plus Python pipeline is the most common starting point. Query the schema for a list of devices, hand the result to a Python script, dump the output to a Git repo and call it a backup. It works. It is also one schema rename away from breaking quietly, and it does not produce the audit ready archive, the readable diff, the RBAC, or the compliance reporting your auditor will ask for.
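For the avoidance of doubt about what that pipeline looks like, here is the whole thing in miniature. Every name below is a placeholder, and fetch_running_config is a stub standing in for a real SSH or NETCONF collector such as Netmiko.

```python
# The DIY pipeline in miniature. Everything below is a placeholder.
import pathlib
import requests

def fetch_running_config(hostname: str) -> str:
    """Stub for a real SSH/NETCONF collector (e.g. Netmiko)."""
    return f"hostname {hostname}\n"

QUERY = "query { InfraDevice { edges { node { name { value } } } } }"

resp = requests.post(
    "https://infrahub.example.com/graphql",
    json={"query": QUERY},
    headers={"X-INFRAHUB-KEY": "changeme"},
    timeout=30,
)
resp.raise_for_status()

# If someone renames InfraDevice in the schema, `edges` is silently empty:
# the cron job still exits 0 and the archive quietly stops growing.
data = resp.json().get("data") or {}
edges = data.get("InfraDevice", {}).get("edges", [])

backup_dir = pathlib.Path("backups")
backup_dir.mkdir(exist_ok=True)
for edge in edges:
    name = edge["node"]["name"]["value"]
    (backup_dir / f"{name}.cfg").write_text(fetch_running_config(name))
# Still missing: readable diffs, RBAC, compliance reports, retention.
```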
Nornir plus NAPALM driven by Infrahub data is the more polished version of the same idea. Run a Nornir inventory plugin against Infrahub, drive NAPALM with the result, store outputs in Git or object storage. A platform engineer with NetDevOps experience can stand this up in a sprint and keep it running for years. It still leaves you to build the archive UI, the diff viewer, the compliance policy engine and the SAML SSO yourself, or to glue together separate tools that do each piece. The maintenance bill rises with the device count.
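A sketch of that stack, assuming the nornir-infrahub inventory plugin; the plugin and option names below are illustrative, so check the plugin's current documentation before copying anything.

```python
# Sketch of the Nornir + NAPALM path. The inventory plugin name and its
# options are assumptions based on the nornir-infrahub project.
from nornir import InitNornir
from nornir_napalm.plugins.tasks import napalm_get

nr = InitNornir(
    inventory={
        "plugin": "InfrahubInventory",           # from nornir-infrahub
        "options": {
            "address": "https://infrahub.example.com",
            "token": "changeme",
            "branch": "main",
        },
    }
)

# Pull the running config from every inventoried device via NAPALM.
result = nr.run(task=napalm_get, getters=["config"])
for host, task_results in result.items():
    config = task_results[0].result["config"]["running"]
    # Storage, diff UI, compliance engine and SSO: still your problem.
    print(host, len(config), "bytes captured")
```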
Extending the Infrahub schema with a ConfigBackup node type is the third path. Some teams have done it. OpsMill has not asked anyone to do it, and we would not suggest it either. The graph is for validated data and lineage. Versioned blobs of running config are the wrong shape for it, and you end up either bloating the graph database or maintaining external storage with thin pointers in Infrahub. Neither result is great.
rConfig's positioning is simple. If you have a dedicated platform engineer who can own a custom stack, DIY is reasonable. If you need compliance evidence in days rather than quarters, audit ready RBAC, SAML SSO, and commercial support with SLAs, rConfig gets you there without rewiring how you work.
The Infrahub integration, at a glance.
Everything your architecture review will ask about. Share this section with your security team before the demo.
- rConfig version: 8.0 or later (V8 Pro, Enterprise or Vector)
- Infrahub versions supported: 1.0 and later
- Infrahub editions supported: Community and Enterprise
- Authentication: Infrahub API token, scoped to read
- Transport: HTTPS, with optional support for internally signed certificates
- API surface used: GraphQL (primary), REST for select operations where exposed
- Schema mapping: tag based, applied per node type, configurable per sync
- Branch targeting: defaults to main, with optional pinning to any named branch
- Filterable fields: any attribute or relationship in the customer's Infrahub schema
- Sync triggers: manual, scheduled, or CLI
- CLI command: php artisan rconfig:integration-infrahub
- Single device CLI: php artisan rconfig:integration-infrahub-single-device {device_id}
- Data flow: one way, Infrahub to rConfig
- Idempotency: yes, reruns produce identical state
- Logging: every sync logged with user, timestamp, device count, errors
- Infrahub footprint: zero (no plugin, no schema changes, no write back)
- High availability: supported (point rConfig at the Infrahub HA endpoint)
- Infrahub documentation: docs.infrahub.app
- rConfig documentation: docs.rconfig.com/integrations/device-sync-overview
See the sync running against your own Infrahub deployment.
Book 30 minutes with an rConfig engineer. We point the integration at a slice of your real Infrahub instance, back up a handful of your actual devices, and run a compliance report against a policy that matters to your team. No generic demo. No slide deck. No sales gate.
We respect your data layer. We do not modify it, write to it, or compete with it. We consume it cleanly and produce the configuration evidence Infrahub was never trying to produce.