NetBox integration for Ansible

Keeping static Ansible inventories in Git works, until it doesn’t: hosts get renamed, IPs change, VMs come and go, and suddenly your repository history is 80% inventory churn. NetBox solves this by becoming the single-source-of-truth (SSoT) for devices, VMs, sites, and metadata. With the official Ansible inventory plugin, you can pull your inventory dynamically from NetBox and keep playbooks focused on automation, not bookkeeping.

This post is for engineers running NetBox already (or planning to) and operating Ansible across multiple sites. The goal is simple: fewer changes in the Ansible repository, and better compliance and documentation because all devices and VMs live in one authoritative place.

Context and requirements

Assumptions about the environment

  • NetBox is deployed in Docker on a central monitoring host.
  • NetBox is accessible via HTTPS behind an NGINX reverse proxy.
  • NetBox is reachable from each site where Ansible runs (routing + firewall permits it).

Prerequisites

  • A working NetBox instance with API access enabled.
  • The Ansible NetBox collection installed (on each Ansible execution host).
  • A NetBox API token with sufficient read permissions (and ideally limited scope).
  • A consistent tagging / metadata strategy in NetBox (roles, sites, tags, tenants, etc.).

Solution overview

The architecture is straightforward:

  • NetBox holds the authoritative inventory and metadata (devices, VMs, IPs, tags, custom fields, config context).
  • Ansible uses the official netbox.netbox.nb_inventory inventory plugin to fetch hosts at runtime.
  • Hosts are grouped automatically (for example by device roles, sites, tags), enabling clean targeting in playbooks.
  • Filtering is applied so only relevant systems show up in Ansible (for example status=active and ansible_managed=true).

The operational benefit: instead of editing inventory.ini or hosts.yaml in Git for every change, you update NetBox. Ansible picks it up on the next run. Git stays calm. Your future self stays calmer.

Step by step implementation

1) Install the NetBox collection

On the Ansible host (per site), install the official collection:

ansible-galaxy collection install netbox.netbox

If you pin dependencies (recommended for repeatability), do it via requirements.yml in your automation repository.
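A minimal requirements.yml could look like the following — the version pin is illustrative, pick a release you have actually validated:

```yaml
# requirements.yml — pin the NetBox collection for repeatable installs
collections:
  - name: netbox.netbox
    version: ">=3.0.0,<4.0.0"   # illustrative pin; use a tested release
```

Install it with ansible-galaxy collection install -r requirements.yml so every execution host gets the same collection version.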

2) Create the inventory plugin configuration

Create a file like inventory/netbox.yml:

plugin: netbox.netbox.nb_inventory

# Connection information to NetBox
api_endpoint: https://netbox.example.com
token: "<TOKEN>" # v1 Token required

# Plugin configuration
config_context: true

group_by:
  - device_roles
  - sites
  - tags

# Limit hosts to search
query_filters:
  - status: active
  - cf_ansible_managed: true # (Custom field "ansible_managed")

What this does:

  • config_context: true pulls in NetBox config context data as host vars (useful, but keep it tidy).
  • group_by creates Ansible groups automatically based on NetBox metadata.
  • query_filters ensures only hosts you actually manage appear in inventory.

3) Test inventory generation

Run:

ansible-inventory -i inventory/netbox.yml --graph

And inspect hostvars for a specific host:

ansible-inventory -i inventory/netbox.yml --host <hostname>

If this works, you now have dynamic inventory. If it fails, jump to the troubleshooting section below.

Caveats (things that will trip you up at least once)

V1 token required

In the described setup, the plugin requires a v1 NetBox token. V2 tokens cannot access the required information and typically fail with authentication or permission errors.

Operational takeaway: document which token type you need, and do not rotate it blindly during a maintenance window unless you enjoy debugging inventory at 02:00.

Field titles are not always intuitive

NetBox field names and the plugin's variable mappings can be non-obvious at first (especially around custom fields and nested data). Plan a short discovery phase:

  • Pull a host with ansible-inventory --host ...
  • Observe what variables you actually get
  • Standardize names and patterns you want to rely on in playbooks
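For the discovery phase, dumping the full inventory as JSON and filtering with jq is the fastest way to see which variables the plugin actually produces. The hostname below is a placeholder:

```shell
# Dump the complete dynamic inventory as JSON
ansible-inventory -i inventory/netbox.yml --list > inventory.json

# List all variables the plugin sets for one host (replace the hostname)
jq '._meta.hostvars["<hostname>"] | keys' inventory.json
```

The _meta.hostvars structure is standard ansible-inventory --list output, so this works with any inventory plugin, not just NetBox.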

Security considerations

Use environment variables for token and URL

Do not hardcode secrets into inventory/netbox.yml. Prefer environment variables, which also integrate cleanly with GitLab CI/CD.

Two practical approaches:

  • Approach A (recommended): Export variables in the runner / host environment and reference them in your inventory config.
  • Approach B: Inject them at runtime in CI (masked variables) and keep the config file generic.

Exact syntax depends on how you handle inventory files (a static file versus one rendered at deploy time). In GitLab CI, a job can render inventory/netbox.yml from a template using masked variables before the play runs.
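For Approach A, the nb_inventory plugin can read its endpoint and token from the environment when they are not set in the file. The variable names below are the ones the plugin documents (NETBOX_API and NETBOX_TOKEN) — verify against your collection version; the values are placeholders:

```shell
# Set connection details in the environment instead of the inventory file.
# The netbox.netbox.nb_inventory plugin falls back to these variables.
export NETBOX_API="https://netbox.example.com"
export NETBOX_TOKEN="0123456789abcdef0123456789abcdef01234567"  # placeholder token
```

With these set, inventory/netbox.yml can omit api_endpoint and token entirely, so the file in Git contains no secrets.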

Restrict access to NetBox

Treat NetBox as sensitive infrastructure metadata (because it is). Practical controls:

  • Restrict inbound access to HTTPS (reverse proxy) to trusted subnets (for example site monitoring networks).
  • Use least-privilege API tokens (read-only where possible).
  • Log API access at the reverse proxy level (NGINX) for auditability.
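A sketch of the corresponding NGINX controls, assuming a standard reverse-proxy setup — the subnet, log path, and upstream name are placeholders:

```nginx
# Allow the NetBox API only from trusted automation subnets
location /api/ {
    allow 10.10.0.0/16;                          # placeholder: site automation networks
    deny  all;
    access_log /var/log/nginx/netbox_api.log;    # audit trail for API access
    proxy_pass http://netbox_upstream;           # placeholder upstream
}
```

Scoping the restriction to /api/ keeps the web UI reachable under whatever broader policy you already apply.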

Integrating NetBox inventory into playbooks

Once hosts are grouped by NetBox metadata, playbooks become simpler and more expressive.

Using tags for targeting devices and VMs

A clean pattern is to use NetBox tags to represent automation intent:

  • ansible_managed=true (custom field) to opt-in systems
  • tags like linux, windows, edge, db, monitoring, backup-client
  • platform or tenant metadata for higher-level grouping

Then in Ansible:

  • target groups created by group_by: tags
  • or target groups created by device_roles / sites

This avoids hand-maintained inventory groups in Git, and shifts grouping decisions to the system that already knows the infrastructure.
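With group_by: tags, the plugin derives group names from tag slugs (by default with a category prefix such as tags_linux; exact naming depends on the plugin version and the group_names_raw option). A playbook can then target those groups directly — the group names here are illustrative:

```yaml
# site.yml — target hosts via NetBox-derived groups (names are illustrative)
- name: Baseline for all Linux-tagged systems
  hosts: tags_linux
  roles:
    - common

- name: Patch monitoring systems at one site only
  hosts: tags_monitoring:&sites_frankfurt   # intersection of two NetBox-derived groups
  serial: 1
  roles:
    - patching
```

The :& intersection pattern is standard Ansible host targeting, so combining NetBox dimensions (tag plus site, role plus tenant) needs no extra inventory logic.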

Grouping by tenants, platforms, tags, sites

NetBox is good at modelling:

  • tenants (who owns it),
  • sites (where it lives),
  • roles (what it does),
  • tags (what you want to do to it),
  • platforms (how you manage it).

Using these dimensions for Ansible grouping is one of the biggest wins: you get consistent targeting without re-implementing inventory logic in YAML.

Operations: backups (NetBox is now critical)

If NetBox becomes your SSoT, it becomes a production dependency. That means backups are not optional.

NetBox backup basics

At minimum, back up the PostgreSQL database that stores NetBox data. The simplest reliable baseline is:

  • periodic PostgreSQL dumps
  • tested restores (a backup that cannot be restored is just expensive storage)

If you run NetBox in Docker, ensure your backup approach fits the containerized deployment model (backup job location, credentials, retention, and offsite copy).
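A minimal baseline for the Docker deployment might look like this — the service name, database user, and database name are the netbox-docker defaults, so verify them against your compose file:

```shell
# Dump the NetBox PostgreSQL database from the compose stack
# ("postgres" service, "netbox" user/db are netbox-docker defaults — verify yours)
docker compose exec -T postgres pg_dump -U netbox netbox | gzip > "netbox-$(date +%F).sql.gz"

# Periodically verify the dump is restorable on a scratch database, e.g.:
# gunzip -c netbox-YYYY-MM-DD.sql.gz | psql -U netbox netbox_restore_test
```

Add retention and an offsite copy on top of this, and schedule the restore test, not just the dump.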

Common pitfalls and troubleshooting

  • Inventory is empty: verify query_filters and confirm status=active and cf_ansible_managed=true are actually set on objects in NetBox.
  • Auth failures: confirm you are using a v1 token and that the reverse proxy forwards required headers and does not block API paths.
  • Unexpected grouping: run ansible-inventory --graph and inspect how tags/roles/sites are represented; adjust group_by accordingly.
  • Too many variables from config context: keep config context structured and minimal; do not dump everything into context just because you can.
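When the inventory comes back empty or authentication fails, querying the API directly through the reverse proxy isolates the problem from Ansible. The endpoint is a placeholder and the token comes from the environment:

```shell
# Query the devices endpoint with the same filters the plugin uses;
# a nonzero count confirms both the token and the filters are correct
curl -s -H "Authorization: Token $NETBOX_TOKEN" \
  "https://netbox.example.com/api/dcim/devices/?status=active&cf_ansible_managed=true" \
  | jq '.count'
```

If this returns results but ansible-inventory stays empty, the problem is in the plugin configuration; if it fails here too, look at the token, the proxy, or the NetBox data itself.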

Conclusion

Using NetBox as a single source of truth combined with Ansible dynamic inventory gives you a practical, scalable way to manage infrastructure across sites. You reduce inventory-related churn in Git, improve compliance posture by documenting devices and VMs in one place, and gain flexibility by driving automation from consistent metadata (roles, sites, tags, tenants, platforms).

It is not magic, but it is close to the kind of boring, predictable automation that operations teams tend to like.
