Inventory plugin (DCNE-302) #721

Open
wants to merge 14 commits into master
164 changes: 164 additions & 0 deletions plugins/inventory/aci.py
@@ -0,0 +1,164 @@
# Copyright (c) 2025 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function

__metaclass__ = type

DOCUMENTATION = r"""
name: aci
short_description: Cisco ACI inventory plugin
extends_documentation_fragment:
  - cisco.aci.aci
  - constructed
description:
  - Queries details from the APIC.
  - Gets details on all spines and leafs behind the controller.
  - Requires a YAML configuration file whose name ends with 'cisco_aci.(yml|yaml)'.
"""

EXAMPLES = """
---
plugin: cisco.aci.aci
host: 192.168.1.90
username: admin
Collaborator

I was doing some local testing with your code, and the normal arguments from the aci collection seem to work as expected.

I do, however, have an additional question regarding plugin usage and the arguments exposed/used for authentication. For our normal modules we also allow users to specify the HTTPAPI connection plugin to limit the number of login requests sent. This lets a user set some additional authentication arguments in the inventory, so they do not need to be specified in each task. See the explanation in the repository: https://github.com/CiscoDevNet/ansible-aci/blob/master/docs/optimizing.md#using-the-aci-httpapi-plugin.

Is there a way to leverage the HTTPAPI plugin for this inventory plugin as well? If not, is there a way to expose these arguments as valid inputs for this plugin, for instance by updating the aliases from aci_argument_spec in this plugin?
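
For reference, the optimizing guide linked above drives the HTTPAPI connection through inventory variables roughly like this (a sketch based on that document, not on this PR; the address and credentials are placeholders):

# hypothetical host variables for an APIC host, following docs/optimizing.md
ansible_connection: ansible.netcommon.httpapi
ansible_network_os: cisco.aci.aci
ansible_host: 192.168.1.90
ansible_user: admin
ansible_password: SomeSecretPassword
ansible_httpapi_use_ssl: true
ansible_httpapi_validate_certs: false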

Author

I am just looking at this documentation to understand how it works.

I do see that I probably need to rename this plugin though, since I see this:

ansible_network_os=cisco.aci.aci

Do you have an opinion on what this inventory plugin should be called? cisco.aci.aci_inv?

Collaborator (@akinross, Feb 4, 2025)

I personally prefer cisco.aci.aci_inventory or cisco.aci.inventory if it needs to be changed.

Author

ok - I will use cisco.aci.aci_inventory

I did some quick investigation, and it looks like ansible_connection is not used by the inventory plugin itself. But we can generate an inventory that sets it for the devices found:

(venv-3.9) [root@okd-master-0 aci-inv]# cat connection_cisco_aci.yml 
---
plugin: cisco.aci.aci_inventory
host: 192.168.1.90
username: admin
#password: OR you can use env variable ACI_PASSWORD
validate_certs: false

compose:
  ansible_connection: "'ansible.netcommon.httpapi'"
  ansible_network_os: "'cisco.aci.aci'"
  ansible_host: "'192.168.1.90'"

keyed_groups:
  - prefix: role
    key: role
(venv-3.9) [root@okd-master-0 aci-inv]# ansible-inventory -i connection_cisco_aci.yml --list -v
No config file found; using defaults
Using inventory plugin 'ansible_collections.cisco.aci.plugins.inventory.aci' to process inventory source '/root/aci-inv/connection_cisco_aci.yml'
{
    "_meta": {
        "hostvars": {
            "TEP-1-101": {
                "address": "10.0.216.64",
                "ansible_connection": "ansible.netcommon.httpapi",
                "ansible_host": "192.168.1.90",
                "ansible_network_os": "cisco.aci.aci",
                "bootstrapState": "done",
                "childAction": "",
                "clusterTimeDiff": "-5",
                "configIssues": "",
                "controlPlaneMTU": "9000",
                "currentTime": "2025-02-04T15:44:36.025+00:00",
                "dn": "topology/pod-1/node-101/sys",
                "enforceSubnetCheck": "no",
                "etepAddr": "0.0.0.0",
                "fabricDomain": "ACI Fabric1",
                "fabricId": "1",
                "fabricMAC": "00:22:BD:F8:19:FF",
                "id": "101",
                "inbMgmtAddr": "0.0.0.0",
                "inbMgmtAddr6": "::",
                "inbMgmtAddr6Mask": "0",
                "inbMgmtAddrMask": "0",
                "inbMgmtGateway": "0.0.0.0",
                "inbMgmtGateway6": "::",
                "lastRebootTime": "2025-01-23T23:00:38.276+00:00",
                "lastResetReason": "unknown",
                "lcOwn": "local",
                "modTs": "2025-01-30T14:50:54.106+00:00",
                "mode": "unspecified",
                "monPolDn": "uni/fabric/monfab-default",
                "name": "TEP-1-101",
                "nameAlias": "",
                "nodeType": "unspecified",
                "oobMgmtAddr": "0.0.0.0",
                "oobMgmtAddr6": "::",
                "oobMgmtAddr6Mask": "0",
                "oobMgmtAddrMask": "0",
                "oobMgmtGateway": "0.0.0.0",
                "oobMgmtGateway6": "::",
                "podId": "1",
                "remoteNetworkId": "0",
                "remoteNode": "no",
                "rlOperPodId": "1",
                "rlRoutableMode": "no",
                "rldirectMode": "no",
                "role": "leaf",
                "serial": "TEP-1-101",
                "serverType": "unspecified",
                "siteId": "0",
                "state": "in-service",
                "status": "",
                "systemUpTime": "11:16:43:58.000",
                "tepPool": "10.0.0.0/16",
                "unicastXrEpLearnDisable": "no",
                "version": "simsw-5.2(5c)",
                "virtualMode": "no"
            },
            "TEP-1-103": {
                "address": "10.0.216.65",
                "ansible_connection": "ansible.netcommon.httpapi",
                "ansible_host": "192.168.1.90",
                "ansible_network_os": "cisco.aci.aci",
                "bootstrapState": "done",
                "childAction": "",
                "clusterTimeDiff": "-8",
                "configIssues": "",
                "controlPlaneMTU": "9000",
                "currentTime": "2025-02-04T15:44:36.028+00:00",
                "dn": "topology/pod-1/node-103/sys",
                "enforceSubnetCheck": "no",
                "etepAddr": "0.0.0.0",
                "fabricDomain": "ACI Fabric1",
                "fabricId": "1",
                "fabricMAC": "00:22:BD:F8:19:FF",
                "id": "103",
                "inbMgmtAddr": "0.0.0.0",
                "inbMgmtAddr6": "::",
                "inbMgmtAddr6Mask": "0",
                "inbMgmtAddrMask": "0",
                "inbMgmtGateway": "0.0.0.0",
                "inbMgmtGateway6": "::",
                "lastRebootTime": "2025-01-23T23:00:38.435+00:00",
                "lastResetReason": "unknown",
                "lcOwn": "local",
                "modTs": "2025-01-30T14:55:59.013+00:00",
                "mode": "unspecified",
                "monPolDn": "uni/fabric/monfab-default",
                "name": "TEP-1-103",
                "nameAlias": "",
                "nodeType": "unspecified",
                "oobMgmtAddr": "0.0.0.0",
                "oobMgmtAddr6": "::",
                "oobMgmtAddr6Mask": "0",
                "oobMgmtAddrMask": "0",
                "oobMgmtGateway": "0.0.0.0",
                "oobMgmtGateway6": "::",
                "podId": "1",
                "remoteNetworkId": "0",
                "remoteNode": "no",
                "rlOperPodId": "1",
                "rlRoutableMode": "yes",
                "rldirectMode": "yes",
                "role": "spine",
                "serial": "TEP-1-103",
                "serverType": "unspecified",
                "siteId": "0",
                "state": "in-service",
                "status": "",
                "systemUpTime": "11:16:43:58.000",
                "tepPool": "10.0.0.0/16",
                "unicastXrEpLearnDisable": "no",
                "version": "simsw-5.2(5c)",
                "virtualMode": "no"
            },
            "apic1": {
                "address": "10.0.0.1",
                "ansible_connection": "ansible.netcommon.httpapi",
                "ansible_host": "192.168.1.90",
                "ansible_network_os": "cisco.aci.aci",
                "bootstrapState": "none",
                "childAction": "",
                "clusterTimeDiff": "0",
                "configIssues": "",
                "controlPlaneMTU": "9000",
                "currentTime": "2025-02-04T15:44:36.021+00:00",
                "dn": "topology/pod-1/node-1/sys",
                "enforceSubnetCheck": "no",
                "etepAddr": "0.0.0.0",
                "fabricDomain": "ACI Fabric1",
                "fabricId": "1",
                "fabricMAC": "00:22:BD:F8:19:FF",
                "id": "1",
                "inbMgmtAddr": "192.168.11.1",
                "inbMgmtAddr6": "fc00::1",
                "inbMgmtAddr6Mask": "0",
                "inbMgmtAddrMask": "24",
                "inbMgmtGateway": "192.168.11.254",
                "inbMgmtGateway6": "::",
                "lastRebootTime": "2025-01-23T23:00:38.028+00:00",
                "lastResetReason": "unknown",
                "lcOwn": "local",
                "modTs": "2025-01-23T23:04:21.394+00:00",
                "mode": "unspecified",
                "monPolDn": "uni/fabric/monfab-default",
                "name": "apic1",
                "nameAlias": "",
                "nodeType": "unspecified",
                "oobMgmtAddr": "192.168.1.90",
                "oobMgmtAddr6": "fe80::200:ff:fe0:0",
                "oobMgmtAddr6Mask": "0",
                "oobMgmtAddrMask": "24",
                "oobMgmtGateway": "192.168.1.3",
                "oobMgmtGateway6": "2001:420:28e:2020:acc:68ff:fe28:b540",
                "podId": "1",
                "remoteNetworkId": "0",
                "remoteNode": "no",
                "rlOperPodId": "0",
                "rlRoutableMode": "no",
                "rldirectMode": "no",
                "role": "controller",
                "serial": "TEP-1-1",
                "serverType": "unspecified",
                "siteId": "0",
                "state": "in-service",
                "status": "",
                "systemUpTime": "11:16:43:58.000",
                "tepPool": "0.0.0.0",
                "unicastXrEpLearnDisable": "no",
                "version": "5.2(5c)",
                "virtualMode": "no"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "role_controller",
            "role_leaf",
            "role_spine"
        ]
    },
    "role_controller": {
        "hosts": [
            "apic1"
        ]
    },
    "role_leaf": {
        "hosts": [
            "TEP-1-101"
        ]
    },
    "role_spine": {
        "hosts": [
            "TEP-1-103"
        ]
    }
}

Collaborator (@akinross, Feb 4, 2025)

ok thanks, should we add compose to the example?

Author

I am happy to. Is there an additional argument that should be set to make it a fully working example, or is the example I provided enough?

Collaborator

For me the provided example was clear, so I would suggest adding two examples with a comment above each: 1. a minimal version, 2. a compose example. I think that would make it easier for users to understand compose.
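
As an illustration of that suggestion, the two documented examples could roughly look like this (a sketch only; the values are copied from the configuration shown earlier in this thread):

# 1. Minimal example
plugin: cisco.aci.aci
host: 192.168.1.90
username: admin
#password: OR you can use env variable ACI_PASSWORD
validate_certs: false

# 2. Compose example: connect discovered hosts to the APIC over HTTPAPI and group them by role
plugin: cisco.aci.aci
host: 192.168.1.90
username: admin
validate_certs: false

compose:
  ansible_connection: "'ansible.netcommon.httpapi'"
  ansible_network_os: "'cisco.aci.aci'"
  ansible_host: "'192.168.1.90'"

keyed_groups:
  - prefix: role
    key: role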

password: PASSWORD
validate_certs: false

keyed_groups:
  - prefix: role
    key: role
"""

import atexit
import time
import tempfile
import shutil
import typing as t
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible_collections.cisco.aci.plugins.module_utils.aci import (
    ACIModule,
    aci_argument_spec,
)
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.common.text.converters import to_native
from ansible.errors import AnsibleError
from ansible.utils.display import Display

display = Display()


class MockAnsibleModule(object):
    def __init__(self, argument_spec, parameters):
        """Mock AnsibleModule

        This is needed in order to use the aci methods which assume to be working
        with a module only.
        """

        self._socket_path = None
        self._debug = False
        self._diff = False
        self._tmpdir = None
        self.check_mode = False
        self.params = dict()

        validator = ArgumentSpecValidator(argument_spec)
        result = validator.validate(parameters)

        if result.error_messages:
            display.vvv("Validation failed: {0}".format(", ".join(result.error_messages)))

        self.params = result.validated_parameters

    @property
    def tmpdir(self):
        # if _ansible_tmpdir was not set and we have a remote_tmp,
        # the module needs to create it and clean it up once finished.
        # otherwise we create our own module tmp dir from the system defaults
        if self._tmpdir is None:
            basefile = "ansible-moduletmp-%s-" % time.time()
            try:
                tmpdir = tempfile.mkdtemp(prefix=basefile)
            except (OSError, IOError) as e:
                self.fail_json(msg="Failed to create remote module tmp path with prefix %s: %s" % (basefile, to_native(e)))
            atexit.register(shutil.rmtree, tmpdir)
            self._tmpdir = tmpdir

        return self._tmpdir

    def warn(self, warning):
        display.vvv(warning)

    def fail_json(self, msg, **kwargs) -> t.NoReturn:
        raise AnsibleError(msg)


class InventoryModule(BaseInventoryPlugin, Constructable):

    NAME = "cisco.aci.aci"

    def verify_file(self, path):
        """return true/false if this is possibly a valid file for this plugin to consume"""
        valid = False
        if super(InventoryModule, self).verify_file(path):
            # base class verifies that file exists and is readable by current user
            if path.endswith(("cisco_aci.yaml", "cisco_aci.yml")):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):

        # call base method to ensure properties are available for use with other helper methods
        super(InventoryModule, self).parse(inventory, loader, path, cache)

        # this method will parse 'common format' inventory sources and
        # update any options declared in DOCUMENTATION as needed
        config = self._read_config_data(path)
        config.update(state="query")

        argument_spec = aci_argument_spec()
        argument_spec.update(
            id=dict(type="int", aliases=["controller", "node"]),
            state=dict(type="str", default="query", choices=["query"]),
            keyed_groups=dict(type="list"),
            plugin=dict(type="str"),
        )

        module = MockAnsibleModule(
            argument_spec=argument_spec,
            parameters=config,
        )

        aci = ACIModule(module)
        aci.construct_url(root_class=dict(aci_class="topSystem"))

        aci.get_existing()

        # parse data and create inventory objects:
        for device in aci.existing:
            attributes = device.get("topSystem", {}).get("attributes")
            if attributes.get("name"):
                self.add_host(attributes.get("name"), attributes)

    def add_host(self, hostname, host_vars):
        self.inventory.add_host(hostname, group="all")

        if host_vars.get("oobMgmtAddr", "0.0.0.0") != "0.0.0.0":
            self.inventory.set_variable(hostname, "ansible_host", host_vars.get("oobMgmtAddr"))
        elif host_vars.get("inbMgmtAddr", "0.0.0.0") != "0.0.0.0":
            self.inventory.set_variable(hostname, "ansible_host", host_vars.get("inbMgmtAddr"))
        else:
            self.inventory.set_variable(hostname, "ansible_host", host_vars.get("address"))

        for var_name, var_value in host_vars.items():
            self.inventory.set_variable(hostname, var_name, var_value)

        strict = self.get_option("strict")

        # Add variables created by the user's Jinja2 expressions to the host
        self._set_composite_vars(self.get_option("compose"), host_vars, hostname, strict=True)

        # Create user-defined groups using variables and Jinja2 conditionals
        self._add_host_to_composed_groups(self.get_option("groups"), host_vars, hostname, strict=strict)
        self._add_host_to_keyed_groups(self.get_option("keyed_groups"), host_vars, hostname, strict=strict)
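
As a closing usage sketch (the file name, address and credentials below are placeholders), a configuration file matching the verify_file naming rule could be consumed like this:

# fabric.cisco_aci.yml -- the file name must end with cisco_aci.yml or cisco_aci.yaml
plugin: cisco.aci.aci
host: 192.168.1.90
username: admin
#password: OR you can use env variable ACI_PASSWORD
validate_certs: false
keyed_groups:
  - prefix: role
    key: role

# then inspect the generated inventory, for example:
#   ansible-inventory -i fabric.cisco_aci.yml --graph
#   ansible-inventory -i fabric.cisco_aci.yml --list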