Category: ansible

AWX: ERROR! No inventory was parsed, please check your configuration and options.

Ensure that your playbook sets hosts: all, as AWX manages the host definitions for you.

As per AWX: Troubleshooting no inventory parsed
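In practice that just means the play header looks something like this (the play and task names are illustrative):

---
- name: example play
  hosts: all
  tasks:
    - name: check connectivity
      ping: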

Well, that didn't work; I still got the same issue.

I found 4 relevant issues on GitHub about the No inventory was parsed error, although none of them brought any clarity.

I then found this Red Hat article (behind a login wall) that tells you to set the inventory plugin as an environment variable:

Solution

It looks like this has to do with the custom venv.

I did not follow the documentation properly: the venv needs a few base dependencies that I did not install, namely psutil and Ansible itself. Einstein! What a tool I am.

Anyway, activate the virtualenv and run:

pip install psutil
pip install -U "ansible == 2.9.5"
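If the activate step is unclear, it is something along these lines (assuming the venv lives at /opt/awx_venvs/provisioning, the path that shows up in the error further down):

source /opt/awx_venvs/provisioning/bin/activate   # drop into the custom venv
pip install psutil
pip install -U "ansible==2.9.5"
deactivate                                        # leave the venv when done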

Ensure the venv directory has the correct permissions:

sudo chmod 0755 /opt/awx_venvs/

After doing that and running my playbook, I still got an error:

No such file or directory: b'/opt/awx_venvs/provisioning/bin/ansible-playbook'

So I looked at the awx custom venv docs.

I followed the steps but received the same error. The virtualenv created on the host was not correctly set up within the container.

So I needed to look at the container-based custom venv documentation.

Wait, scratch that, it looks too tough.

Custom VirtualEnv as Vars

I looked at updating the venv via additional vars, as suggested by ryanpetrollo on this GitHub issue.

So I added my venv_vars.yaml:
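It looked something like this at that point (a sketch reconstructed from the final working version shown further down, minus the extra path variable):

---
custom_venvs:
  - name: provisioning
    python: python3
    python_ansible_version: 2.9.5
    python_modules:
      - jmespath
      - netaddr
      - pyvcloud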

And ran the install again, but the venvs were not created.

So I stopped the containers:

sudo docker stop awx_task awx_web awx_memcached awx_redis awx_postgres

and removed them:

sudo docker rm awx_task awx_web awx_memcached awx_redis awx_postgres

then ran:

ansible-playbook -i inventory install.yml --extra-vars "@venv_vars.yaml"

Still the venv was not created.

Looking through the code, you need to be deploying via the Kubernetes method for this to work.

So I used Kubernetes and it worked (it's also better this way, as requirements are kept in code).

I had to add an extra variable to venv_vars.yaml:

---

custom_venvs_path: "/opt/awx_venvs"

custom_venvs:
  - name: provisioning
    python: python3
    python_ansible_version: 2.9.5
    python_modules:
      - jmespath
      - netaddr
      - pyvcloud


Walkthrough of Creating and Running Plays on AWX

AWX Ad Hoc Test

The first step, before you do anything on AWX, is to get your toes wet and run a simple ad hoc command locally.

To do this, go to Inventories -> +

Call it localhost. Next you have to actually add hosts or groups to this inventory.

To do this, edit the inventory, go to Hosts -> +, and set the hostname to localhost. It is very important that you add in the host variables:

ansible_connection: local

If you do not add that local connection, it will use SSH instead and won't be able to connect.

awx-inventory-for-localhost

Now go back to the Hosts page and select the host you want to run an ad hoc command on. Then select Run Commands.

awx-ad-hoc-run-commands-on-a-host

Then use the ping module, which connects to a host, checks that there is a usable Python, and then returns pong.
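For comparison, the plain-Ansible equivalent of this ad hoc run from a shell would be roughly:

ansible localhost -m ping -c local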

awx-localhost-ping

The output of the command should be:

awx-successful-local-ping

But Can You ICMP Ping 1.1.1.1?

Depending on the way you deployed, this might not work. So try it out using the command module and running ping -c 4 1.1.1.1.
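Again for comparison, the shell equivalent would be something like:

ansible localhost -m command -a "ping -c 4 1.1.1.1" -c local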

awx-ping-cloudflare

If you are running on Kubernetes and the container running the task is not allowed to open a raw ICMP socket, you will get:

localhost | FAILED | rc=2 >>
ping: socket: Operation not permitted
non-zero return code

Then, if you run it with privilege escalation, you get:

{
    "module_stdout": "",
    "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1,
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "_ansible_no_log": false,
    "changed": false
}

Running this same command without privilege escalation on an older version of AWX deployed with docker-compose, you get a success:

awx-successful-ping

However, running on k8s is actually preferred. You might not have access to some of the standard tools the docker deploy has, but you will hardly need them, I think.

Walkthrough of Setting up your playbook to Run

There is a bit of terminology that is AWX (Ansible Tower) specific and differs a little from pure Ansible. We will cross that bridge when we get there, though.

The first thing to do is ensure your playbooks are in a git repo.

So what a repo is called in AWX is a project.
A project is a logical collection of Ansible playbooks, although sane people keep these in git repos.

But wait, to access that repo you need to set up a Source Control credential first.

So the flow is:

  1. Create a Credential for Source Control
  2. Create a Project
    ...

1. Set up Credentials (for GitLab source control)

First create an SSH key pair for AWX using ssh-keygen -t rsa -b 4096 -C "your_email@example.com" and store it as awx_key, for example.
Then copy the private key.
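If it helps, the full key generation looks roughly like this (the awx_key filename is just the example name from above):

ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/awx_key
cat ~/.ssh/awx_key        # private key: paste this into the AWX credential
cat ~/.ssh/awx_key.pub    # public key: this goes into gitlab as a deploy key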

Click Credentials in the side menu -> + and set the credential type to Source Control. Then add your private key.

awx-gitlab-scm-privatekey

In GitLab you need to go to your repo -> Settings -> Repository -> Deploy Keys (you can use deploy tokens if you do not want to use SSH, only HTTPS).
Ensure the key is enabled.

2. Create Project

Go to Projects -> +

Set the SCM details and select the GitLab SCM credentials.
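Since we are authenticating with an SSH deploy key, the SCM URL should be the SSH form of the repo, something like (the group and repo names are hypothetical):

git@gitlab.com:your-group/your-playbooks.git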

Save, and the repo should eventually be pulled, shown by a green light.

awx-create-a-project

3. Create a Job Template

You can only create a job template if you have a project. A job template basically links up the inventory (variables), credentials and playbook you are going to run.

Go to Templates -> + -> Job Templates

awx-job-template

4. Run your Job

Run the job template by pressing the Launch button.

Extra: Using a Survey

Surveys set extra variables in a user-friendly question-and-answer way.

  1. Click Create Survey on the Job Template

awx-add-survey

Now you can present questions to the user and the answers will be filled in as extra vars.
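For example, a survey question whose answer variable is named target_environment (a hypothetical name) would end up in the job's extra variables roughly like this:

---
target_environment: staging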


Installing Ansible AWX (Tower) on Kubernetes 1.17 (Rancher)

AWX release versions don't map that well to Ansible Tower's, so when reading the Ansible Tower user and admin docs the versions don't line up.

Anyway, AWX bumps versions like wildfire, so we are now on AWX version 11. This is probably outdated already, so check the latest AWX releases on GitHub.

The documentation goes through the basics of installing AWX.

Prerequisites

You need the following on your local machine:

  • git
  • python3.8
  • ansible 2.8+
  • docker python module
  • node 10.x and npm 6.x (but try without this first)

Resource Specs

Then you will need a Kubernetes cluster with the following resources available to workers (not control plane nodes):

  • 4GB memory
  • 2 CPU cores
  • 20GB of space

External DB

I will use the external DB method as it feels a bit safer, and I'm not that great with k8s persistent volumes and StatefulSets.

So for that you need to install PostgreSQL 9.6+ on an accessible VM.

Follow the installation instructions on the PostgreSQL website for PostgreSQL 9.6 or greater on your server.
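On a RHEL/CentOS style host (which the postgresql-12 paths later in this section suggest), the install is roughly the following, assuming the PGDG repo has already been added:

sudo yum install -y postgresql12-server
sudo /usr/pgsql-12/bin/postgresql-12-setup initdb
sudo systemctl enable --now postgresql-12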

You will need to create a database and a remote awx user:

sudo su - postgres
createdb awx
psql
create user awx with encrypted password 'awxpass';
grant all privileges on database awx to awx;

Then allow remote access. Find where your config files are:

psql -c 'SHOW config_file'
sudo vim /var/lib/pgsql/12/data/postgresql.conf

set:

listen_addresses = '*' 

then:

sudo vim /var/lib/pgsql/12/data/pg_hba.conf

set:

host all all 0.0.0.0/0 md5

Restart:

sudo systemctl restart postgresql-12

Lastly, allow the PostgreSQL port on the firewall.
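Assuming firewalld, that would be something like:

sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --reload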

Steps

After setting up the above

  1. Clone the repo locally

    git clone git@github.com:ansible/awx.git

or

    wget https://github.com/ansible/awx/archive/11.0.0.tar.gz
    tar -xf 11.0.0.tar.gz
  2. Edit awx/installer/inventory and provide values for kubernetes_context and kubernetes_namespace (see the sketch after this list)

  3. Uncomment and set your external postgres details

    pg_hostname=postgresql
    pg_username=awx
    pg_password=awxpass
    pg_database=awx
    pg_port=5432
  4. Change the admin username and admin password in the inventory

  5. Then run the playbook

    ansible-playbook -i inventory install.yml
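For reference, the inventory edits from steps 2 to 4 end up looking something like this sketch (the context, namespace, and credentials are illustrative; check the variable names against your copy of the inventory file):

kubernetes_context=my-rancher-cluster
kubernetes_namespace=awx
admin_user=admin
admin_password=changeme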

Post install you should be able to see the pods:

    $ kubectl get pods --namespace awx
    NAME                       READY   STATUS    RESTARTS   AGE
    ansible-tower-management   1/1     Running   0          5m39s
    awx-7586cffcfb-q2lhl       4/4     Running   0          5m59s

View the available services:

kubectl get svc --namespace awx

View the ingress:

kubectl get ing --namespace awx

The tricky part is actually accessing the box. I still need to work on this.

So it needs a public IP (or at least an accessible IP), then point your DNS (either local in /etc/hosts or proper DNS) at it.

i.e. mysite.example.com -> { rancher_cluster_ip }
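For a quick local test that can just be an /etc/hosts entry (the IP is illustrative):

203.0.113.10   mysite.example.com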

Then create a load balancer in Rancher or edit the ingress to use the DNS name.
