Category: auto-remediation

Walkthrough of Creating and Running Plays on AWX

AWX Ad Hoc Test

The first step before you do anything on AWX is to get your toes wet and run a simple ad hoc command locally.

To do this go to Inventories -> +

Call it localhost. Next you have to actually add hosts or groups to this inventory.

To do this edit the inventory and go to hosts -> + and then put the hostname as localhost. It is very important that you add in the host variables:

ansible_connection: local

If you do not add that local connection, AWX will use ssh instead and won't be able to connect.
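
For comparison, in a plain Ansible YAML inventory outside of AWX the same host entry would look something like this (a minimal sketch, the file name is up to you):

all:
  hosts:
    localhost:
      # Run tasks directly on the control node instead of over ssh
      ansible_connection: local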

awx-inventory-for-localhost

Now go back to the hosts page and select the host you want to run an ad hoc command on. Then select Run Commands.

awx-ad-hoc-run-commands-on-a-host

Then use the ping module, which connects to a host, checks that there is a usable python, and then returns pong.

awx-localhost-ping

The output of the command should be:

awx-successful-local-ping
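
For reference, the same check as a minimal playbook (a sketch, assuming the localhost inventory from above):

---
- name: Verify connectivity and a usable python
  hosts: localhost
  tasks:
    # Returns pong when the host is reachable and python is usable
    - ansible.builtin.ping: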

But Can You ICMP Ping 1.1.1.1?

Depending on the way you deployed AWX, this might not work. So try it out using the command module with ping -c 4 1.1.1.1.

awx-ping-cloudflare
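
In playbook form, the same test would be something like this sketch (the command module simply runs the ping binary on the target, so it must be present there):

---
- name: ICMP ping cloudflare
  hosts: localhost
  tasks:
    - ansible.builtin.command: ping -c 4 1.1.1.1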

If you are running on kubernetes and the container running the task is not allowed to open the raw socket that ping needs (it runs unprivileged), you will get:

localhost | FAILED | rc=2 >>
ping: socket: Operation not permitted
non-zero return code

Then, if you run it with privilege escalation, you get:

{
    "module_stdout": "",
    "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1,
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "_ansible_no_log": false,
    "changed": false
}

Running this same command without privilege escalation on an older version of AWX deployed with docker-compose, you get a success:

awx-successful-ping

However, running on k8s is actually preferred. You might not have access to some standard tools that you would on the docker deploy, but you will hardly need them - I think.

Walkthrough of Setting up your Playbook to Run

There is a bit of terminology that is AWX (Ansible Tower) specific and differs a bit from pure ansible. We will cross that bridge when we get there though.

The first thing to do is ensure your playbooks are in a git repo.

In AWX a repo is called a project.
A project is a logical collection of ansible playbooks, although sane people keep these in git repos.

But wait, to access that repo you need to set up a Source Control credential first.

So the flow is:

  1. Create a Credential for Source Control
  2. Create a Project
    ...

1. Setup Credentials (for gitlab source control)

First create an ssh key pair for awx using ssh-keygen -t rsa -b 4096 -C "your_email@example.com" and store it as awx_key, for example.
Then copy the private key.

Click Credentials on the side -> + and set the credential type to Source Control. Then add your private key.

awx-gitlab-scm-privatekey

In gitlab you need to go to your Repo -> Settings -> Repository -> Deploy Keys (you can use Deploy Tokens if you do not want to use ssh - only https).
Add the public key there and ensure it is enabled.

2. Create Project

Go to Projects -> +

Set the SCM details and select the gitlab SCM credentials.

Save, and the repo should eventually be pulled - shown by a green light.

awx-create-a-project

3. Create a Job Template

You can only create a job template if you have a project. A job template basically links up the inventory (variables), credentials and playbook you are going to run.

Go to Templates -> + -> Job Templates

awx-job-template
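
If the repo does not have a playbook to point the template at yet, something minimal like this is enough for a first run (a sketch; the play name and debug message are placeholders):

---
- name: Hello from AWX
  hosts: all
  gather_facts: false
  tasks:
    # Prints a message for every host in the selected inventory
    - ansible.builtin.debug:
        msg: "Hello from {{ inventory_hostname }}"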

4. Run your Job

Run the job template by pressing the Launch button.

Extra: Using a Survey

Surveys set extra variables in a user-friendly question-and-answer way.

  1. Click Create Survey on the Job Template

awx-add-survey

Now you can add questions for the user, and the answers will be filled out as extra vars.
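
For example, if a survey question stores its answer in a variable called app_version (a hypothetical name), the playbook consumes it like any other extra var:

---
- name: Use a survey answer
  hosts: all
  tasks:
    # app_version is populated from the survey answer at launch time
    - ansible.builtin.debug:
        msg: "Deploying version {{ app_version }}"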


Stackstorm: Using Configuration Variables in Python Runner Actions

What do you do when you don't want a caller of the api to know the connection credentials of a service used by stackstorm, but also don't want to have to set these variables in the environment when calling the action from a rule?

You should use pack configuration.

Setup your Pack Config Schema

You need to set a pack configuration schema first, in config.schema.yaml in the root of the pack.


host:
    description: "ip or hostname of fortimail server"
    type: "string"
    required: true
user:
    description: "name of user logging in to fortimail"
    type: "string"
    secret: true
    required: true
password:
    description: "password of user logging in to fortimail"
    type: "string"
    required: true
    secret: true


Configure your Config Interactively

Instead of setting the credentials in a file (IaC) you can configure a pack interactively with:

st2 pack config cloudflare

The generated file will be created at:

/opt/stackstorm/configs/<pack>.yaml
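
For the schema above, the resulting config file would look something like this (assuming the pack is called fortimail; all values are placeholders):

---
# /opt/stackstorm/configs/fortimail.yaml
host: "fortimail.example.com"
user: "apiuser"
password: "supersecret"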

Using Pack Configuration in Actions

You can use config_context to access the pack config variables:


---
name: "send_sms"
runner_type: "python-script"
description: "This sends an SMS using twilio."
enabled: true
entry_point: "send_sms.py"
parameters:
    from_number:
        type: "string"
        description: "Your twilio 'from' number in E.164 format. Example +14151234567."
        required: false
        position: 0
        default: "{{config_context.from_number}}"

Get Pack Config from a Python Runner

If you want to get a pack config value from the python runner you can use:

Within the run method of your action, provided you are extending from st2common.runners.base_action.Action (the class and parameter names here are illustrative):


from st2common.runners.base_action import Action

class SendSMSAction(Action):
    def run(self, variable1, variable2):
        if self.config.get('hosts'):  # values come from the pack config file
            _hosts = self.config['hosts']
        else:
            raise ValueError("Need to define 'hosts' in either action or in config")

Introduction to Alerta: Open Source Aggregated Alerts

There are a number of platforms available these days to assist operations in terms of dealing with alerts, namely Pagerduty, VictorOps and OpsGenie. These are unfortunately pay-for tools.

These tools are known as monitoring aggregation tools.

I was looking through the integrations of elastalert and found that there is an integration for alerta.io, so I checked the website and it seemed to check all the boxes of monitoring aggregation.

I used the docker compose way of setting it up quickly (along the lines of the sketch below), but if you want to set it up properly then follow the alerta.io deployment guide.
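
A minimal compose file in the spirit of the one in the alerta docker repo looks something like this (the image tag, port and environment values are assumptions - check the repo for the current versions):

version: '2.1'
services:
  web:
    image: alerta/alerta-web
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      # Point alerta at the mongo container below
      - DATABASE_URL=mongodb://db:27017/monitoring
      - ADMIN_USERS=admin@alerta.io
  db:
    image: mongo
    volumes:
      - ./data/mongodb:/data/db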

Update some config:


docker exec -u root -it alerta_web_1 /bin/bash
apt update
apt install vim
# Edit the config in /app/alertad.conf
# Restart the container

Add the housekeeping cron job:


echo "* * * * * root /venv/bin/alerta housekeeping" >/etc/cron.d/alerta

The default timeout period for an alert is 86400 seconds, or one day.

What popular alerting and monitoring tools does alerta.io integrate with? Check out the alerta plugins.