Category: stackstorm

StackStorm: Using Configuration Variables in Python Runner Actions

What do you do when you don't want a caller of the API to know the connection credentials for a service behind StackStorm, but also don't want to set those variables in the environment when calling the action from a rule?

You should use pack configuration.

Set Up Your Pack Config Schema

You need to define a pack configuration schema first.

hostname:
    description: "ip or hostname of fortimail server"
    type: "string"
    required: true
username:
    description: "name of user logging in to fortimail"
    type: "string"
    secret: true
    required: true
password:
    description: "password of user logging in to fortimail"
    type: "string"
    required: true
    secret: true
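A matching config file for this schema would then look something like the following sketch. Note that the parameter names (hostname, username, password) and all values here are illustrative assumptions, since the schema's key names are not shown above:

```yaml
---
hostname: "fortimail.example.com"
username: "st2-service-user"
password: "not-a-real-password"
```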


Configure your Config Interactively

Instead of setting the credentials in a file (IaC), you can configure a pack interactively with:

st2 pack config cloudflare

The generated file will be created at:


Using Pack Configuration in Actions

You can use config_context to access the pack config variables:

name: "send_sms"
runner_type: "python-script"
description: "This sends an SMS using twilio."
enabled: true
entry_point: ""
parameters:
    from_number:
        type: "string"
        description: "Your twilio 'from' number in E.164 format. Example +14151234567."
        required: false
        position: 0
        default: "{{config_context.from_number}}"

Get Pack Config from a Python Runner

If you want to get a pack config value from within the Python runner you can use:

Within def run(self, variable1, variable2):

if self.config.get('hosts', None):
    _hosts = self.config['hosts']
else:
    raise ValueError("Need to define 'hosts' in either action or in config")

Provided you are extending from st2common.runners.base_action.Action.
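A minimal sketch of such an action, assuming a hypothetical hosts parameter. The Action base class normally comes from st2common.runners.base_action; a stand-in is defined here so the snippet runs on its own:

```python
# In a real pack you would instead use:
#   from st2common.runners.base_action import Action
# A minimal stand-in is defined here so the sketch is self-contained.
class Action(object):
    def __init__(self, config=None):
        self.config = config or {}


class PingHostsAction(Action):
    def run(self, hosts=None):
        # Prefer the action parameter, then fall back to the pack config.
        if hosts is None:
            hosts = self.config.get('hosts', None)
        if not hosts:
            raise ValueError("Need to define 'hosts' in either action or in config")
        return hosts


# Example: the value comes from the pack config when not passed to run().
action = PingHostsAction(config={'hosts': ['10.0.0.1', '10.0.0.2']})
print(action.run())  # -> ['10.0.0.1', '10.0.0.2']
```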

Add a Simple Custom Action and Action Alias to StackStorm

In this post I will demonstrate adding a ping action to StackStorm, then making that action available from chatops (Slack) using an action alias.

The Scenario

On a team of network engineers, certain IP addresses often need to be checked for accessibility, which is done with the ping command. If an engineer would like to give visibility into the status of that ping, she can run the command via her chat application; everyone in the same room can then see the result.

Create the Ping Action

Let's check what this action does. SSH into your StackStorm instance and run:

[cent@st2 packs]$ ping -c 4
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=59 time=3.15 ms
64 bytes from icmp_seq=2 ttl=59 time=2.99 ms
64 bytes from icmp_seq=3 ttl=59 time=2.69 ms
64 bytes from icmp_seq=4 ttl=59 time=2.73 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 2.696/2.894/3.157/0.201 ms

This is what we want to do, but instead of typing the command manually we want StackStorm to run the action.

Let us use the core.local action to run the command:

[cent@st2 packs]$ st2 run core.local -- ping -c 4
id: 5cdeb8bb52364c6d5cb1d90f
status: succeeded
  cmd: ping -c 4
  failed: false
  return_code: 0
  stderr: ''
  stdout: 'PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=59 time=3.17 ms
    64 bytes from icmp_seq=2 ttl=59 time=3.09 ms
    64 bytes from icmp_seq=3 ttl=59 time=2.89 ms
    64 bytes from icmp_seq=4 ttl=59 time=2.75 ms

    --- ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3003ms
    rtt min/avg/max/mdev = 2.759/2.981/3.171/0.167 ms'
  succeeded: true

So now StackStorm has run the action.
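What makes this more useful than a raw shell session is that the result comes back as a structured dict, so downstream automation can branch on fields instead of scraping terminal output. A small illustrative sketch:

```python
# The shape of a core.local execution result, as in the output above.
result = {
    'cmd': 'ping -c 4',
    'failed': False,
    'return_code': 0,
    'stderr': '',
    'stdout': 'PING ... 0% packet loss ...',
    'succeeded': True,
}

# An automation step can branch on the structured fields.
if result['succeeded'] and result['return_code'] == 0:
    print('host reachable')
```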

Now create a custom pack folder in /opt/stackstorm/packs, create a folder within it called actions, and then create a file called ping.yaml inside that. The file should contain:

description: Action that executes the Linux ping command on the localhost.
runner_type: "local-shell-cmd"
enabled: true
entry_point: ''
name: ping
parameters:
    ip:
        description: The ip address to ping
        type: string
        required: true
    cmd:
        description: Arbitrary Linux command to be executed on the local host.
        required: true
        type: string
        default: 'ping -c4 {{ip}}'
        immutable: true
    kwarg_op:
        immutable: true
    sudo:
        default: false
        immutable: true
    env:
        immutable: true


We are running a local shell command. I'm not 100% sure about the other parameters, or whether they are even needed, but the cmd parameter is needed, and it defaults to ping -c4 {{ip}}, where we interpolate the ip.

Now we reload StackStorm to pick up the action: st2ctl reload

Then we run the action: st2 run my_pack.ping ip=


Create the Action Alias

Now we are going to create the alias so that the ping can be called from slack.

In /opt/stackstorm/packs/my_pack/aliases/ping.yaml add:


name: "ping"
pack: "my_pack"
action_ref: "my_pack.ping"
description: "Execute a local ping."
formats:
  - "ping {{ ip }}"

Now you need to reload StackStorm: sudo st2ctl reload.
The action should now be available on slack (if you have set chatops up).

Next thing: the alias will not show up in help if you have not restarted the chatops service, so let us do that now: sudo systemctl restart st2chatops

When you do !help, your alias will now be there:


So let's run it (remember you can also @botname to run the command): @mybot ping


So that is a good demo.

Configure Kafka and StackStorm Config

So you have installed StackStorm and the kafka pack, but now you need to configure the pack to consume and produce messages on Kafka.


The first thing you should check out is the StackStorm documentation on how packs are configured.

After you have read that, you will know that the pack's repo contains a schema file called config.schema.yaml that describes what the configuration should look like.

The kafka pack repo also includes an example configuration, kafka.yaml.example, which looks like this:

# Used by the produce action
client_id: st2-kafka-producer

# Used for the kafka sensor
message_sensor:
  client_id: testclientID
  group_id: testgroupID
  topics:
    - test

# Used by the gcp kafka sensor
gcp_message_sensor:
  client_id: st2gcpclient
  group_id: st2gcpgroup
  topics:
    - gcp_stackdriver_topic

Now put this file in /opt/stackstorm/configs and call it: kafka.yaml

No Auth?

I'm a bit staggered that there are no authentication fields. I suppose I will find out a bit later.

Now let us test whether we are getting any messages.

Configuration files are not read dynamically at run-time. Instead, they must be registered, and values are then loaded into the StackStorm DB.

You can register the new config with:

st2ctl reload --register-configs


Remember, values you don't want hard-coded in the config file can be referenced from the datastore with:

private_key_path: "{{st2kv.system.private_key_path}}"

and stored with:

st2 key set private_key_path "/home/myuser/.ssh/my_private_rsa_key"

Let's test it

To test producing a message use:

st2 run kafka.produce topic=test message='StackStorm meets Apache Kafka'

For me, it failed with:

id: 5cd3cffe9dc6d63a924c0635
status: failed
  message: StackStorm meets Apache Kafka
  topic: test
  exit_code: 1
  result: None
  stderr: 'No handlers could be found for logger "kafka.conn"'

So there was no handler for the logger, which I assumed was because no logger was being set up in the current mode. I switched debug mode on. That didn't work...

So I ended up editing the file and adding logging to the stackstorm-kafka pack as per this StackOverflow answer.

I added this to the top of /opt/stackstorm/packs/kafka/actions/
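For reference, the usual fix from that StackOverflow answer is to configure a root log handler before the kafka client emits anything, along these lines:

```python
import logging

# "No handlers could be found for logger" means the root logger has no
# handler attached; basicConfig() installs a default StreamHandler.
logging.basicConfig(level=logging.DEBUG)
```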


and saw that it was querying for brokers and topics and getting a response; however, the host in the response was kafka rather than an IP address, so the client didn't know where to connect.
So the Kafka server needs to be configured so that the broker configs have the following in:

advertised.listeners = ''

So I fixed it in the interim by adding a kafka entry to /etc/hosts.

Still an error

The kafka pack is old and still gave an error when returning the response, so the action failed (even though the message was sent to Kafka).

Testing Receiving Triggers

Let's test that StackStorm can receive messages from Kafka, run criteria on them, and send another message out.
How about we say hello and have StackStorm say hello back to us?

We have the trigger and action parts; we just need to bind them together using a StackStorm rule.

List available triggers:

st2 trigger list

Get info about the trigger we want to use and take note of the payload parameters:


st2 trigger get kafka.new_message

The rule I created:

[cent@st2 rules]$ cat repeat_message.yaml 

name: "repeat_kafka_messages"
pack: "custom"
description: "Repeat the previous message received"
enabled: true

trigger:                               # required
    type: "kafka.new_message"

criteria:                              # optional
    trigger.topic:
        type: "equals"
        pattern: "test"
    trigger.message:
        type: "iequals"
        pattern: "hello"

action:                                # required
    ref: "kafka.produce"
    parameters:
        topic: "test"                        # optional
        message: "topic: {{ trigger.topic }}\nmessage: {{ trigger.message }}\npartition: {{ trigger.partition }}\nkey: {{ trigger.key }}\noffset: {{ trigger.offset }}"
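For clarity, the matching semantics the criteria rely on (equals is an exact match, iequals ignores case) can be sketched in plain Python. This is an illustration of the logic only, not StackStorm's actual implementation, and it assumes the criteria are keyed on trigger.topic and trigger.message:

```python
def matches(criteria, payload):
    """Return True only if every criterion matches its payload field.

    criteria maps a field name (e.g. 'trigger.message') to a dict with
    'type' ('equals' or 'iequals') and 'pattern'.
    """
    for field, crit in criteria.items():
        # Strip the 'trigger.' prefix to index into the payload dict.
        value = payload.get(field.split('.', 1)[1])
        if crit['type'] == 'equals':
            ok = value == crit['pattern']
        elif crit['type'] == 'iequals':
            ok = isinstance(value, str) and value.lower() == crit['pattern'].lower()
        else:
            raise ValueError('unknown criteria type: %s' % crit['type'])
        if not ok:
            return False
    return True


criteria = {
    'trigger.topic':   {'type': 'equals',  'pattern': 'test'},
    'trigger.message': {'type': 'iequals', 'pattern': 'hello'},
}
print(matches(criteria, {'topic': 'test', 'message': 'Hello'}))  # -> True
print(matches(criteria, {'topic': 'test', 'message': 'bye'}))    # -> False
```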

Now to create the rule:

st2 rule create repeat_message.yaml

then reload the rules:

st2ctl reload --register-rules

I tested the rule and it worked well, but the sensor wasn't actually picking up messages when they were added to the topic. Now the struggle of finding out why...


Traces track all triggers, actions and rules...this will help me debug.

If I check the traces with st2 trace list, there are no trigger instances in the last hour, even though I just sent some messages into the test topic.


I had better luck looking through the source code and finding a log line:

self._logger.debug('[KafkaMessageSensor]: Initializing consumer ...')

but it logs at the DEBUG level, while the default level in /etc/st2/logging.sensorcontainer.conf is INFO.

So I changed it to DEBUG:

[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler, auditHandler



Then I saw in the logs that the payload validation was failing:

2019-05-10 05:42:03,589 140451038122704 WARNING trigger_dispatcher [-] Failed to validate payload ({'topic': 'test', 'message': 'hello', 'partition': 0, 'key': None, 'offset': 52}) for trigger "kafka.new_message": 'hello' is not of type u'object', 'null'

Failed validating u'type' in schema[u'properties'][u'message']:
    {u'description': u'Captured message. JSON-serialized messages are automatically parsed',
     u'type': [u'object', 'null']}

On instance[u'message']:
2019-05-10 05:42:03,590 140451038122704 WARNING trigger_dispatcher [-] Trigger payload validation failed and validation is enabled, not dispatching a trigger "kafka.new_message" ({'topic': 'test', 'message': 'hello', 'partition': 0, 'key': None, 'offset': 52}): 'hello' is not of type u'object', 'null'

Failed validating u'type' in schema[u'properties'][u'message']:
    {u'description': u'Captured message. JSON-serialized messages are automatically parsed',
     u'type': [u'object', 'null']}

On instance[u'message']:

which I raised on their GitHub.
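The failure itself is easy to reproduce outside StackStorm: the trigger schema declares message as type object or null (JSON-serialized messages are parsed into objects), so a plain string like 'hello' is rejected. A rough sketch of that check, not the jsonschema-based validator StackStorm actually uses:

```python
def validate_message_type(payload):
    """Mimic the schema rule: 'message' must be an object (dict) or null."""
    message = payload.get('message')
    return message is None or isinstance(message, dict)


# A bare string fails validation, just as in the warning above.
print(validate_message_type({'topic': 'test', 'message': 'hello'}))             # -> False
# A JSON object (or a null message) passes.
print(validate_message_type({'topic': 'test', 'message': {'greeting': 'hi'}}))  # -> True
```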