Salt REST API
Important

In the configuration examples below, for simplicity, I've used the auto external authentication and disabled SSL for the Salt API. This setup is highly discouraged in production.
Using the Master configuration file under examples/salt_api/master:

/etc/salt/master:

```yaml
pillar_roots:
  base:
    - /srv/salt/pillar

file_roots:
  base:
    - /srv/salt/extmods

rest_cherrypy:
  port: 8080
  disable_ssl: true

external_auth:
  auto:
    '*':
      - '@runner'
```
The pillar_roots option points to /srv/salt/pillar, so to be able to use this example, either create a symlink to the pillar directory in this example, or copy the files.
For example, if you just cloned this repository:

```bash
$ mkdir -p /srv/salt/pillar
$ git clone git@github.com:mirceaulinic/salt-sproxy.git
$ cp salt-sproxy/examples/salt_api/master /etc/salt/master
$ cp salt-sproxy/examples/salt_api/pillar/*.sls /srv/salt/pillar/
```
The contents of the Pillar files:

/srv/salt/pillar/top.sls:

```yaml
base:
  minion1:
    - dummy
  juniper-router:
    - junos
```

/srv/salt/pillar/dummy.sls:

```yaml
proxy:
  proxytype: dummy
```

/srv/salt/pillar/junos.sls:

```yaml
proxy:
  proxytype: napalm
  driver: junos
  host: juniper.salt-sproxy.digitalocean.cloud.tesuto.com
  username: test
  password: t35t1234
```
Note

The top.sls, dummy.sls, and junos.sls files are a combination of the previous examples, 101 and napalm, which is going to allow us to execute against both the dummy device and a real network device.
In the example Master configuration file above, there's also a section for the file_roots. As documented in The Proxy Runner section of the documentation, you are going to reference the proxy Runner, e.g.

```bash
$ mkdir -p /srv/salt/extmods/_runners
$ cp salt-sproxy/salt_sproxy/_runners/proxy.py /srv/salt/extmods/_runners/
```
Or symlink:

```bash
$ ln -s /path/to/git/clone/salt-sproxy/salt_sproxy /srv/salt/extmods
```
With the rest_cherrypy section, the Salt API will be listening to HTTP requests on port 8080, with SSL disabled (not recommended in production):

```yaml
rest_cherrypy:
  port: 8080
  disable_ssl: true
```
Another part of the configuration is the external authentication:

```yaml
external_auth:
  auto:
    '*':
      - '@runner'
```
This grants access to anyone to execute any Runner (again, don’t do this in production).
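For production, a safer pattern is to authenticate against a real backend such as PAM and grant Runner access only to specific users. A minimal sketch, assuming a system user named salt-api (the username is illustrative, not part of this example):

```yaml
external_auth:
  pam:
    salt-api:
      - '@runner'
```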
With this setup, we can start the Salt Master and the Salt API (running in background):

```bash
$ salt-master -d
$ salt-api -d
```
To verify that the REST API is ready, execute:

```bash
$ curl -i localhost:8080
HTTP/1.1 200 OK
Content-Type: application/json
Server: CherryPy/18.1.1
Date: Wed, 05 Jun 2019 07:58:32 GMT
Allow: GET, HEAD, POST
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: GET, POST
Access-Control-Allow-Credentials: true
Vary: Accept-Encoding
Content-Length: 146

{"return": "Welcome", "clients": ["local", "local_async", "local_batch", "local_subset", "runner", "runner_async", "ssh", "wheel", "wheel_async"]}
```
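This readiness check can also be scripted. A minimal sketch in Python, parsing the literal JSON body shown above to confirm that the runner client (the one salt-sproxy's proxy Runner is invoked through) is exposed:

```python
import json

# The JSON body returned by `curl localhost:8080` above, copied literally.
welcome = (
    '{"return": "Welcome", "clients": ["local", "local_async", '
    '"local_batch", "local_subset", "runner", "runner_async", '
    '"ssh", "wheel", "wheel_async"]}'
)

def runner_enabled(body: str) -> bool:
    """Check that the 'runner' client is exposed by the Salt API."""
    return "runner" in json.loads(body).get("clients", [])

print(runner_enabled(welcome))  # → True
```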
Now we can go ahead and execute the CLI command from example 101, by making an HTTP request:

```bash
$ curl -sS localhost:8080/run -H 'Accept: application/x-yaml' \
    -d eauth='auto' \
    -d username='mircea' \
    -d password='pass' \
    -d client='runner' \
    -d fun='proxy.execute' \
    -d tgt='minion1' \
    -d function='test.ping' \
    -d sync=True
return:
- minion1: true
```
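The same request can be issued from Python with only the standard library. A minimal sketch mirroring the curl invocation above; the actual urlopen call is left commented out since it assumes the Salt API from this example is running locally:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Same form fields as the curl call above; under the `auto` eauth backend
# the username/password values are not checked, but must be present.
payload = {
    "eauth": "auto",
    "username": "mircea",
    "password": "pass",
    "client": "runner",
    "fun": "proxy.execute",
    "tgt": "minion1",
    "function": "test.ping",
    "sync": True,
}

req = Request(
    "http://localhost:8080/run",
    data=urlencode(payload).encode(),  # form-encoded body, like curl -d
    headers={"Accept": "application/x-yaml"},
)
# Uncomment once the Salt Master and Salt API above are running:
# print(urlopen(req).read().decode())
```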
Notice that the eauth field in this case is auto, as this is what we've configured in the external_auth on the Master.
Similarly, you can now execute the Salt functions from the NAPALM example, against a network device, by making an HTTP request:

```bash
$ curl -sS localhost:8080/run -H 'Accept: application/x-yaml' \
    -d eauth='auto' \
    -d username='mircea' \
    -d password='pass' \
    -d client='runner' \
    -d fun='proxy.execute' \
    -d tgt='juniper-router' \
    -d function='net.arp' \
    -d sync=True
return:
- juniper-router:
    comment: ''
    out:
    - age: 891.0
      interface: fxp0.0
      ip: 10.96.0.1
      mac: 92:99:00:0A:00:00
    - age: 1001.0
      interface: fxp0.0
      ip: 10.96.0.13
      mac: 92:99:00:0A:00:00
    - age: 902.0
      interface: em1.0
      ip: 128.0.0.16
      mac: 02:42:AC:12:00:02
    result: true
```
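Once decoded (for instance by requesting Accept: application/json rather than x-yaml), the net.arp return is plain structured data and easy to post-process. A minimal sketch filtering the ARP table above by interface, with the entries copied from the output for illustration:

```python
# The `out` list from the juniper-router return above, as decoded data.
arp_table = [
    {"age": 891.0, "interface": "fxp0.0", "ip": "10.96.0.1", "mac": "92:99:00:0A:00:00"},
    {"age": 1001.0, "interface": "fxp0.0", "ip": "10.96.0.13", "mac": "92:99:00:0A:00:00"},
    {"age": 902.0, "interface": "em1.0", "ip": "128.0.0.16", "mac": "02:42:AC:12:00:02"},
]

def ips_on_interface(table, interface):
    """Return the IP addresses resolved on a given interface."""
    return [entry["ip"] for entry in table if entry["interface"] == interface]

print(ips_on_interface(arp_table, "fxp0.0"))  # → ['10.96.0.1', '10.96.0.13']
```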