Devstack fixes; configurable API address for VMs; documentation refresh.

Change-Id: I1438d8c954f76f15afae33c92473b846d40ebe3d
Signed-off-by: Pino de Candia <giuseppe.decandia@gmail.com>
Pino de Candia 2018-03-09 10:47:29 -06:00 committed by Pino de Candia
parent fe3e41f34e
commit 3a5a9fbe03
11 changed files with 464 additions and 85 deletions

INSTALLATION.rst Normal file

@ -0,0 +1,167 @@
===============
Installing Tatu
===============
Devstack
--------
So far (March 2018) I've been developing Tatu on my devstack instance. The
devstack plugin is mostly working. See the README under tatu/devstack.
Installation Tools
------------------
No work has been done to automate Tatu installation for production. We plan
to provide Ansible and Kolla installers, but this is just a vague intent at the
moment (March 2018).
Manual Installation
-------------------
Tatu's devstack plugin (tatu/devstack/plugin.sh) is a good guide to manual
installation. The steps in this document may become stale, but they are given
in more detail and with some motivation.
Installing Tatu's daemons
There are two daemons: the API daemon and the notifications daemon.
Get the code
On your controller node, in a development directory:
# git clone https://github.com/openstack/tatu
# cd tatu
# python setup.py develop
Modify Tatu's cloud-init script
WARNING: user-cloud-config has only been tested on Fedora-Cloud-Base-25-1.3.x86_64
tatu/files/user-cloud-config is a cloud-init script that needs to run once on every VM. It:
* extracts Tatu's dynamic vendor data from ConfigDrive;
* finds the one-time token and uses it in the call to Tatu's /noauth/hostcerts API;
* does the user account and SSH configuration;
* finally, sets up a cron job to periodically refresh the revoked-keys file from Tatu.
If you're using my branch of Dragonflow (https://github.com/pinodeca/dragonflow/tree/tatu) then a VM can reach the Tatu API at http://169.254.169.254/noauth via the Metadata Proxy. However, if you're using any other Neutron driver, you'll need to modify the cloud-init script. Replace:
url=http://169.254.169.254/….
in tatu/files/user-cloud-config in two places, with:
url=http://<Tatu API's VM-accessible address>/….
Also make sure any VMs you deploy are in tenants and networks that have SNAT enabled (or give every VM a Floating IP).
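For example, if the Tatu API will be reachable from VMs at 203.0.113.10:18322 (a placeholder address), the substitution can be scripted as follows:
# sed -i 's|http://169.254.169.254|http://203.0.113.10:18322|g' files/user-cloud-config  # placeholder address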
Prepare the cloud-init script as static vendor data...
How does Tatu's cloud-init script get into the VMs you deploy? There are two ways.
The first and recommended way (and what I did in the video demo) is to use static vendor data. First, convert the (possibly modified) cloud-init script to vendor data by running the following command from the tatu directory:
# scripts/cloud-config-to-vendor-data files/user-cloud-config > /etc/nova/tatu_static_vd.json
And now modify /etc/nova/nova-cpu.conf as follows:
[api]
vendordata_providers = StaticJSON,DynamicJSON
vendordata_jsonfile_path = /etc/nova/tatu_static_vd.json
...or pass it as user-data for each VM launch
The second, alternative way to get the cloud-init script into your VM is to pass it as user data at launch time. The Horizon instance-launch panel has a tab with a text field for pasting a cloud-init user data script. Users will have to paste Tatu's user-cloud-config script at every launch. Obviously, this isn't as good a user experience.
Configure dynamic vendor data
In order to configure SSH, Tatu's cloud-init script needs some data unique to each VM:
* a one-time token generated by Tatu for the specific VM;
* the list of user accounts to configure (based on Keystone roles in the VM's project);
* the list of user accounts that need sudo access.
It also needs some data that's common to all VMs in the project:
* the project's public key for validating user SSH certificates;
* a non-standard SSH port.
All this information is passed to the VM as follows:
* at launch time, Nova Compute securely calls Tatu's dynamic vendordata API;
* Nova writes the vendordata to ConfigDrive.
Note: to protect the one-time token and the user account names, it's best not to expose this information via the metadata API.
To enable ConfigDrive, add this to /etc/nova/nova-cpu.conf:
[DEFAULT]
force_config_drive=True
TODO: disable Tatu vendor data availability via MetaData API. May require Nova changes.
To get Nova Compute talking to Tatu, add this to /etc/nova/nova-cpu.conf:
[api]
vendordata_providers = StaticJSON, DynamicJSON
vendordata_dynamic_targets = 'tatu@http://127.0.0.1:18322/novavendordata'
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 30
[vendordata_dynamic_auth]
auth_url = http://127.0.0.1/identity
auth_type = password
username = admin
password = pinot
project_id = 2e6c998ad16f4045821304470a57d160
user_domain_name = default
Of course, modify the IP addresses, project ID, username and password as appropriate.
Prepare /etc/tatu/tatu.conf
# cd tatu
# mkdir /etc/tatu
# cp files/tatu.conf /etc/tatu/
Editing /etc/tatu/tatu.conf:
use_pat_bastions = False
sqlalchemy_engine = <URI for your database, e.g. mysql+pymysql://root:pinot@127.0.0.1/tatu>
auth_url = <location of identity API>
user_id = <ID of the Admin user>
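For reference, after editing, the [tatu] section might look like the following sketch (all values are examples; the devstack plugin sets several more options, such as password, project_id and ssh_port):
[tatu]
# example values only - substitute your own database URI, Keystone URL and admin user ID
use_pat_bastions = False
sqlalchemy_engine = mysql+pymysql://root:pinot@127.0.0.1/tatu
auth_url = http://127.0.0.1/identity
user_id = <ID of the admin user>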
Launch Tatu's notification daemon
Tatu's notification daemon only needs tatu.conf, so we can launch it now.
Tatu listens on the topic "tatu_notifications" for:
* project creation and deletion events from Keystone, in order to create new CA key pairs or clean up unused ones;
* role assignment deletion events from Keystone, in order to revoke user SSH certificates that are too permissive;
* VM deletion events from Nova, in order to clean up per-VM bastion and DNS state.
Edit both /etc/keystone/keystone.conf and /etc/nova/nova.conf as follows:
[oslo_messaging_notifications]
topics = notifications,tatu_notifications
Now launch Tatu's notification listener daemon:
# python tatu/notifications.py
At first launch you should see debug messages indicating that CA key pairs are being created for all existing projects.
Prepare /etc/tatu/paste.ini
# cd tatu
# mkdir /etc/tatu
# cp files/paste.ini /etc/tatu/
paste.ini should only need these modifications:
* host: the address the daemon will listen on;
* port: the port the daemon will listen on;
* admin_token: run "openstack token issue" and put the resulting token ID here, e.g. TOKEN=$(openstack token issue -f yaml -c id | awk '{print $2}')
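A minimal sketch of scripting that last step, assuming the admin_token line in paste.ini uses the key = value form:
# TOKEN=$(openstack token issue -f yaml -c id | awk '{print $2}')
# sed -i "s|^admin_token.*|admin_token = $TOKEN|" /etc/tatu/paste.ini  # assumes an 'admin_token = ...' style line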
Launch Tatu's API daemon
Tatu's API daemon needs both tatu.conf and paste.ini. We can launch it now.
I have done all my testing with Pylons (no good reason, I'm new to WSGI frameworks):
# pip install pylons
# pserve files/paste.ini
Note that the API serves /noauth/hostcerts and /noauth/revokeduserkeys without authorization (so that newly bootstrapped servers can get their certificates and the list of revoked keys).
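As a quick smoke test, assuming the daemon listens on the default port 18322, the unauthenticated endpoint can be probed with curl:
# curl -s http://127.0.0.1:18322/noauth/revokeduserkeys/<project-id>  # <project-id> is a placeholder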
Register Tatu API in Keystone
# openstack service create --name tatu --description "OpenStack SSH Management" ssh
# openstack endpoint create --region RegionOne ssh public http://147.75.72.229:18322/
Thanks to this registration, neither the dashboard nor the CLI needs extra configuration to find Tatu.
Installing tatu-dashboard
(Wherever horizon is installed)
git clone https://github.com/openstack/tatu-dashboard
python setup.py develop
Copy (or soft link) files from tatu-dashboard/tatudashboard/enabled to horizon/openstack_dashboard/local/enabled/
From the horizon directory, run python manage.py compress
service apache2 restart
Installing python-tatuclient
(On any host where you want to run "openstack ssh" commands)
git clone https://github.com/pinodeca/python-tatuclient
python setup.py develop
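Once installed, the new commands can be explored through their help output (these are the subcommands referenced in the README; treat them as a sketch):
# the subcommands below are the ones referenced in the README
# openstack ssh ca --help
# openstack ssh usercert --help
# openstack ssh hostcert --help
# openstack ssh host --help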


@ -2,115 +2,238 @@
Tatu - OpenStack's SSH-as-a-Service
===================================
Named in honor of Tatu Ylönen, the inventor of SSH, Tatu is an OpenStack
service that manages user and host SSH certificates. Tatu can also start and
manage bastion servers so that you don't have to (and so you don't have to give
every SSH server a public IP address).
Tatu uses Barbican to store two private keys per OpenStack project:
* A User CA (Certificate Authority) key is used to sign a user public key, thus
creating an SSH user certificate.
* A Host CA key is used to sign the SSH server's public key, thus creating an
SSH host certificate.
Tatu provides APIs that allow:
* OpenStack users to obtain SSH certificates (project-scoped) for public keys
of their choosing, with permissions corresponding to their roles in the
project. The SSH certificate is usually placed in ~/.ssh/id_rsa-cert.pub.
* OpenStack users to obtain the public key of the CA that signs host
certificates. This is placed in the user's known_hosts file.
* OpenStack VM (or bare metal) instances to obtain a host SSH certificate for
their public key, and to learn the public key of the CA for users.
During VM provisioning:
* Tatu's cloud-init script is passed to the VM via Nova static vendor data.
* VM-specific configuration is placed in the VM's ConfigDrive thanks to Nova's
**dynamic** vendor data call to Tatu API.
* The cloud-init script consumes the dynamic vendor data (a condensed sketch
follows this list):
  * A one-time token is used to authenticate the VM's request to the Tatu API
    to sign the VM's public key (and return an SSH host certificate).
  * A list of the VM's project's Keystone roles is used to create user accounts
    on the VM.
  * A list of sudoers is used to decide which users get password-less sudo
    privileges. The current policy is that any Keystone role containing "admin"
    should correspond to a user account with sudo privileges.
  * The public key of the CA for user SSH certificates is retrieved and, along
    with the requested SSH host certificate, is used to (re)configure SSH.
* A cron job is configured for the VM to periodically poll Tatu for the
revoked-keys list.
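A condensed sketch of how the bundled cloud-init script (tatu/files/user-cloud-config) reads that data from ConfigDrive; paths and field names follow that script, and the grep patterns assume Nova's usual JSON formatting:
# read Nova's dynamic vendor data from ConfigDrive (assumes it is mounted at /mnt/config)
vendordata=$(cat /mnt/config/openstack/latest/vendor_data2.json)
api=$(echo $vendordata | grep -Po '"api_endpoint": "\K[^"]*')
users=$(echo $vendordata | grep -Po '"users": "\K[^"]*')
sudoers=$(echo $vendordata | grep -Po '"sudoers": "\K[^"]*')
# the one-time token and ssh_port are extracted the same way, then used to call
# the Tatu API and to (re)configure sshd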
During negotiation of the SSH connection:
#. The server presents its SSH host certificate.
#. The client checks the validity of the host certificate, by checking its
signature with the Host CA public key stored in the known_hosts file
(in a config line that starts with @cert-authority <domain>).
#. The client presents its SSH client certificate.
#. The server checks the validity of the client certificate, by checking its
signature with the User CA public key stored in the file configured in
sshd_config's TrustedUserCAKeys.
#. The server also checks that the certificate has not been revoked, for
example that its serial number isn't in the file configured in sshd_config's
RevokedKeys setting.
#. The client certificate also contains a list of principals that, in Tatu's
case, correspond to the user's role assignments in the project and give
access to user accounts with the same name. Example client and server
settings are sketched below.
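Concretely, the pieces involved look roughly like this (file paths are illustrative; Tatu's cloud-init script picks the actual locations):
# client side: trust the project's Host CA for any host name
echo "@cert-authority * $(cat host_ca.pub)" >> ~/.ssh/known_hosts
# server side: the relevant sshd_config settings written by Tatu's cloud-init script
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
#   RevokedKeys /etc/ssh/revoked-keys
#   AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u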
Use of host certificates prevents MITM (man in the middle) attacks. Without
host certificates, users of SSH client software are presented with a message
like this one when they first connect to an SSH server:
| The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
| ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
| Are you sure you want to continue connecting (yes/no)?
There's no way to verify the fingerprint unless there's some other way of
logging into the VM (e.g. novnc with a password, which is not recommended).
Note that with certificates, SSH servers only need to store the user CA
public key (and a digest of revoked client certificates), not every client
certificate. This is simpler, more secure and more manageable than today's
common practice: putting each user's public key in the SSH server's
authorized_keys file.
Installation
------------
Please see the INSTALLATION document in this repository.
APIs, Horizon Panels, and OpenStack CLIs
----------------------------------------
Tatu provides REST APIs, Horizon Panels and OpenStack CLIs to:
* Retrieve the public keys of the user and host CAs for each OpenStack project.
See ssh ca --help
* Create (and revoke) SSH user certificates with principals corresponding to
the OpenStack user's role assignments. See ssh usercert --help
* Create and view SSH host certificates. See ssh hostcert --help
* Get the bastion addresses for each SSH server and their DNS records. See
ssh host --help
VM access to Tatu's API
-----------------------
Tatu does not currently generate SSH keys for VMs (although we may consider
this feature later since Barbican may be able to generate better quality
keys).
On first boot, the VM calls Tatu's */hostcerts* API to request a
host certificate. It passes as parameters the SSH public key (currently the RSA
key) and a one-time-token. The one-time token was previously generated by Tatu
on a request by Nova for dynamic vendor data, and then passed to the VM via
ConfigDrive.
The VM also periodically (every 60 seconds) calls Tatu's */revokeduserkeys* API
to refresh its local revoked-keys file (configured via RevokedKeys in
sshd_config).
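For illustration, the two calls made by the bundled cloud-init script look roughly like this (field names follow tatu/files/user-cloud-config; the values and the API address are placeholders):
# request a host certificate using the one-time token delivered via vendor data
curl -X POST -d '{"token_id": "<one-time-token>", "host_id": "<host-id>", "pub_key": "<host public key>"}' \
     http://<tatu-api>/noauth/hostcerts
# refresh the local revoked-keys file (run periodically from cron)
curl http://<tatu-api>/noauth/revokeduserkeys/<project-id>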
The VM's access to the Tatu API must currently go over http (not https) and
cannot be authenticated via Keystone. We aim to improve this in the future. We
therefore expose the /hostcerts and /revokeduserkeys APIs without
authentication (with a /noauth path prefix). The one-time-token prevents
malicious users from generating host certificates. The /hosttokens API to
generate one-time-tokens is only accessible with Keystone authentication, can
be secured with TLS, and is only meant to be called by Nova's dynamic vendor
data mechanism.
In order to further secure Tatu's /noauth path, we intend to have VMs access
Tatu's API via the Metadata Proxy. We have an experimental implementation with
the Dragonflow Neutron plugin. In this case the VMs access the API at
169.254.169.254:80 and the Metadata Proxy distinguishes Tatu calls from Nova
metadata calls and proxies them to Tatu instead of Nova. In support of this
feature, Tatu's configuration has an api_endpoint_for_vms parameter. The VM
learns what IP address to use via Tatu's dynamic vendor data.
Scope of user and host SSH certificates
---------------------------------------
User certificates are generated with a per-project User CA. Host certificates
are generated with a per-project Host CA.
An OpenStack user wishing to ssh into VMs belonging to different projects will
require one certificate per project.
In the future we will consider using per-domain User and Host CAs.
Principals and Linux accounts
-----------------------------
When a user SSH certificate is created for a given project, the list of
principals is equal to the user's role assignments in Keystone. If any of the
user's role assignments are deleted, Tatu automatically revokes any of the
user's certificates whose principal lists contain that role name.
When a Linux VM is launched, Tatu sets up a user account for each of the roles
in the project at that time. As of March 2018, there is no support for syncing
the Linux user accounts in the VM with the project's roles if they change after
VM launch.
Tatu leaves root and non-root default users (e.g. the fedora user on Fedora
VMs) intact, including any authorized_keys files. As a result, OpenStack
KeyPairs continue to work as designed, which is useful for debugging Tatu or
having a fallback method to access the VMs.
Tatu's policy is that any role containing the word "admin" results in a user
account with passwordless sudo privileges. Thanks to the uber/pam-ussh
integration (not yet merged as of March 9, 2018) sudo privilege is revoked as
soon as the VM learns that the user's certificate has been revoked. However,
uber/pam-ussh requires the client to run ssh-agent and ssh-add their
certificate.
Note that because of this policy, an OpenStack user may not have sudo
privileges on VMs she herself launched.
Bastion Management
------------------
Tatu aims to manage SSH bastions for OpenStack environments. This feature
would provide the following benefits:
* reduce operational burden for users that already manage bastions themselves.
* avoid assigning Floating IP addresses to VMs for the sole purpose of SSH access.
* provide a single point of security policy enforcement, and especially one
that is harder to tamper with. A user with access to an account with sudo
privileges on a VM may be able to tamper with the VM's security but not with
the bastion's. This can significantly increase security if all SSH access
is required to go through bastions.
As of March 2018, Tatu **does not** yet support general bastion management.
However, Tatu has an experimental feature (off by default) to provide SSH
access to VMs via PAT (port address translation). PAT provides only some of the
previously mentioned benefits of bastions: it avoids assigning a FloatingIP
per VM, but it does not provide a single point of policy enforcement, because
PAT always translates and forwards without checking certificates as a full SSH
proxy would. **PAT bastions are only supported by an experimental version
of the Dragonflow Neutron plugin.** It works as follows:
* At setup time, Tatu reserves a configurable number of ports in the Public
network. Their IP addresses are used for PAT. Dragonflow randomly assigns
each PAT address to a different compute node. That compute node then acts
as a "pat-bastion".
* Tatu also sets up DNS A records for each pat-bastion in OpenStack Designate.
For example, if the bastion's address is 172.24.4.9, then the A record's name
will be "bastion-172-24-4-9.<configurable-domain>."
* When a VM is launched, Tatu reserves a unique port on each of a configurable
number of pat-bastions and sets up Dragonflow PAT entries so that each
translates to the VM's private address and port 22 (or a configurable port).
* The user can learn what pat-bastion:port pairs have been assigned to a VM by
using Tatu's *ssh host* CLI or "Compute->SSH->Hosts" panel in Horizon. At
this point the user can already SSH to the pat-bastion's IP using ssh's -p
option to pass the unique port. Dragonflow will take care of receiving the
traffic at the compute node that owns that PAT address, and translating
and forwarding the packets to the VM's private IP. If the compute node fails,
Tatu will eventually re-assign the PAT address to a different compute node. In the
meantime, if we configured num_pat_bastions_per_server > 1, then the user
can ssh to the same VM via an alternative pat-bastion:port pair.
* At VM launch time, Tatu also sets up a DNS SRV record for each
pat-bastion:port pair assigned to the VM. For example, if the VM has been
assigned 172.24.4.9:1000, then the SRV record's name will be
"_ssh._tcp.<hostname>.<project_name>.<configurable-domain>." and will point
to port 1000 on the A record with name
"bastion-172-24-4-9.<configurable-domain>." These SRV records provide an
alternative way for the user to discover the pat-bastion:port pairs assigned
to the VM. Tatu also provides an ssh wrapper script (under
tatu/scripts/srvssh) that does an SRV lookup in DNS and then calls ssh
with the -p option; a minimal sketch follows this list.
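A minimal sketch of what the srvssh wrapper does, using the example values above (the zone, hostname and project name are illustrative):
# look up the pat-bastion:port pair assigned to VM "berry" in project "demo"
dig +short SRV _ssh._tcp.berry.demo.<configurable-domain>
# example answer: 0 0 1000 bastion-172-24-4-9.<configurable-domain>.
# then ssh through that pair
ssh -p 1000 <account-name>@bastion-172-24-4-9.<configurable-domain>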
Future Work
-----------
* The option to delegate certificate generation to a 3rd party, so that Tatu
does not need access to your project's CA private keys.
* Support OCSP (Online Certificate Status Protocol) as an alternative to using
Certificate Revocation Lists.
* Automate periodic User and Host CA key rotation.
* APIs to control the mapping of Keystone roles to Linux accounts (including
ones configured via cloud-init).
* APIs to control finer-grained SSH access per project.
* Full bastion support (as opposed to PAT bastions).
* Per-domain User and Host CAs (e.g. shared across projects in a domain).
Automated user key rotation is not required because the API already allows
generating new user certificates on demand.
Is automated server key rotation useful? Would yearly Host CA key rotation
make server key rotation redundant?


@ -18,3 +18,8 @@ repository. See contrib/vagrant to create a vagrant VM.
enable_plugin tatu https://github.com/openstack/tatu
3. run ``stack.sh``
Note that Tatu requires Barbican (and optionally Designate and Dragonflow
if you're using the PAT bastions experimental feature).
See the local.conf and local-df.conf examples in this directory.


@ -44,14 +44,15 @@ function configure_tatu {
iniset $TATU_CONF tatu user_id $admin_user
iniset $TATU_CONF tatu password $ADMIN_PASSWORD
iniset $TATU_CONF tatu project_id $admin_project
iniset $TATU_CONF tatu use_barbican_key_manager True
iniset $TATU_CONF tatu use_pat_bastions False
iniset $TATU_CONF tatu use_barbican $TATU_USE_BARBICAN
iniset $TATU_CONF tatu use_pat_bastions $TATU_USE_PAT_BASTIONS
iniset $TATU_CONF tatu ssh_port 2222
iniset $TATU_CONF tatu num_total_pats 1
iniset $TATU_CONF tatu num_pat_bastions_per_server 1
iniset $TATU_CONF tatu pat_dns_zone_name tatuDemo.com.
iniset $TATU_CONF tatu pat_dns_zone_email my@tatu.devstack
iniset $TATU_CONF tatu num_total_pats $TATU_NUM_TOTAL_PATS
iniset $TATU_CONF tatu num_pat_bastions_per_server $TATU_PAT_BASTIONS_PER_SERVER
iniset $TATU_CONF tatu pat_dns_zone_name $TATU_DNS_ZONE_NAME
iniset $TATU_CONF tatu pat_dns_zone_email $TATU_DNS_ZONE_EMAIL
iniset $TATU_CONF tatu sqlalchemy_engine `database_connection_url tatu`
iniset $TATU_CONF tatu api_endpoint_for_vms $TATU_API_FOR_VMS
# Need Keystone and Nova notifications
iniset $KEYSTONE_CONF oslo_messaging_notifications topics notifications,tatu_notifications


@ -4,11 +4,11 @@ plugin_requires tatu barbican
# Default options
TATU_USE_BARBICAN=${TATU_USE_BARBICAN:-"True"}
TATU_USE_PAT_BASTIONS=${TATU_USE_PAT_BASTIONS:-"True"}
TATU_TOTAL_PAT_BASTIONS=${TATU_TOTAL_PAT_BASTIONS:-2}
TATU_PAT_BASTIONS_PER_INSTANCE=${TATU_PAT_BASTIONS_PER_INSTANCE:-2}
TATU_DNS_ZONE_NAME=${TATU_DNS_ZONE_NAME:-example.com.}
TATU_DNS_ZONE_EMAIL=${TATU_DNS_ZONE_EMAIL:-"admin@admin"}
TATU_USE_PAT_BASTIONS=${TATU_USE_PAT_BASTIONS:-"False"}
TATU_NUM_TOTAL_PATS=${TATU_NUM_TOTAL_PATS:-1}
TATU_PAT_BASTIONS_PER_SERVER=${TATU_PAT_BASTIONS_PER_SERVER:-1}
TATU_DNS_ZONE_NAME=${TATU_DNS_ZONE_NAME:-tatupat.com.}
TATU_DNS_ZONE_EMAIL=${TATU_DNS_ZONE_EMAIL:-"nono@tatupat"}
# Public facing bits
if is_service_enabled tls-proxy; then
@ -21,6 +21,15 @@ TATU_SERVICE_HOST=${TATU_SERVICE_HOST:-$SERVICE_HOST}
TATU_SERVICE_PORT=${TATU_SERVICE_PORT:-18322}
TATU_SERVICE_PORT_INT=${TATU_SERVICE_PORT_INT:-28322}
# VMs in Devstack can communicate with Tatu via the public network gateway and
# never over the TLS proxy. If the setup has a modified version of Neutron's
# MetadataProxy, override TATU_API_FOR_VMS with url containing 169.254.169.254.
if is_service_enabled tls-proxy; then
TATU_API_FOR_VMS=${TATU_API_FOR_VMS:-$SERVICE_PROTOCOL://$PUBLIC_NETWORK_GATEWAY:$TATU_SERVICE_PORT_INT}
else
TATU_API_FOR_VMS=${TATU_API_FOR_VMS:-$SERVICE_PROTOCOL://$PUBLIC_NETWORK_GATEWAY:$TATU_SERVICE_PORT}
fi
# Default directories
TATU_BIN_DIR=$(get_python_exec_prefix)
TATU_DIR=$DEST/tatu

doc/source/ca_domain.rst Normal file

@ -0,0 +1,39 @@
===================================================
Note on configuring CA domains in known_hosts file.
===================================================
As of March 2018, Tatu requires writing the Host CA public key (of each project
whose VMs' host certificates you want to trust) in the known_hosts file as:
| @cert-authority * <ca-public-key>
The '*' is the hostname pattern (here, any host) for which the client is
willing to trust the CA.
Note also that Tatu currently generates host certificates with the Key ID set
to the host's short name, e.g. "berry" (not the FQDN, like
"berry.<project>.<domain>"). The hostname is passed via the -I option to the
ssh-keygen -h -s... call that generates the host certificate.
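For reference, the underlying signing call looks roughly like this (the CA key path and host key path are illustrative):
# sign the VM's host key with the project's Host CA, using the short hostname as the Key ID
ssh-keygen -s host_ca_key -I berry -h /etc/ssh/ssh_host_rsa_key.pub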
We could tighten up the @cert-authority line like this:
| @cert-authority *.demo.ssh.pino.com <ca-public-key>
by passing the host's fully qualified name to ssh-keygen. However, the ssh
client would only accept the host certificate if the ssh command was launched
with the SSH server's fully qualified name (as opposed to IP address). In other
words, this would work:
| ssh <account-name>@berry.demo.ssh.pino.com
while this would not (the client would reject the certificate):
| ssh <account-name>@<ip address>
...unless a reverse DNS lookup (PTR record lookup) for that IP address returns
the host's fully qualified name in demo.ssh.pino.com domain. Tatu does not
currently set up DNS PTR records, but this should be possible via Designate.
But keep in mind that the IP addresses might be those of bastions rather than
VMs.
TODO: validate these ideas.

doc/source/jump_proxy.rst Normal file

@ -0,0 +1,29 @@
================
Jump Proxy Notes
================
**NOTE: This feature is NOT YET IMPLEMENTED.**
Assuming the SSH client's known_hosts file has been configured with two
@cert-authority lines (one containing the Bastion CA public key, the other
containing the Project Host CA public key), a user can SSH to her instance as
follows:
| ssh -o ProxyCommand="ssh -W %h:%p <bastion IP or DNS name>" <account-name>@<instance IP or hostname>
For example:
| ssh -o ProxyCommand="ssh -W %h:%p 10.99.157.129" ubuntu@10.0.0.13
Or (for OpenSSH 7.3 and later):
| ssh -o ProxyJump="10.99.157.129" ubuntu@10.0.0.13
Note that one of the user SSH certificate's principals must be mapped to an
account on the bastion (or the bastion will reject the SSH connection). Tatu
should configure the bastion's AuthorizedPrincipalsFile (e.g. on Ubuntu 16.04)
with a single file named 'nobody' which contains the names of all principals.
This allows the SSH client to use the bastion as a jump host but not to log in
there; this secures the bastion itself. The ssh command is therefore:
| ssh -o ProxyJump="nobody@10.99.157.129" ubuntu@10.0.0.13


@ -1,7 +1,7 @@
[DEFAULT]
[tatu]
use_barbican_key_manager = True
use_barbican = True
#use_pat_bastions = True
ssh_port = 1222
num_total_pats = 1


@ -41,9 +41,9 @@ write_files:
echo host public key is $host_pub_key
data=$(echo {\"token_id\": \"$token\", \"host_id\": \"$host_id\", \"pub_key\": \"$host_pub_key\"})
echo $data > /tmp/tatu_cert_request.json
url=http://169.254.169.254/noauth/hostcerts
echo url=$url
echo Posting Host Certificate request to Tatu API
api=$(echo $vendordata | grep -Po '"api_endpoint": "\K[^"]*')
url=$api/noauth/hostcerts
echo Posting Host Certificate request to Tatu API at $url
response=$(curl -s -w "%{http_code}" -d "@/tmp/tatu_cert_request.json" -X POST $url)
code=${response##*\}}
if [ "$code" != "200" ]; then
@ -101,8 +101,10 @@ write_files:
metadata=$(cat /mnt/config/openstack/latest/meta_data.json)
auth_id=$(echo $metadata | grep -Po 'project_id": "\K[^"]*')
echo auth_id=$auth_id
url=http://169.254.169.254/noauth/revokeduserkeys/$auth_id
echo url=$url
vendordata=$(cat /mnt/config/openstack/latest/vendor_data2.json)
api=$(echo $vendordata | grep -Po '"api_endpoint": "\K[^"]*')
url=$api/noauth/revokeduserkeys/$auth_id
echo Fetching revoked user keys from Tatu API at $url
response=$(curl -s -w "%{http_code}" $url)
code=${response##*\}}
if [ "$code" != "200" ]; then


@ -300,6 +300,7 @@ class NovaVendorData(object):
'users': ','.join(roles),
'sudoers': ','.join([r for r in roles if "admin" in r]),
'ssh_port': CONF.tatu.ssh_port,
'api_endpoint': CONF.tatu.api_endpoint_for_vms,
}
resp.body = json.dumps(vendordata)
resp.location = '/hosttokens/' + token.token_id


@ -25,7 +25,7 @@ LOG = logging.getLogger(__name__)
# 1) register options; 2) read the config file; 3) use the options
opts = [
cfg.BoolOpt('use_barbican_key_manager', default=False,
cfg.BoolOpt('use_barbican', default=False,
help='Use OpenStack Barbican to store sensitive data'),
cfg.BoolOpt('use_pat_bastions', default=True,
help='Use PAT as a "poor man\'s" approach to bastions'),
@ -56,6 +56,9 @@ opts = [
cfg.StrOpt('project_id',
default='2e6c998ad16f4045821304470a57d160',
help='OpenStack Keystone admin project UUID'),
cfg.StrOpt('api_endpoint_for_vms',
default='http://169.254.169.254',
help='Where a VM accesses the API for SSH certs and revoked keys'),
]
CONF = cfg.ConfigOpts()
@ -71,7 +74,7 @@ logging.setup(CONF, "tatu")
GCONF = cfg.CONF
if CONF.tatu.use_barbican_key_manager:
if CONF.tatu.use_barbican:
LOG.debug("Using Barbican as key manager.")
set_castellan_defaults(GCONF)
else: