Import the OSSG secure development guidance

This introduces a number of secure development practices that were
written by the OSSG during the mid-cycle meetup of 2015.

Credit goes to the original authors and editors of this content:

    Dave Belcher (HP)
    Eric Brown (VMWare)
    Doug Chivers (HP)
    Rob Clarke (HP)
    Nathaniel Dillon (HP)
    Lucas Fisher (Was Nebula)
    Jamie Finnigan (HP)
    Rob Fletcher (Uber)
    Tim Kelsey (HP)
    Brant Knudson (IBM)
    Paul McMillan (Was Nebula)
    Travis McPeak (HP)
    Grant Murphy (HP)

The following is an attempt to track the origins of this content as it
was developed. It is not a complete version history, but it indicates the
initial import of the content into the respective repositories before it
ended up here. Hopefully this will be enough to track the changes and
contributions made by members of the OpenStack community.

{
    "origins": [
        {
            "author": "Travis McPeak (HP)",
            "file": "doc/source/guidelines/dg_apply-restrictive-file-permissions.rst",
            "history": [
                {
                    "commit": "d5f6ad970ec24564cfaffc13dba26b0006841d5b",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "c7d0e14e9a71fa9bbe301aa76a8adcc55ade52de",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Lucas Fisher (Nebula)",
            "file": "doc/source/guidelines/dg_avoid-dangerous-input-parsing-libraries.rst",
            "history": [
                {
                    "commit": "9f12b0bacc7bf207df5615dce39940aaede66e17",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "9b4181fe3c2e5451ebf65b99b05fe169b6eda63c",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Paul McMillan (Nebula)",
            "comment": "The file was not renamed correctly!",
            "file": "doc/source/guidelines/dg_avoid-shell-true.rst",
            "history": [
                {
                    "commit": "ef6338caa2a6f5cfb67dd2d77a38574d45ae5d2c",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "c7ade476015c1b0fbb1b100b990a10e8373027c2",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Grant Murphy (HP)",
            "file": "doc/source/guidelines/dg_avoid-unvalidated-redirects.rst",
            "history": [
                {
                    "commit": "2f0c954bd4f25a06b8a1439739d494c3a9d0c2de",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "8fb7804da189c95e9bcd2b19b516077b8fb6a44a",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Dave Belcher (HP)",
            "file": "doc/source/guidelines/dg_cross-site-request-forgery-csrf.rst",
            "history": [
                {
                    "commit": "34146ded7f5c8ca3b0be29c17b30fb887a47b955",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "688a73070e52a693f1fc7b827cfbfe1d03bd7f9c",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Dave Belcher (HP)",
            "file": "doc/source/guidelines/dg_cross-site-scripting-xss.rst",
            "history": [
                {
                    "commit": "f4261162a2b07df282fbfab68245b23c39f46fb9",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "e8a3cf91778081e0c822277951c0f5cbc6293c21",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Rob Clarke (HP)",
            "file": "doc/source/guidelines/dg_move-data-securely.rst",
            "history": [
                {
                    "commit": "5d57e38c71ec7fd9113bb5574c143b4b146b95d5",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "6d266e56a1b4b746deab7b26d6b8bf96774dca72",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Tim Kelsey",
            "file": "doc/source/guidelines/dg_parameterize-database-queries.rst",
            "history": [
                {
                    "commit": "70969ec41f9ce5de47090ac7a5cb4e303671b756",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Eric Brown (VMware)",
            "file": "doc/source/guidelines/dg_protect-sensitive-data-in-files.rst",
            "history": [
                {
                    "commit": "038d65e6fd8c17dffa28f55c99e218d09802f5ec",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "8094050151347ac83b6a05c5895dc9f670b70c29",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Eric Brown (VMware)",
            "file": "doc/source/guidelines/dg_validate-certificates.rst",
            "history": [
                {
                    "commit": "f74fecc0360a9f9c6984d821b128c4830bd71b0f",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "2ffd29660d9f48079ab1c9a0de170eb660ba22fe",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Lucas Fisher (Nebula)",
            "file": "doc/source/guidelines/dg_rootwrap-recommendations-and-plans.rst",
            "history": [
                {
                    "commit": "bd5a51411420fd70ee410ff72cb396287ce0c60e",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "6a13d1ecd8ec73f7038c8e3238e24ac32ac60e58",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Jamie Finnigan (HP)",
            "file": "doc/source/guidelines/dg_strong-crypto.rst",
            "history": [
                {
                    "commit": "cefac3ea2ab56e4e7c6238f69f63b7f88d1b58ec",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "4d430b4cf18687b322c931f1bddf8147da853bb5",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Eric Brown (VMware)",
            "file": "doc/source/guidelines/dg_use-oslo-rootwrap-securely.rst",
            "history": [
                {
                    "commit": "921f9fc2d949a6a48ba8c936a81b7f15024e50f8",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "3fc0ca5df47cd118d111e6020e1f1101b3c4fbc3",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Paul McMillan (Nebula)",
            "file": "doc/source/guidelines/dg_use-subprocess-securely.rst",
            "history": [
                {
                    "commit": "7a305304d600b1e0d868e1fa174d31bdb460c041",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "f4ccd51a10ca0380737d6be8bc5ae6c677a60b39",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "Grant Murphy (HP)",
            "file": "doc/source/guidelines/dg_using-file-paths.rst",
            "history": [
                {
                    "commit": "f2d93147e7b8e30750b2f6ad548a1478f74e4489",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "d8593eaea31e9d4756005555dfb85f41013ee6d4",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        },
        {
            "author": "b005cc2968ecbcdbf7cc455ecac1e8ede2b2fa21",
            "file": "doc/source/guidelines/dg_using-temporary-files-securely.rst",
            "history": [
                {
                    "commit": "cf1981af0bba72558b8cfa349d841ceee4722b7f",
                    "id": "1",
                    "repo": "git://github.com/hyakuhei/OSSG-Security-Practices"
                },
                {
                    "commit": "",
                    "id": "2",
                    "repo": "github.com/openstack-security/Developer-Guidance"
                }
            ]
        }
    ]
}

Change-Id: I8ae30f526523860c0fc069cf1e683f1b8a69eb4d
Committed by Grant Murphy on 2015-05-07 12:16:44 -07:00
parent fea9838fca
commit 84b5e7ec48
17 changed files with 1805 additions and 0 deletions

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Apply Restrictive File Permissions
==================================
Files should be created with restrictive file permissions to prevent
vulnerabilities such as information disclosure and code execution. In
particular, any files which may contain confidential information
should be set to only permit access by the owning user/service and group
(i.e. no world/other access).
Discretion should be used when granting write access to files such as
configuration files to prevent vulnerabilities including denial of
service and remote code execution.
Incorrect
~~~~~~~~~
Consider the configuration file for a service "secureserv" which stores
configuration including passwords in "secureserv.conf".
.. code:: sh
ls -l secureserv.conf
-rw-rw-rw- 1 secureserv secureserv 6710 Feb 17 22:00 secureserv.conf
Here the file permissions are set to 666 (read and write access for
owner, group, and others). This will allow all users on the system to
have access to the sensitive information contained within the
configuration file.
When writing to a file on a \*NIX operating system using Python, the file
output will be written using the umask that is set for the application.
Depending on your operating system configuration this could be too
permissive when writing sensitive data to file.
Given this program:
.. code:: python

    with open('testfile.txt', 'w') as fout:
        fout.write("secrets!")
The file permissions will default to the environment settings. Here
the umask allows file content to be read by other users on the system.
::
ls -l testfile.txt
-rw-r--r-- 1 user staff 4 Feb 19 10:59 testfile.txt
Correct
~~~~~~~
It is preferable to set restrictive permissions for files containing
sensitive information. To fix the example above you would need to set
permissions to make the file only readable and writeable by the user
that created it.
.. code:: sh

    chmod 0600 secureserv.conf
    ls -l secureserv.conf
    -rw------- 1 secureserv secureserv 6710 Feb 17 22:00 secureserv.conf
Below is an example of how you could securely create a file in Python
which is only readable and writable by the owner of that file.
.. code:: python

    import os

    flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL
    with os.fdopen(os.open('testfile.txt', flags, 0600), 'w') as fout:
        fout.write("secrets!")
Note that it is also important to verify the owner and group of the
file. It is particularly important to note which other users are part of a
group that you grant access to. The best practice is that if group access is
not needed, do not grant it. This is the principle of least privilege.
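As an added illustration (not part of the original guidance), the owner, group
and mode of an existing file can be checked from Python before trusting it; the
file name and the expected account names here are hypothetical:

.. code:: python

    import grp
    import os
    import pwd
    import stat

    st = os.stat('secureserv.conf')
    owner = pwd.getpwuid(st.st_uid).pw_name
    group = grp.getgrgid(st.st_gid).gr_name

    # Fail closed if the file is owned by the wrong accounts or is
    # group/world accessible.
    if owner != 'secureserv' or group != 'secureserv':
        raise RuntimeError('unexpected owner/group for secureserv.conf')
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise RuntimeError('secureserv.conf must not be group or world accessible')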
Consequences
------------
- Reading passwords from the config file - A malicious user can read
sensitive information (such as passwords) from the file.
- Setting a new password - A malicious user can write a new password
into the file, potentially granting access.
- Code execution - If the config file stores commands or parameters, a
malicious user could tamper with the config file to achieve code
execution.
- Denial of service - An attacker could delete the contents of the file
to
prevent the service from running properly.
References
----------
- File system controls should be implemented according to `least
privilege <http://en.wikipedia.org/wiki/Principle_of_least_privilege>`__.
- More information about setting securing file system permissions in
Linux can be found
`here <http://www.linuxsecurity.com/docs/SecurityAdminGuide/SecurityAdminGuide-8.html>`__.

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Avoid dangerous file parsing and object serialization libraries
===============================================================
Many common libraries that are often used for reading configuration
files and deserializing objects are very dangerous because they can
allow execution of arbitrary code. By default, libraries such as
PyYAML and pickle do not provide strong separation of data and code, and thus
allow code to be embedded inside the input.
Often the input to these libraries is untrusted or only partially
trusted. These unsafe inputs can come from configuration files or be
provided via REST APIs. For example, we often use YAML for
configuration files but YAML files can also contain embedded Python code.
This may provide an attacker with a method to execute code.
Many, but not all, of these libraries offer safe interfaces that
disable the features that enable code execution. You should always use
the safe functions to load input. Often the obvious function to use is not
the safe one, so check the documentation for any libraries not covered
here.
Python Libraries
~~~~~~~~~~~~~~~~
We often use YAML, pickle, or eval to load data into our Python
programs, but this is dangerous. PyYAML has a safe way to load data;
pickle and eval do not.
+----------+---------------------------------------------+-------------------+-----------------------------+
| Module | Problem | Use | Avoid |
+==========+=============================================+===================+=============================+
| PyYAML | Allows creating arbitrary Python objects. | yaml.safe\_load | yaml.load |
+----------+---------------------------------------------+-------------------+-----------------------------+
| pickle | Allows creating arbitrary Python objects. | Do not use | pickle.load, pickle.loads |
+----------+---------------------------------------------+-------------------+-----------------------------+
| cPickle | Allows creating arbitrary Python objects. | Do not use | cPickle.load, cPickle.loads |
+----------+---------------------------------------------+-------------------+-----------------------------+
| eval | Runs all input as Python code | Do not use | eval |
+----------+---------------------------------------------+-------------------+-----------------------------+
| exec | Runs all input as Python code (Python 3.x) | Do not use | exec |
+----------+---------------------------------------------+-------------------+-----------------------------+
Incorrect
~~~~~~~~~
yaml.load is the obvious function to use but it is dangerous:
.. code:: python
import yaml
import pickle
conf_str = '''
!!python/object:__main__.AttackerObj
key: 'value'
'''
conf = yaml.load(conf_str)
Using pickle or cPickle with untrusted input can result in arbitrary
code execution.
.. code:: python
import pickle
import cPickle
user_input = "cos\nsystem\n(S'cat /etc/passwd'\ntR.'\ntR."
cPickle.loads(user_input) # results in code execution
pickle.loads(user_input) # results in code execution
Similarly eval and exec are difficult to use safely with
input that comes from an untrusted source.
.. code:: python
user_input = "os.system('cat /etc/passwd')"
eval(user_input) # execute python expressions
user_input = "import os; os.system('cat /etc/passwd')"
exec(user_input) # execute _any_ python code
Correct
~~~~~~~
Here we use PyYAML's safe YAML loading function:
.. code:: python
import yaml
conf_str = '''
- key: 'value'
- key: 'value'
'''
conf = yaml.safe_load(conf_str)
There is no safe alternative for pickle.load. However in most cases using
pickle for serialization of data objects is something that can be avoided
altogether.
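As an added, hedged illustration (not part of the original text), plain data
structures can usually be serialized with a format such as JSON instead of
pickle; parsing JSON cannot create arbitrary Python objects or execute code:

.. code:: python

    import json

    data = {'key': 'value', 'count': 3}

    serialized = json.dumps(data)      # a string containing only data
    restored = json.loads(serialized)  # deserializing does not execute code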
Consequences
------------
- Anyone that can control the input passed to dangerous libraries can
gain arbitrary code execution on the system running the dangerous
library.
References
----------
- `PyYAML: Loading
YAML <http://pyyaml.org/wiki/PyYAMLDocumentation#LoadingYAML>`__
- `Why Python Pickle is
Insecure <http://michael-rushanan.blogspot.com/2012/10/why-python-pickle-is-insecure.html>`__
- `Exploiting misuse of Python's
"pickle" <https://blog.nelhage.com/2011/03/exploiting-pickle/>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Python Pipes to Avoid Shells
============================
You should take a look at the :doc:`shell injection document <dg_use-subprocess-securely>` before this one.
A lot of the time, our codebase uses ``shell=True`` because it's
convenient. The shell provides the ability to pipe things around
without buffering them in memory, but it also allows a malicious user to
chain additional commands after a legitimate command is run.
Incorrect
~~~~~~~~~
Here is a simple function that uses curl to grab a page from a website, and
pipe it directly to the ``wc`` (word count) program to tell us how many
lines there are in the HTML source code.
.. code:: python

    import subprocess

    def count_lines(website):
        return subprocess.check_output('curl %s | wc -l' % website, shell=True)

    # >>> count_lines('www.google.com')
    # '7\n'
(That output is correct, by the way - the google html source does have
7 lines.)
The function is insecure because it uses ``shell=True``, which allows
:doc:`shell injection <dg_use-subprocess-securely>`. A user who instructs your
code to fetch the website ``; rm -rf /`` can do terrible things to what
used to be your machine.
If we convert the function to use ``shell=False``, it doesn't work.
.. code:: python

    def count_lines(website):
        args = ['curl', website, '|', 'wc', '-l']
        return subprocess.check_output(args, shell=False)

    # >>> count_lines('www.google.com')
    # curl: (6) Could not resolve host: |
    # curl: (6) Could not resolve host: wc
    # Traceback (most recent call last):
    #   File "<stdin>", line 3, in count_lines
    #   File "/usr/lib/python2.7/subprocess.py", line 573, in check_output
    #     raise CalledProcessError(retcode, cmd, output=output)
    # subprocess.CalledProcessError: Command
    # '['curl', 'www.google.com', '|', 'wc', '-l']' returned non-zero exit status 6
The pipe doesn't mean anything special when shell=False, and so curl
tries to download the website called '\|'. This does not fix the
issue; rather, it causes it to be more broken than before.
If we can't rely on pipes when we have shell=False, how should we do this?
Correct
~~~~~~~
.. code:: python

    def count_lines(website):
        args = ['curl', website]
        args2 = ['wc', '-l']
        process_curl = subprocess.Popen(args, stdout=subprocess.PIPE,
                                        shell=False)
        process_wc = subprocess.Popen(args2, stdin=process_curl.stdout,
                                      stdout=subprocess.PIPE, shell=False)
        # Allow process_curl to receive a SIGPIPE if process_wc exits.
        process_curl.stdout.close()
        return process_wc.communicate()[0]

    # >>> count_lines('www.google.com')
    # '7\n'
Rather than calling a single shell process that runs each of our
programs, we run them separately and connect stdout from curl to stdin
for wc. We specify ``stdout=subprocess.PIPE``, which tells subprocess
to send that output to the respective file handler.
Treat pipes like file descriptors (you can actually use FDs if you want):
they may block on reading and writing if nothing is connected to the
other end. That's why we use ``communicate()``, which reads until EOF
on the output and then waits for the process to terminate. You should
generally avoid reading and writing to pipes directly unless you
really know what you're doing - it's easy to work yourself into a situation
that can deadlock.
Note that ``communicate()`` buffers the result in memory - if that's not
what you want, use a file descriptor for ``stdout`` to pipe that output
into a file.
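As an added sketch (not part of the original text; the file path is
hypothetical), the second process's output can be written straight to a file
object instead of being buffered by ``communicate()``:

.. code:: python

    import subprocess

    def count_lines_to_file(website, path):
        with open(path, 'w') as outfile:
            process_curl = subprocess.Popen(['curl', website],
                                            stdout=subprocess.PIPE, shell=False)
            # wc writes its result directly to the file; nothing is
            # buffered in Python.
            process_wc = subprocess.Popen(['wc', '-l'], stdin=process_curl.stdout,
                                          stdout=outfile, shell=False)
            process_curl.stdout.close()
            process_wc.wait()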
Consequences
------------
- Using pipes helps you avoid shell injection in more complex
situations
- It's possible to deadlock things with pipes (in Python or in shell)
References
----------
- https://docs.python.org/2/library/subprocess.html#subprocess.Popen.stdin

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Unvalidated URL redirect
========================
It is common for web forms to redirect to a different page upon
successful submission of the form data. This is often done using a
*next* or *return* parameter in the HTTP request. Any HTTP parameter
can be controlled by the user, and could be abused by attackers to
redirect a user to a malicious site.
This is commonly used in phishing attacks, for example an attacker
could redirect a user from a legitimate login form to a fake,
attacker controlled, login form. If the page looks enough like the
target site, and tricks the user into believing they mistyped their
password, the attacker can convince the user to re-enter their
credentials and send them to the attacker.
Here is an example of a malicious redirect URL:
::
https://good.com/login.php?next=http://bad.com/phonylogin.php
To counter this type of attack all URLs must be validated before being
used to redirect the user. This should ensure the redirect will take
the user to a page within your site.
Incorrect
~~~~~~~~~
This example just processes the 'next' argument with no validation:
.. code:: python

    import os
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route('/')
    def example_redirect():
        return redirect(request.args.get('next'))
Correct
~~~~~~~
The following is an example using the Flask web framework. It checks
that the URL the user is being redirected to originates from the same
host as the host serving the content.
.. code:: python

    from flask import request, g, redirect
    from urlparse import urlparse, urljoin

    def is_safe_redirect_url(target):
        host_url = urlparse(request.host_url)
        redirect_url = urlparse(urljoin(request.host_url, target))
        return redirect_url.scheme in ('http', 'https') and \
            host_url.netloc == redirect_url.netloc

    def get_safe_redirect():
        url = request.args.get('next')
        if url and is_safe_redirect_url(url):
            return url

        url = request.referrer
        if url and is_safe_redirect_url(url):
            return url

        return '/'
The Django framework contains a `django.utils.http.is\_safe\_url <https://github.com/django/django/blob/93b3ef9b2e191101c1a49b332d042864df74a658/django/utils/http.py#L268>`__
function that can be used to validate redirects without implementing a custom version.
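A minimal usage sketch (added here, assuming a Django 1.8-era project; the view
name is hypothetical) might look like:

.. code:: python

    from django.shortcuts import redirect
    from django.utils.http import is_safe_url

    def safe_redirect(request):
        url = request.GET.get('next', '')
        # Only follow the redirect if it stays on the host serving this request.
        if url and is_safe_url(url, host=request.get_host()):
            return redirect(url)
        return redirect('/')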
Consequences
~~~~~~~~~~~~
- Unvalidated redirects can make your site a target for phishing
attacks that can lead to users' credentials being stolen.
- `OSSA-2012-012 <http://security.openstack.org/ossa/OSSA-2012-012.html>`__
References
~~~~~~~~~~
- `CWE-601: URL Redirection to Untrusted
Site <http://cwe.mitre.org/data/definitions/601.html>`__
- `OWASP Top 10 2013 - Unvalidated redirects and
forwards <https://www.owasp.org/index.php/Top_10_2013-A10-Unvalidated_Redirects_and_Forwards>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Use CSRF tokens to avoid CSRF attacks
=====================================
Cross-Site Request Forgery (CSRF) a.k.a. session riding
occurs when sensitive web services have no protection to prevent
attackers arbitrarily submitting data and commands on a website a
user trusts. For example, an attacker may be able to cause an
authorized user to submit form data to a web service which performs
administrative functionality, or modifies personal settings.
Incorrect
~~~~~~~~~
In a typical attack scenario a victim with an existing session on
a legitimate site is tricked into visiting a malicious site
that leverages this session to trigger some action. The snippet below
provides an example of how an attacker might execute such an attack
on a site that doesn't have CSRF token protection in place. The code
below would be hosted on an attacker's site, but execute a transfer
on a legitimate user's site.
.. code:: html
<!-- bank.evil.com site -->
<html>
<head>
<title>Malicious website</title>
</head>
<body>
<!-- hehe i'm gonna be rich -->
<h1>Enter your name to find out your funny nickname:</h1>
<form action="http://legitimate-site.com/transferfunds" method="POST">
Your name: <input type="text"><br/>
<input type="submit">
<input type="hidden" name="amount" value="10000">
<input type="hidden" name="recipient" value="evil_hacker">
</form>
</body>
</html>
To protect against this issue, a cryptographically secure nonce or
hash must be included with each request, which must be verified prior to
performing sensitive functionality.
Correct
~~~~~~~
Most web frameworks will provide middleware that is capable of generating
secure CSRF tokens for you. Some of the major ones are summarized below.
Django
^^^^^^
1. Add 'django.middleware.csrf.CsrfViewMiddleware' to MIDDLEWARE_CLASSES in
your configuration *BEFORE* any other middleware that assumes that CSRF
attacks have been dealt with.
2. In any template where a user can submit POST data you need to add the special
   csrf_token tag.
.. code:: html
<form action="." method="POST">
{% csrf_token %}
<!-- other form fields -->
</form>
3. Ensure that the csrf context is being passed into any views
   being rendered. If you are not using a RequestContext you will
   need to do this manually (see the sketch below).
Refer to the `Django documentation <https://docs.djangoproject.com/en/1.8/ref/csrf/#using-csrf>`__ for more detailed instructions.
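As an added sketch (not part of the original text; it assumes a Django 1.8-era
project and a hypothetical 'transfer.html' template), enabling the middleware
and rendering with a request-aware context looks like this:

.. code:: python

    # settings.py (excerpt)
    MIDDLEWARE_CLASSES = (
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.middleware.common.CommonMiddleware',
        'django.middleware.csrf.CsrfViewMiddleware',
        'django.contrib.auth.middleware.AuthenticationMiddleware',
    )

    # views.py -- render() uses a RequestContext, so the csrf token is
    # available to the {% csrf_token %} tag in the template.
    from django.shortcuts import render

    def transfer_funds(request):
        return render(request, 'transfer.html')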
Flask-WTF
^^^^^^^^^
Without any configuration, a flask_wtf.Form will be a session secure form
with csrf protection.
Given the following form definition:
.. code:: python

    # form.py
    from flask_wtf import Form
    from wtforms import StringField
    from wtforms.validators import DataRequired

    class PersonForm(Form):
        name = StringField('name', validators=[DataRequired()])
You need only include the form.hidden_tag() in your template definition:
.. code:: html

    <!-- view.html -->
    <form method="POST" action="{{ url_for('submit') }}">
        {{ form.hidden_tag() }}
        {{ form.name.label }} {{ form.name(size=20) }}
        <input type="submit" value="ok">
    </form>
The CSRF token will be validated automatically in any place
you are validating the form submission.
.. code:: python

    # view.py
    from form import PersonForm

    @app.route("/submit", methods=("GET", "POST"))
    def submit():
        form = PersonForm()
        if form.validate_on_submit():
            # csrf token also validated
            return redirect("/")
        return render_template("view.html", form=form)
Consequences
------------
- Legitimate user sessions can be hijacked
- Privileged services and functionality can be accessed
- Protected data can be modified
References
----------
- `OWASP CSRF Guide <https://www.owasp.org/index.php/CSRF>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Escape user input to prevent XSS attacks
========================================
These days, almost every service we create has some form of web
interface, be it for administration, monitoring or for the core
functionality of the service. These interfaces are becoming ever
more complex and dynamic, and increasingly interactive. There is
a risk however, when increasing interactivity of these web services,
that we inadvertently allow a user to supply data which can corrupt,
or disrupt the normal running of that service.
Cross-Site Scripting (XSS) is a class of vulnerability whereby an attacker
is able to present active web content to a web service, which is
subsequently echoed back to a user and executed by the browser.
This content can be as seemingly benign as an embarrassing image or
text, or as malign as browser based exploits intended to steal and
utilize the user's web session, or even compromise the user's web browser
and take control of their client system.
There are three main classes of XSS issue: Persistent, Reflected and
DOM-Based. Persistent XSS issues are those where user input is stored
by the server, either in a database or server files, which is later
presented to any user visiting the affected web page. Reflected XSS
issues are those where user input in a request is immediately
reflected to the user without sanitization.
DOM-Based issues are less common, and are present in web applications
with rich client-side JavaScript clients which generate dynamic code or web
content using user controllable data (i.e. URL parameters).
When developing web applications, we must be extremely careful to
protect against all these classes of issue. To do so, we must never trust any
data that originates from, or can be controlled by, the client. All data
must be sanitized in a way suitable for how that data is going to be used. To
do so, many languages provide built-in functionality to make sure any
potentially dangerous control characters are encoded in a way that renders them
inactive. The following are Python examples of this.
Incorrect
~~~~~~~~~
The following is a contrived example of how a reflected XSS exploit may
occur. If an attacker were to submit a request to
'http://example.com/?name=<script>alert(1)</script>' then any user viewing that
url would have the javascript executed within the context of their browser. This can
be used for malicious purposes.
.. code:: python

    # flask example
    @app.route("/")
    def hello():
        name = request.args.get('name')
        return "Hello %s" % name
Most modern Python web frameworks will escape any input that is rendered
via templates which mitigates the majority of these types of attacks.
However there are ways that this can be disabled.
.. code:: html

    <!-- by default flask will html escape var -->
    <p>{{ var }}</p>

    <!-- in this instance it will not! -->
    <p>{{ var | safe }}</p>
Correct
~~~~~~~
The correct way to prevent XSS attacks is to validate user input and ensure
that data rendered by templates is escaped. Using templates in the way
they are intended is preferable:
.. code:: python

    # flask example
    @app.route("/")
    def hello():
        name = request.args.get('name')
        return render_template('hello.html', name=name)

    # where hello.html is:
    # <html>Hello {{ name }}</html>
Any HTML content that is generated directly within a request handler
should use the appropriate escaping function:
.. code:: python

    from flask import escape

    @app.route("/")
    def hello():
        name = request.args.get('name')
        return "Hello %s" % escape(name)
Allowing certain special characters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The issue is made more complex when we encounter situations where we
need to allow a specific set of special characters, such as the ability to
post content containing HTML tags. In this situation we can either accept only
known good data, or we can deny all known bad data. Both approaches have pros and
cons, with the specific choice of implementation being dependent on the
given application. In general however, the following should be the list of
priorities:
#. Encoding - Replace ALL control characters with known safe
alternatives
#. Positive validation (whitelist) - Only allow a specific set of values
#. Negative validation (blacklist) - Block a specified list of dangerous
values
In cases where positive validation is used, it should also be coupled
with additional sanitization. For example, when allowing certain HTML tags,
certain attributes of those tags should be removed, such as event handlers.
e.g.:
.. code:: html
<img src='someimage.jpg' onload='do_evil()'/>
Again, the preferable approach is to only allow known safe attributes,
and sanitize the content of those attribute values. If the content is not
sanitized, the following vulnerable code could occur:
.. code:: javascript

    function add_image(link) {
        document.write('<img src="' + link + '"></img>');
    }
If the preceding JavaScript function is called with the link parameter
containing the following value, the function can be exploited to execute
arbitrary code:
::
x" onerror="do_evil()
A more secure implementation of the above would be:
.. code:: javascript
function add_image(link) {
clean = link.replace(/"/g, '&quot;');
document.write('<img src="' + clean + '"'></img>'');
}
Note, this is a very specific example for illustration. A more
comprehensive approach to sanitization should be taken for larger applications.
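As an added, hedged illustration (not part of the original text), a
whitelist-based sanitizer such as the ``bleach`` library can combine positive
validation of tags with attribute stripping:

.. code:: python

    import bleach

    # Only these tags and attributes survive; everything else, including
    # event-handler attributes such as onload/onerror, is removed.
    allowed_tags = ['a', 'p', 'img']
    allowed_attrs = {'a': ['href'], 'img': ['src']}

    dirty = '<img src="someimage.jpg" onload="do_evil()"><script>alert(1)</script>'
    clean = bleach.clean(dirty, tags=allowed_tags, attributes=allowed_attrs,
                         strip=True)
    # clean now contains only the whitelisted markup,
    # e.g. '<img src="someimage.jpg">alert(1)'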
Consequences
------------
- Hijack of legitimate user sessions
- Disclosure of sensitive information
- Access to privileged services and functionality
- Delivery of malware and browser exploits from our trusted domain
References
----------
- `OWASP XSS
Guide <https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29>`__
- `OWASP Data Validation
Guide <https://www.owasp.org/index.php/Data_Validation>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Use secure channels for transmitting data
=========================================
Data in transit over networks should be protected wherever possible.
Although some data may not appear to have strong confidentiality or
integrity requirements it is best practice to secure it.
When building any application that communicates over a network we have
to assume that we do not control the network that the data travels
over. We should consider that the network may have hostile actors who will
attempt to view or change the data that we are transmitting.
Clear Example
~~~~~~~~~~~~~
OpenStack API calls often contain credentials or tokens that are very
sensitive. If they are sent in plaintext they may be modified or
stolen.
It is very important that API calls are protected from malicious third
parties viewing them or tampering with their content - even for
communications between services on an internal network.
Less Obvious Example
~~~~~~~~~~~~~~~~~~~~
Consider a server process that reports the current number of stars in
the sky and sends the data over the network to clients using a simple
webpage. There is no strong confidentiality requirement for this; the
data is not secret. However integrity is important. An attacker on the
network could alter the communications going from the server to
clients and inject malicious traffic such as browser exploits into the HTTP
stream, thus compromising vulnerable clients.
Incorrect
~~~~~~~~~
.. code:: python
cfg.StrOpt('protocol',
default='http',
help='Default protocol to use when connecting to glance.'),
Correct
~~~~~~~
.. code:: python
cfg.StrOpt('protocol',
default='https',
help='Default protocol to use when connecting to glance.'),
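As an added, hedged sketch (the endpoint URL and CA bundle path below are
hypothetical), a client should then use that protocol and verify the server
certificate when it connects:

.. code:: python

    import requests

    GLANCE_ENDPOINT = 'https://glance.example.com:9292/v2/images'

    # 'verify' may be True (use the system CA store) or the path to a CA
    # bundle; never disable verification with verify=False in production.
    response = requests.get(GLANCE_ENDPOINT,
                            verify='/etc/ssl/certs/ca-certificates.crt')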
Consequences
~~~~~~~~~~~~
- Unencrypted secrets can be stolen
- Unsecured connections can lead to system compromise
- A man-in-the-middle attacker can alter data over insecure connections
- A less knowledgeable deployer of OpenStack may inadvertently use
  insecure connections on a public network.
References
~~~~~~~~~~
- `Securing Rest
APIs <https://stormpath.com/blog/secure-your-rest-api-right-way/>`__
- `Further reasons to encrypt web
traffic <http://chapterthree.com/blog/why-your-site-should-be-using-https>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Parameterize Database Queries
=============================
Often we write code that interacts with a database using parameters
provided by the application's users. These parameters include
credentials, resource identifiers and other user-supplied data.
Care must be taken when dynamically creating database queries to
prevent them being subverted by user-supplied malicious input; this is
generally referred to as SQL injection (SQLi). SQL injection works because the
user input changes the logic of the SQL query, resulting in behaviour
that is not intended by the application developer.
The results of a successful SQL injection attack can include
disclosure of sensitive information such as user passwords, modification or
deletion of data, and gaining execution privileges, which would allow
an attacker to run arbitrary commands on the database server.
SQL injection can typically be mitigated by using some combination of
`prepared statements <https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet#Defense_Option_1:_Prepared_Statements_.28Parameterized_Queries.29>`__
, `stored procedures <https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet#Defense_Option_2:_Stored_Procedures>`__ and
`escaping <https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet#Defense_Option_3:_Escaping_All_User_Supplied_Input>`__
of user supplied input. Most secure web applications will use all three
and we have described their use below.
Code Examples
-------------
SQLAlchemy
~~~~~~~~~~
Incorrect
^^^^^^^^^
This example uses the built-in parameter substitution mechanism '%' to
insert a value into the query string; this performs an unsafe
literal insertion and does not provide any escaping.
.. code:: python

    import sqlalchemy

    connection = engine.connect()

    myvar = 'jsmith'  # our intended usage
    myvar = 'jsmith or 1=1'  # this will return all users
    myvar = 'jsmith; DROP TABLE users'  # this drops (removes) the users table

    query = "select username from users where username = %s" % myvar
    result = connection.execute(query)
    for row in result:
        print "username:", row['username']
    connection.close()
Correct
^^^^^^^
This example uses SQLAlchemy's built in parameter substitution
mechanism to safely replace the ':name' variable with a provided value.
.. code:: python

    import sqlalchemy

    connection = engine.connect()

    myvar = 'jsmith'  # our intended usage
    myvar = 'jsmith or 1=1'  # only matches this odd username

    query = "select username from users where username = :name"
    result = connection.execute(query, name=myvar)
    for row in result:
        print "username:", row['username']
    connection.close()
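A further hedged sketch (added here; it assumes a ``users`` Table object has
already been defined with SQLAlchemy Core) is to avoid hand-written SQL strings
entirely and let the expression language bind parameters for you:

.. code:: python

    from sqlalchemy import select

    # 'users' is assumed to be an existing sqlalchemy.Table with a
    # 'username' column.
    query = select([users.c.username]).where(users.c.username == myvar)
    result = connection.execute(query)
    for row in result:
        print "username:", row['username']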
MySQL
~~~~~
Incorrect
^^^^^^^^^
Without using any escaping mechanism, potentially unsafe queries can
be created.
.. code:: python

    import MySQLdb

    query = "select username from users where username = '%s'" % name
    con = MySQLdb.connect('localhost', 'testuser', 'test623', 'testdb')
    with con:
        cur = con.cursor()
        cur.execute(query)
Correct
^^^^^^^
In this example the query is created using Python's standard, unsafe
'%' operator. MySQL's 'escape\_string' method is used to perform escaping
on the query string immediately before executing it.
.. code:: python

    import MySQLdb

    query = "select username from users where username = '%s'" % name
    con = MySQLdb.connect('localhost', 'testuser', 'test623', 'testdb')
    with con:
        cur = con.cursor()
        cur.execute(MySQLdb.escape_string(query))
An alternative, but also correct, way to do this using a parameterized
query might look like the following:
.. code:: python

    import MySQLdb

    query = "select username from users where username = %s"
    con = MySQLdb.connect('localhost', 'testuser', 'test623', 'testdb')
    with con:
        cur = con.cursor()
        cur.execute(query, (name,))
This works because the logic of the query is compiled before the user
input is considered.
PostgreSQL (Psycopg2)
~~~~~~~~~~~~~~~~~~~~~
Incorrect
^^^^^^^^^
This example uses Python's unsafe default parameter substitution
mechanism to build a query string. This will not perform any escaping;
unlike the correct example below, the string is processed and passed as
a single parameter to 'execute'.
.. code:: python
import psycopg2
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
cur.execute("select username from users where username = '%s'" % name)
Correct
^^^^^^^
This example uses Psycopg2's parameter substitution mechanism to build
a query string. Despite the use of '%s' to indicate the substitution token,
it is not the same as Python's built-in string operator '%'. Note that the
value(s) are passed as separate parameters to 'execute'.
.. code:: python

    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()
    cur.execute("select username from users where username = %s", (name,))
Consequences
~~~~~~~~~~~~
- Potential for full disclosure of data
- Potential for remote code execution
References
~~~~~~~~~~
- `More information about SQL
Injection <https://www.owasp.org/index.php/SQL_Injection>`__
- `SQL Injection
Prevention <https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Protect sensitive data in config files from disclosure
======================================================
It is preferable to avoid storing sensitive information in configuration files,
but there are occasions where this is unavoidable. For those situations,
oslo.config provides a useful mechanism by which those sensitive pieces of
information can be sanitized and protected.
In order to trigger this sanitization, a 'secret=True' flag must be added to the
'cfg.StrOpt()' function when registering the oslo configuration option.
An example of this practice is provided below.
Incorrect
~~~~~~~~~
In the example below the password 'secrets!' is loaded through the cfg.StrOpt() function without being marked as secret, so it can be logged and disclosed to anyone with access to the log file (legitimate or not).
.. code:: python
cfg.StrOpt('password',
help='Password of the host.'),
Correct
~~~~~~~
A correct code example:
.. code:: python
cfg.StrOpt('password',
help='Password of the host.',
secret=True),
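A fuller registration sketch (added here as an assumption-laden illustration;
the option group and values are hypothetical, and the masking is applied by
oslo.config when option values are logged):

.. code:: python

    from oslo_config import cfg

    CONF = cfg.CONF

    host_opts = [
        cfg.StrOpt('username',
                   help='Username of the host.'),
        cfg.StrOpt('password',
                   help='Password of the host.',
                   secret=True),  # value is masked when options are logged
    ]

    CONF.register_opts(host_opts, group='host')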
Consequences
~~~~~~~~~~~~
If sensitive information options are logged without being marked as secret, that
sensitive information would be exposed whenever the logger debug flag is
activated.
Example Log Entries
~~~~~~~~~~~~~~~~~~~
::
2015-02-18 20:46:48.928 25351 DEBUG nova.openstack.common.service [-] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python2.7/dist-packages/nova/openstack/common/service.py:166
2015-02-18 20:46:48.937 25351 DEBUG nova.openstack.common.service [-] Configuration options gathered from: log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1982
2015-02-18 20:46:51.482 25351 DEBUG nova.openstack.common.service [-] host.ip = 127.0.0.1 log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:2002
2015-02-18 20:46:51.491 25351 DEBUG nova.openstack.common.service [-] host.port = 443 log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:2002
2015-02-18 20:46:51.502 25351 DEBUG nova.openstack.common.service [-] host.username = root log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:2002
2015-02-18 20:46:51.486 25351 DEBUG nova.openstack.common.service [-] host.password = secrets! log_opt_values /usr/lib/python2.7/dist-packages/oslo/config/cfg.py:2002
References
~~~~~~~~~~
- http://docs.openstack.org/developer/oslo.config/cfg.html#special-handling-instructions

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Using Rootwrap in OpenStack
===========================
Most rootwrap filters are overly permissive: they allow running commands as root
with no additional filtering of the arguments given, and little sanitization of
the input commands is done.
Additionally, maintaining command filters is difficult. We have found that
unused filters are left in place after they should have been removed.
Finally, rootwrap cannot easily express precise semantics of the use cases of
privileged commands.
To Privilege or Not to Privilege, That is the Question
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before attempting to complete a privileged task, the task
should be analyzed to see whether it can be completed in an unprivileged
fashion. For example, use service accounts with file permissions and
ownership for access instead of running a command as root.
Additionally, files can be owned by the project group instead of the root
group. These unprivileged approaches are preferred, even where they require an
architectural change, because privileged access can introduce security issues.
Do the Necessary
~~~~~~~~~~~~~~~~
Addressing rootwrap's current deficiencies would require:
- Auditing of rootwrap filters
- Creation of new filter classes to reflect the semantics of the
command
- Existing CommandFilters replaced with RegExpFilters
- Auditing the resulting complex regular expressions reliably
Instead, a better approach is multi-faceted:
- Create an abstraction between the tasks being performed and the commands
being run
- Utilize available privilege separation mechanisms currently available
in the Linux operating system
- Move away from calling external commands where possible
- Permit sanitization of input to privileged tasks
- Increase ease of use around auditing privileged task implementations
- Permit each project to have domain-specific tasks
- Share common tasks with other projects
- Place minimum burden on developers while still creating better
privilege separation
These guidelines will allow better semantic filtering of arguments being passed
for each task. For example, if we need to mount an image or device in a
specific filesystem sub-tree the caller does not need to pass the full path of
the mount point. Additionally, each external command or library will have its
own interpretation of arguments, and we need to be able to sanitize those in
task-specific and use-specific ways.
Auditing the filtering and sanitization code is necessary as well, and this
needs to be easier to do so it can be done as needed. Sharing common tasks will
allow easier auditing as well as contribute to code centralization and re-use.
Looking Forward
~~~~~~~~~~~~~~~
Within the OpenStack project, understanding how to better limit the usage of
general commands is needed. For example, DeleteLink should not be able to
delete an arbitrary path. Discussions are converging around a privileged
daemon, similar to the one recently proposed for neutron. This would give
several advantages including better SELinux and AppArmor profiles as the
necessary privileges would be clearly stated in the code. Another area this
would help is in system calls, both direct and external. Unfortunately, this
would seemingly contradict the last bullet outlined above to build a better
approach. It is, however, necessary to secure the OpenStack project.
The alternative is to accept that projects (such as nova-compute) will need to
run as root, and as such will need to be fully audited - including rootwrap
filters. SELinux and AppArmor profiles will be the responsibility of the
deployer to create and maintain.
Next Steps
~~~~~~~~~~
All calls to rootwrap, or project-specific interfaces to rootwrap, should be
migrated to interfaces in a project specific privileged task module. This will
still call rootwrap but will allow for better sanitization, identification of
what can be consolidated into shared code, and will allow easier migration to a
future solution.
The current codebase should be audited to determine if there are any specific
places that can be re-architected to avoid running tasks as root. A Bandit
plugin that looked for the use of rootwrap would allow easy identification of
the code that needs to be audited.
Better documentation around how to write safe filters for rootwrap would allow
for developers to become better educated, but would also give reviewers a
citable document to link against in Gerrit.
The neutron privilege separation daemon should be supported to build experience
and understand best practices around such an implementation.
When a final solution is determined the implementation should be
audited regularly to ensure it is being used correctly and the results are as
expected.
References
~~~~~~~~~~
- `Original rootwrap discussion from the OpenStack developer
mailinglist <http://lists.openstack.org/pipermail/openstack-dev/2015-February/055971.html>`__
- `Neutron privileged daemon
proposal <https://review.openstack.org/#/c/155631>`__
- `Bandit <https://wiki.openstack.org/wiki/Security/Projects#Bandit_Source_Code_Analyzer>`__
- (insert AppArmor profile development resources here)
- (insert SELinux context reference here)

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Use Strong and Established Cryptographic Elements
=================================================
Cryptography is a complex topic that is frequently misunderstood and is an
area of significant debate. The specifics mentioned in this guide are likely to
change as the state of the art continues to advance.
In general, you should follow some simple rules for using cryptography:
- Do not invent your own cryptography, use existing algorithms and
implementations.
- When utilizing cryptographic hashing, signing, or encryption, strong
cryptographic primitives must be used.
- Use established, reputable libraries with active maintenance in
preference to implementing your own algorithms.
- Pay careful attention to key management and distribution; this is
  generally a harder problem than algorithm selection and
  implementation.
Cryptography should be used to solve a specific problem or mitigate a specific
threat, such as ensuring the confidentiality of some data in transit over an
un-trusted network connection. Both the cryptographic algorithm and the key
strength should be appropriate for the threat you are trying to mitigate with
the encryption, and the limitations of the cryptography should be understood.
For example, if encryption is applied to a network link, it will not protect
the data when it is processed or stored at either end of that link. When
deploying cryptography, the impact to system performance and availability must
also be considered.
The Python cryptography libraries currently in OpenStack global requirements
include:
- `PyCrypto <https://www.dlitz.net/software/pycrypto/>`__,
- `pyOpenSSL <https://github.com/pyca/pyopenssl>`__,
- `cryptography <https://cryptography.io/>`__, and
- `passlib <https://pythonhosted.org/passlib/>`__.
Use of the following cryptographic elements is encouraged:
- SHA-256 is the preferred hashing algorithm.
- AES is the preferred general encryption algorithm, with 128, 192 or 256 bit
key lengths.
- HMAC is the preferred signing construction, in conjunction with a preferred
hashing algorithm.
- TLSv1.2 or TLSv1.1 are preferred for protecting data in transit between
  clients and web services, but they must be configured securely; certificate
  validity, expiry and revocation status must be checked.
While for some use cases it may seem appropriate to use a weaker cryptographic
element, the options listed above are generally advised.
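As a brief added sketch using Python's standard library (not part of the
original text; the key shown is a placeholder), hashing and signing with the
preferred primitives looks like this:

.. code:: python

    import hashlib
    import hmac

    message = b"important payload"

    # SHA-256 digest of the message.
    digest = hashlib.sha256(message).hexdigest()

    # HMAC-SHA256 signature; the key must be kept secret and managed carefully.
    secret_key = b"replace-with-a-securely-generated-key"
    signature = hmac.new(secret_key, message, hashlib.sha256).hexdigest()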
Usage of the following is strongly discouraged:
- MD5
- DES
- RC4
- SSLv2, SSLv3, TLSv1.0
Consequences
~~~~~~~~~~~~
Weak cryptographic elements may be vulnerable to various types of attack,
ultimately affecting confidentiality and integrity of the associated system or
dataset at risk.
References
~~~~~~~~~~
- `OWASP Guide to
Cryptography <https://www.owasp.org/index.php/Guide_to_Cryptography>`__
- `NSA Suite B
Cryptography <https://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml>`__
- `NIST Cryptographic
Toolkit <http://csrc.nist.gov/groups/ST/toolkit/>`__
- `Server Side
TLS <https://wiki.mozilla.org/Security/Server_Side_TLS>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Use oslo rootwrap securely
==========================
Rootwrap provides a mechanism by which a caller may execute a command line with
escalated privileges (typically as root). Special care must be taken to ensure
this code path does not permit a less privileged user to run commands as
root.
Rootwrap provides a series of filters to restrict command usage to limit the
exposure. The most commonly used filter is CommandFilter, but it also provides
the least amount of restriction on how the command is called.
Consider the following before using rootwrap:
- Evaluate whether running the command as root is necessary. In some
cases, another user may be more appropriate.
- Try to avoid using rootwrap. Look for Python native implementations
of the required functionality rather than running operating system
commands.
- If rootwrap is required, use the most restrictive filter possible
(typically RegExpFilter).
Incorrect
~~~~~~~~~
An example of insecure usage:
.. code:: python
# nova/virt/disk/vfs/localfs.py: 'chmod'
chmod: CommandFilter, chmod, root
This filter is too permissive because it allows the chmod command to be run as
root with any number of arguments, on any file.
Correct
~~~~~~~
An example of more secure usage:
.. code:: python
# nova/virt/libvirt/utils.py: 'blockdev', '--getsize64', path
# nova/virt/disk/mount/nbd.py: 'blockdev', '--flushbufs', device
blockdev: RegExpFilter, blockdev, root, blockdev, (--getsize64|--flushbufs), /dev/.*
Consequences
~~~~~~~~~~~~
- A user of the API could potentially inject input that might be executed at
  high privileges due to an ineffective rootwrap configuration.
References
~~~~~~~~~~
- https://wiki.openstack.org/wiki/Rootwrap

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Use subprocess securely
=======================
Many common tasks involve interacting with the operating system - we write a
lot of code that configures, modifies, or otherwise controls the system, and
there are a number of pitfalls that can come along with that.
Shelling out to another program is a pretty common thing to want to do. In most
cases, you will want to pass parameters to this other program. Here is a simple
function for pinging another server.
Incorrect
~~~~~~~~~
.. code:: python
    import subprocess

    def ping(myserver):
        return subprocess.check_output('ping -c 1 %s' % myserver, shell=True)

    >>> ping('8.8.8.8')
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=5.82 ms
This program just supplies a string as a command to the shell, which runs it
without inspecting it. There is no semantic separation between the command and
its input parameters: the shell cannot tell where the command is supposed to
end and where the parameters start.
If the ``myserver`` parameter is user controlled, this can be used to execute
arbitrary programs, such as rm:
.. code:: python
>>> ping('8.8.8.8; rm -rf /')
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=6.32 ms
rm: cannot remove `/bin/dbus-daemon': Permission denied
rm: cannot remove `/bin/dbus-uuidgen': Permission denied
rm: cannot remove `/bin/dbus-cleanup-sockets': Permission denied
rm: cannot remove `/bin/cgroups-mount': Permission denied
rm: cannot remove `/bin/cgroups-umount': Permission denied
...
If you choose to test this, we recommend that you pick a command that is less
destructive than 'rm -rf /', such as 'touch helloworld.txt'.
Correct
~~~~~~~
This function can be re-written safely:
.. code:: python
    import subprocess

    def ping(myserver):
        args = ['ping', '-c', '1', myserver]
        return subprocess.check_output(args, shell=False)
Rather than passing a string to subprocess, our function passes a list of
strings. The ping program receives each argument separately (even if it
contains a space), and because no shell is invoked, user-supplied input cannot
introduce additional commands. You do not have to explicitly set shell=False -
it is the default.
If we test this with the same input as before, the ping command interprets the
``myserver`` value correctly as a single argument, and complains because that
is a really weird hostname to try and ping.
.. code:: python
>>> ping('8.8.8.8; rm -rf /')
ping: unknown host 8.8.8.8; rm -rf /
This program is now much safer, even if it has to allow user-provided input.
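
If shell features such as pipes or redirection are genuinely required, one
common mitigation - not part of the original guideline, and still weaker than
passing an argument list - is to quote untrusted input before it reaches the
shell:

.. code:: python

    import subprocess

    try:
        from shlex import quote   # Python 3.3+
    except ImportError:
        from pipes import quote   # Python 2 fallback

    def ping(myserver):
        # quote() wraps the untrusted value so the shell treats it as a
        # single token rather than as additional commands
        return subprocess.check_output(
            'ping -c 1 %s' % quote(myserver), shell=True)
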
Consequences
~~~~~~~~~~~~
- If you use shell=True, your code is extremely likely to be vulnerable.
- Even if *your* code is not vulnerable, the next person who maintains it
  can easily introduce a vulnerability.
- Shell injections are arbitrary code execution - a competent attacker
  will use these to compromise the rest of your system.

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Restrict path access to prevent path traversal
==============================================
Often we will refer to a file on disk or other resource using a path. A path
traversal attack is when an attacker supplies input that gets used with our
path to access a file on the file system that we did not intend. The input
usually attempts to break out of the application's working directory and access
a file elsewhere on the file system. This category of attack can be mitigated
by restricting the scope of file system access and reducing attack surface by
using a restricted file permission profile.
Incorrect
~~~~~~~~~
A typical remote vector for path traversal in web applications might involve
serving or storing files on the file system. Consider the following example:
.. code:: python
    import os

    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route('/')
    def cat_picture():
        image_name = request.args.get('image_name')
        if not image_name:
            return 'image_name not specified', 404
        return send_file(os.path.join(os.getcwd(), image_name))

    if __name__ == '__main__':
        app.run(debug=True)
As the attacker controls the input that is used directly in constructing a
path, they are able to access any file on the system. For example, consider
what happens if an attacker makes a request like:
::
curl http://example.com/?image_name=../../../../../../../../etc/passwd
Path traversal flaws also can happen when unpacking a compressed archive of
files. An example of where this has happened within OpenStack is
`OSSA-2011-001 <http://security.openstack.org/ossa/OSSA-2011-001.html>`__. In
this case a tar file from an untrusted source could be unpacked to overwrite
files on the host operating system.
.. code:: python
    import os
    import tarfile

    def untar_image(path, filename):
        tar_file = tarfile.open(filename, 'r|gz')
        tar_file.extractall(path)
        image_file = tar_file.getnames()[0]
        tar_file.close()
        return os.path.join(path, image_file)
Correct
~~~~~~~
The following example demonstrates how you can use Python code to restrict
access to files within a specific directory. This can be used as a mechanism to
defeat path traversal.
.. code:: python
    import os
    import sys

    def is_safe_path(basedir, path, follow_symlinks=True):
        # resolves symbolic links
        if follow_symlinks:
            return os.path.realpath(path).startswith(basedir)
        return os.path.abspath(path).startswith(basedir)

    def main(args):
        for arg in args:
            if is_safe_path(os.getcwd(), arg):
                print("safe: {}".format(arg))
            else:
                print("unsafe: {}".format(arg))

    if __name__ == "__main__":
        main(sys.argv[1:])
Another approach to restricting file system access is to maintain an indirect
mapping between a unique identifier and a file path that exists on the
operating system. This prevents users from supplying malicious input to access
unintended files.
.. code:: python
    localfiles = {
        "01": "/var/www/img/001.png",
        "02": "/var/www/img/002.png",
        "03": "/var/www/img/003.png",
    }

    # Will raise an error if an invalid key is used.
    def get_file(file_id):
        return open(localfiles[file_id])
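
For instance (an illustrative interactive session, not from the original
guideline), a traversal attempt fails because the untrusted value is only ever
used as a dictionary key:

.. code:: python

    >>> get_file("01")                # returns a handle to /var/www/img/001.png
    >>> get_file("../../etc/passwd")  # raises KeyError; the path is never used
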
Consequences
~~~~~~~~~~~~
Not validating file paths allows the attacker to read or write to any file
that the application has access to. This can lead to information leakage and
can be used to pivot to other more serious attacks like remote code execution.
- `OSSA-2011-001 <http://security.openstack.org/ossa/OSSA-2011-001.html>`__
- `OSSA-2014-041 <http://security.openstack.org/ossa/OSSA-2014-041.html>`__
- `OSSA-2015-002 <http://security.openstack.org/ossa/OSSA-2015-002.html>`__
References
~~~~~~~~~~
- `CWE-22: Improper Limitation of a Pathname to a Restricted
Directory <http://cwe.mitre.org/data/definitions/22.html>`__
- `OWASP: Path
Traversal <https://www.owasp.org/index.php/Path_Traversal>`__
- `Wikipedia: Directory traversal
attack <http://en.wikipedia.org/wiki/Directory_traversal_attack>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Create, use, and remove temporary files securely
================================================
Often we want to create temporary files to save data that we can't hold in
memory or to pass to external programs that must read from a file. The obvious
way to do this is to generate a unique file name in a common system temporary
directory such as /tmp, but doing so correctly is harder than it seems. Safely
creating a temporary file or directory means following a number of rules (see
the references for more details). We should never do this ourselves but use the
correct existing library function. We also must take care to cleanup our
temporary files even in the face of errors.
If we don't take all these precautions we open ourselves up to a number of
dangerous security problems. Malicious users that can predict the file name and
write to directory containing the temporary file can effectively hijack the
temporary file by creating a symlink with the name of the temporary file before
the program creates the file itself. This allows a malicious user to supply
malicious data or cause actions by the program to affect attacker chosen files.
The references have more extensive descriptions of potential dangers.
Most programming languages provide functions to create temporary files. However,
some of these functions are unsafe and should not be used. We need to be
careful to use the safe functions.
Despite the safer temporary file creation APIs, we must still be aware of where
we are creating temporary files. Generally, temporary files should always be
created on the local filesystem. Many remote filesystems (for example, NFSv2)
do not support the open flags needed to safely create temporary files.
Python
~~~~~~
============================= ===============
Use                           Avoid
============================= ===============
tempfile.TemporaryFile        tempfile.mktemp
tempfile.NamedTemporaryFile   open
tempfile.SpooledTemporaryFile
tempfile.mkstemp
tempfile.mkdtemp
============================= ===============
tempfile.TemporaryFile should be used whenever possible. Besides creating
temporary files safely it also hides the file and cleans up the file
automatically.
Incorrect
~~~~~~~~~
Creating temporary files with predictable paths leaves them open to
time-of-check, time-of-use (TOCTOU) attacks. Given the following code snippet,
an attacker might pre-emptively place a file at the specified location.
.. code:: python
    import os
    import tempfile

    # This will most certainly put you at risk
    tmp = os.path.join(tempfile.gettempdir(), filename)
    if not os.path.exists(tmp):
        with open(tmp, "w") as file:
            file.write("defaults")
There is also an insecure function within the Python standard library that
cannot be used to create temporary files securely:
.. code:: python
import os
import tempfile
open(tempfile.mktemp(), "w")
Finally, there are many ways we might try to construct a temporary file name
ourselves, but the result is easily predictable and therefore not secure.
.. code:: python
filename = "{}/{}.tmp".format(tempfile.gettempdir(), os.getpid())
open(filename, "w")
Correct
~~~~~~~
The Python standard library provides a number of secure ways to create
temporary files and directories. The following are examples of how you can use
them.
Creating files:
.. code:: python
    import os
    import tempfile

    # Use the TemporaryFile context manager for easy clean-up
    with tempfile.TemporaryFile() as tmp:
        # Do stuff with tmp
        tmp.write('stuff')

    # Clean up a NamedTemporaryFile on your own
    # delete=True means the file will be deleted on close
    tmp = tempfile.NamedTemporaryFile(delete=True)
    try:
        # do stuff with temp
        tmp.write('stuff')
    finally:
        tmp.close()  # deletes the file

    # Handle opening the file yourself. This makes clean-up
    # more complex as you must watch out for exceptions
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, 'w') as tmp:
            # do stuff with temp file
            tmp.write('stuff')
    finally:
        os.remove(path)
We can also safely create a temporary directory and create temporary files
inside it. We need to set the umask before creating the file to ensure the
permissions on the file only allow the creator read and write access.
.. code:: python
    import os
    import tempfile

    tmpdir = tempfile.mkdtemp()
    predictable_filename = 'myfile'

    # Ensure the file is read/write by the creator only
    saved_umask = os.umask(0077)

    path = os.path.join(tmpdir, predictable_filename)
    print path
    try:
        with open(path, "w") as tmp:
            tmp.write("secrets!")
    except IOError as e:
        print 'IOError'
    else:
        os.remove(path)
    finally:
        os.umask(saved_umask)
        os.rmdir(tmpdir)
Consequences
~~~~~~~~~~~~
- The program can be tricked into performing file actions against the
  wrong file or using a malicious file instead of the expected temporary
  file.
References
~~~~~~~~~~
- `Temporary File - CERT Secure Coding
Standards <https://www.securecoding.cert.org/confluence/download/attachments/3524/07.5+Temporary+Files+v2.pdf>`__
- `FIO21-C. Do not create temporary files in shared
directories <https://www.securecoding.cert.org/confluence/display/seccode/FIO21-C.+Do+not+create+temporary+files+in+shared+directories>`__
- `FIO03-J. Remove temporary files before
termination <https://www.securecoding.cert.org/confluence/display/java/FIO03-J.+Remove+temporary+files+before+termination>`__
- `CWE-377: Insecure Temporary
File <http://cwe.mitre.org/data/definitions/377.html>`__
- `CWE-379: Creation of Temporary File in Directory with Incorrect
Permissions <http://cwe.mitre.org/data/definitions/379.html>`__
- `CWE-459: Incomplete
Cleanup <http://cwe.mitre.org/data/definitions/459.html>`__
- `Python tempfile <https://docs.python.org/2/library/tempfile.html>`__

.. :Copyright: 2015, OpenStack Foundation
.. :License: This work is licensed under a Creative Commons
Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Validate certificates on HTTPS connections to avoid man-in-the-middle attacks
=============================================================================
When developing a module that makes secure HTTPS connections, use a library
that verifies certificates. Many such libraries also provide an option to ignore
certificate verification failures. These options should be exposed to the
OpenStack deployer so that they can choose their level of risk.
Although the title of this guideline calls out HTTPS, verifying the identity of
the hosts you are connecting to applies to most protocols (SSH, LDAPS, etc).
Incorrect
~~~~~~~~~
.. code:: python
import requests
requests.get('https://www.openstack.org/', verify=False)
The example above uses verify=False to bypass the check of the certificate
received against those in the CA trust store.
It is important to note that modules such as httplib within the Python
standard library *did not* verify certificate chains until this was fixed in
the 2.7.9 release. For more specifics about the affected modules, refer to
`CVE-2014-9365 <https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-9365>`__.
Correct
~~~~~~~
.. code:: python
import requests
requests.get('https://www.openstack.org/', verify=CONF.ca_file)
The example above uses the variable CONF.ca\_file to store the location of the
CA trust store, which is used to confirm that the certificate received is from
a trusted authority.
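
As a hedged sketch (the option name mirrors the example above, but the setup is
otherwise illustrative), such a setting could be registered with oslo.config so
that the deployer controls which CA bundle is trusted:

.. code:: python

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('ca_file',
                   default=None,
                   help='Path to a CA bundle used to verify HTTPS connections'),
    ])
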
Consequences
~~~~~~~~~~~~
A man-in-the-middle (MITM) attack can allow a party to monitor, copy, and
manipulate all data transferred between the parties. The impact of this depends
on what data is sent. Customer satisfaction survey data will be less valuable
than banking passwords and account information.
- `OSSA-2014-005 <http://security.openstack.org/ossa/OSSA-2014-005.html>`__
References
~~~~~~~~~~
- `OSSN-0033 <https://wiki.openstack.org/wiki/OSSN/OSSN-0033>`__
- https://wiki.openstack.org/wiki/SecureClientConnections

See the `Security Teams`_ wiki page for the full list of security-oriented
teams you can join.
.. _Security Teams: http://wiki.openstack.org/SecurityTeams
OpenStack secure development guidelines
---------------------------------------
The OpenStack security team has collaboratively developed this set of
guidelines and best practices to help avoid common mistakes that lead
to security vulnerabilities within the OpenStack platform.
.. toctree::
   :maxdepth: 1
   :glob:

   ./guidelines/*