The docs were recommending a bad config (see related change).
Related-Change: I21e38884a2aefbb94b76c76deccd815f01db7362
Change-Id: Idca96a39f552083b55dc5a86d14ee4357777d6fe
Previously, we hardcoded a v2.0 path to use when validating requests
against Keystone. Now, the version to use may be specified in a new
auth_version config option.
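As a hedged sketch (the filter section name, entrypoint, and auth_uri value are illustrative, not taken from this change), the new option might be set like:

```ini
[filter:s3token]
use = egg:swift3#s3token
auth_uri = http://keystonehost:35357/
# New: selects the Identity API version used when validating requests
# against Keystone (previously hardcoded to v2.0)
auth_version = 2.0
```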
In the future, we may want to implement some form of version
discovery, but that will be complicated by:
* trying to determine whether the S3 extension is actually enabled for a
given version (particularly since the extensions endpoint [1] seems to
have gone away in v3), and
* needing to be able to perform this detection as part of the
client-request cycle, in case Keystone is down when the proxy is
coming up.
[1] http://developer.openstack.org/api-ref/identity/v2/index.html?expanded=list-extensions-detail
Change-Id: I3a9c702123fd1b76d45214a89ec0583caf3719f0
The swift3 middleware's log name can now also be configured, as can the
s3token middleware's.
Change-Id: I882208579e8df89ebd0033033e1e035c370b80a6
Related-Change: be22c9d2fd
Swift has removed the minimum segment size setting for multipart
uploads. To stay compatible with S3, we are re-implementing it in
swift3.
Each upload part except the last must be at least the minimum segment
size (default 5 MB, the same as S3's multipart upload minimum). When a
"complete multipart upload" request comes in, check all the parts and
return an EntityTooSmall error if any part other than the last is
smaller than the minimum segment size.
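A sketch of the re-implemented limit as configuration (the option name shown here is an assumption, not confirmed by this message):

```ini
[filter:swift3]
# Assumed option name; minimum size in bytes of every upload part
# except the last (5 MB, matching S3)
min_segment_size = 5242880
```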
Change-Id: I883b25ab3d43d330ffc60fa2c3ade7a6b5802cee
Otherwise, requests may wait forever for a response.
Now, we will wait at most 10 seconds by default, and allow operators to
adjust that to between 0 and 60 seconds.
This option closely mirrors the http_connect_timeout option in
Keystone's authtoken middleware.
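A sketch of the new setting (the option name is an assumption here, chosen to mirror Keystone's http_connect_timeout):

```ini
[filter:s3token]
# Assumed option name; seconds to wait for a response from Keystone.
# Default 10; operators may set any value from 0 to 60.
http_timeout = 10
```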
Change-Id: I43fe784551abe6de790c781d0addfa25519a1f55
A new algorithm supporting s3v4 (AWS Signature Version 4) was added.
What this patch does, in detail:
- Implements the v4-related code as a mix-in class that provides the
methods the authentication algorithms need (e.g. string_to_sign).
- Uses S3Timestamp everywhere. The old code performed a lot of
complicated timestamp translation between datetime, time, and Date
header (str) formats. This patch gathers that translation into a
"timestamp" property method, which is where the validation should
actually happen.
- Runs functional tests for both v2 and v4 authentication in the same
environment at the same time, which demonstrates that we have complete
backward compatibility and can adopt v4 without breaking anything.
*Bonus*
- Fixes some minor bugs for signed URLs (nearly-expired timestamps),
for header/query mixtures, and for a unit test case mistake.
The reason I reimplemented this from Andrey's original patch is that
the signature v4 logic becomes too complicated if we mix both
processes into the same class, because of the resulting pile of
if/elif/else statements for header handling (e.g. if 'X-Amz-Date' in
req.headers). Note that this is not his fault; AWS's algorithms are
simply getting more complicated. However, for maintainability, we need
clearer code that makes it easy to see which statements apply to v2
versus v4, to prevent buggy code from being merged into master. That is
why I took this approach. Hopefully this code fits the original
author's intention.
NOTE for operators:
- Signature V4 is supported only for keystone auth.
- To enable SigV4, set "location" in the swift3 conf file to the same
value as the "region" configured in Keystone.
- SigV2 and SigV4 can be used in the same cluster configuration.
- This has been supported since Keystone 9.0.0.0b1. (We probably need
to bump the minimum Keystone version in requirements.)
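As a minimal sketch of the operator note about "location" (the region value below is purely illustrative):

```ini
[filter:swift3]
# Must match the "region" value configured in Keystone, otherwise
# SigV4 signature validation will fail
location = us-east-1
```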
Change-Id: I386abd4ead40f55855657e354fd8ef3fd0d13aa7
Co-Authored-By: Andrey Pavlov <andrey-mp@yandex.ru>
Closes-Bug: #1411078
This patch moves (as discussed at the Newton design summit) the
s3_token middleware from keystonemiddleware to swift3. The git
history is not included based upon the agreement between the
Keystone team and the Swift3 team.
This is based on s3_token.py from openstack/keystonemiddleware@234913e
Note that the egg entrypoint has changed from "s3_token" to "s3token"
for consistency between entrypoint and recommended pipeline names.
Additionally, keystone functional tests now use the in-tree s3token
middleware.
Upgrade Impact
==============
Deployers currently using keystone for authentication should change
their s3token filter definition to use the middleware provided by swift3
rather than the one provided by keystonemiddleware. Note that
keystonemiddleware will still need to be installed, and its auth_token
middleware configured.
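A sketch of the filter-definition change for deployers (other options in the section stay as before):

```ini
# Before: s3token provided by keystonemiddleware
#[filter:s3token]
#use = egg:keystonemiddleware#s3_token

# After: s3token provided by swift3 (note the entrypoint rename
# from "s3_token" to "s3token")
[filter:s3token]
use = egg:swift3#s3token
```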
UpgradeImpact
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: I1c0e68a5276dd3dee97d7569e477c784db8ccb8a
Specifically, from Swift:
proxy-server: Adding required filter dlo to pipeline at position 7
proxy-server: Adding required filter gatekeeper to pipeline at
position 1
proxy-server: Pipeline was modified. New pipeline is "catch_errors
gatekeeper proxy-logging cache swift3 s3token authtoken keystoneauth
dlo bulk slo proxy-logging proxy-server".
proxy-server: Starting Keystone auth_token middleware
proxy-server: Configuring admin URI using auth fragments. This is
deprecated, use 'identity_uri' instead.
And from Keystone:
2016-03-18 20:18:19.844 19052 WARNING oslo_config.cfg [-] Option
"policy_file" from group "DEFAULT" is deprecated. Use option
"policy_file" from group "oslo_policy".
While we're at it, remove some unused filter config options to reduce
confusion.
Change-Id: I08e76b3bfcc4b59121b7a0d5fedf1f9629d8fb25
Per AWS's docs, Cache-Control and Expires may be set on upload [1]. On
download, the same headers would then be included in the response.
Previously, these would not be included in Swift3 responses; now they
will be.
Additionally, several headers may be set on download via query
parameters. This functionality already exists, but AWS's docs specify
that this is "a subset of the headers that Amazon S3 accepts when you
create an object" [2], so we should ensure Content-Language and
Content-Disposition are transcribed as well.
Finally, there is at least one undocumented header, X-Robots-Tag, which
AWS allows to be set. At the very least, Boto [3] knows to try.
Note that setting all of these headers already worked in Swift3, but
requires updating the allowed_headers option in the
[app:object-server] section of object-server.conf. The conf used for
functional tests has been updated accordingly.
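For illustration, the object-server change looks roughly like this (the stock portion of the list is assumed from Swift's defaults and may differ between releases):

```ini
[app:object-server]
# Extend the stock list so Cache-Control, Expires, Content-Language,
# Content-Disposition, and X-Robots-Tag survive upload
allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At,
    X-Object-Manifest, X-Static-Large-Object, Cache-Control,
    Content-Language, Expires, X-Robots-Tag
```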
[1] http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#RESTObjectPUT-requests-request-headers
[2] http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html#RESTObjectGET-requests-request-parameters
[3] https://github.com/boto/boto/commit/0c11983
Change-Id: I22001c6fd14033a9f13c36a3e05fdc678c75654f
When deleting an object created via Multipart Upload, delete both the
manifest file and the segments by adding "multipart-manifest=delete"
to the query string.
This requires an additional HEAD before each DELETE, which adds a good
bit of overhead. There is a new config option, "allow_multipart_uploads"
which operators may turn off to avoid this overhead if they know their
use-case does not require Multipart Uploads.
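For operators who know their use-case does not need Multipart Uploads, the overhead can be avoided with something like:

```ini
[filter:swift3]
# Turning this off skips the HEAD-before-DELETE overhead, but
# multipart manifests will no longer have their segments cleaned up
allow_multipart_uploads = false
```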
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ie1889750b0e6fbe48af0da40596f09ed504b9099
Closes-Bug: #1420144
Previously, the gate tests used a version of keystone that didn't have a
cap on oslo.config. This led to gate checks failing because they pulled
in a version of oslo.config that broke backward compatibility. Even if
we updated within icehouse, the last oslo.messaging does not include a
cap on oslo.utils, which *also* broke backwards compatibility.
Now, we'll install the most recently released version of keystone,
updating some testing and example config along the way.
Change-Id: Id357975413094bab751c5b8549d9201e9232af7f
Default to enforcing, as "[w]hile the US Standard region currently
allows non-compliant DNS bucket naming, we are moving to the same
DNS-compliant bucket naming convention for the US Standard region in the
coming months."
While we're at it, tighten the allowable characters:
* In DNS-compliant mode, only allow a-z, 0-9, hyphens, and periods.
* Otherwise, also allow A-Z and underscore.
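A sketch of the corresponding configuration (the option name shown is an assumption, not confirmed by this message):

```ini
[filter:swift3]
# Assumed option name. When true (the new default), bucket names may
# contain only a-z, 0-9, hyphens, and periods; when false, A-Z and
# underscore are also allowed.
dns_compliant_bucket_names = true
```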
Change-Id: Ida9b6c3260eb851b309c0f58f757e5376c66c0e0
Output access logs for the subrequests that swift3 sends to the
proxy-server.
Currently, only the access log from the client to swift3 is output.
We should also log the subrequests from swift3 to the proxy-server so
that problems can be analyzed quickly when they occur.
Change-Id: If8fc17ec93eb9ca27446f23e4d1992e9d354437b
When s3_acl and check_bucket_owner are true, check the owner of each
bucket in the bucket list for GET Service.
Currently, Swift3 returns all buckets, but it should return only the
buckets owned by the requesting user.
Change-Id: I77886e59d02aaa86659a8e15da35b23e2cbf402b
Change the default maximum value of max-parts to match the S3
specification.
Also, rename the variable from max_part to max_part_listing to clarify
its role.
Please see the following document for the List Parts specification.
- http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html
Change-Id: I51bb07ff7cd662c9dbc331b10c5f247b33c882ee
Add max_upload_part_num to the config for changing the maximum allowed
value of partNumber.
Currently, max_parts and DEFAULT_MAX_PARTS for List Parts were shared
with Upload Part. However, according to the following document, the
maximum value of partNumber in S3 is 10000.
- http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html
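A sketch of the new option (the default value shown is an assumption):

```ini
[filter:swift3]
# Maximum partNumber accepted for Upload Part; S3 caps this at 10000
max_upload_part_num = 1000
```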
Change-Id: I5be49760e00d611a2fe988a4821fc83dd029557b
`pipeline_check` is an option to disable the exception raised when the
order of middlewares in the pipeline is incorrect. The pipeline check
is enabled by default.
Disabling the pipeline check lets Swift support multiple authentication
systems. While the check is enabled with keystoneauth in the pipeline,
swift3 insists that s3token and authtoken also be in the pipeline. That
forces every swift3 request to authenticate with keystoneauth only,
with no way to fall through to third-party authentication middlewares.
Turning pipeline_check off makes that possible.
Usage:
[filter:swift3]
pipeline_check = False
Change-Id: Ic3f3fc8454f13be587a4903bae02274aa999324c
Closes-Bug: #1407594
I've fixed the TODOs for the query parameters of List Parts.
encoding-type: Requests Amazon S3 to encode the response and specifies the encoding method to use.
max-parts: Sets the maximum number of parts to return in the response body.
part-number-marker: Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.
For details, refer to the following.
http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html
Change-Id: I1a3b3a0aba188fcb928e89e78ad8859dbecca2e8
Currently, Swift3 sets and retrieves Swift ACLs for S3 ACL requests.
However, S3 ACLs are too different from Swift ACLs to implement the
behavior described in the reference below.
http://docs.aws.amazon.com/AmazonS3/latest/dev/S3_ACLs_UsingACLs.html
With this patch, Swift3 uses its own metadata
(e.g. X-Container-Sysmeta-Swift3-Acl) to store ACLs and achieve the
best S3 compatibility.
This patch only embeds the S3 ACL into the Swift metadata; the swift3
middleware does not use it for S3 requests yet. That will be addressed
later.
Change-Id: I4522910b6b3a0066f24caa98727fdeb85837e42b
This introduces a config value to limit the maximum number of objects
that can be deleted with a single Multi-Object Delete request. The
default value is the same as S3's.
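Sketched configuration (the option name is an assumption; the 1000-key limit per Multi-Object Delete request is S3's documented cap):

```ini
[filter:swift3]
# Assumed option name; cap on keys per Multi-Object Delete request
max_multi_delete_objects = 1000
```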
Change-Id: I77d404e43ba8532cbf81c680dcb82ec69b8f31cc
This patch adds basic support for the S3 Multipart Upload APIs, built
on Swift's static large objects. The s3multi middleware is no longer
necessary.
There are still many TODO items; they are commented in the source code.
Change-Id: Icda01dc31de43e6fe36144921fa1bd276b76e5ea
With this patch, users can limit the maximum number of objects returned
in the GET Bucket response. The default is 1000, the same value AWS S3
uses.
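A sketch of the setting (the option name is assumed, not stated in this message):

```ini
[filter:swift3]
# Assumed option name; maximum number of objects returned in a
# GET Bucket listing
max_bucket_listing = 1000
```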
Change-Id: I70f9aece8fa3e2d14ed02d831c560bc3d4feb172
This makes it clear what kinds of values are allowed for each Swift3
config option. A sample config file is also added to document those
parameters.
The default value of 'location' is set to 'US', like AWS S3.
Change-Id: Ifa18346208863cffb05862e062043ecdb35341b7