Retire project

Leave a README around for those who follow.

http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003186.html
http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000057.html

Change-Id: I4f15418fe906dae449dcc7484baa611cb567c640
Author: Ryan Beisner, 2018-11-01 20:49:50 -05:00
Parent: 355467c1ec
Commit: e006377f21
76 changed files with 7 additions and 7599 deletions

.gitignore

@@ -1,10 +0,0 @@
bin
builds
.idea
.coverage
.testrepository
.tox
*.sw[nop]
.idea
*.pyc
func-results.json

.testr.conf

@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ ./unit_tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

.zuul.yaml

@@ -1,4 +1,3 @@
 - project:
     templates:
-      - python-charm-jobs
-      - openstack-python35-jobs
+      - noop-jobs

LICENSE

@@ -1,203 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md (new file)

@@ -0,0 +1,6 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".


@@ -1,5 +0,0 @@
# This file is used to trigger rebuilds
# when dependencies of the charm change,
# but nothing in the charm needs to.
# simply change the uuid to something new
9c6303e6-8adf-11e6-b9bc-a7f256e1a5e3


@@ -1,4 +0,0 @@
# Requirements to build the charm
charm-tools
simplejson
flake8


@@ -1,172 +0,0 @@
# Gluster charm
GlusterFS is an open source, distributed file system capable of scaling
to several petabytes (actually, 72 brontobytes!) and handling thousands
of clients. GlusterFS clusters together storage building blocks over
Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory
resources and managing data in a single global namespace. GlusterFS
is based on a stackable user space design and can deliver exceptional
performance for diverse workloads.
# Usage
The gluster charm has defaults in the config.yaml that you will want to
change for production.
Please note that volume_name, cluster_type, and replication_level are
immutable options. Changing them post-deployment will have no effect.
This charm makes use of
[juju storage](https://jujucharms.com/docs/1.25/storage).
Please read the docs to learn about adding block storage to your units.
volume_name:
Whatever name you would like to call your gluster volume.
cluster_type:
The default here is Replicate but you can also set it to
* Arbiter
* Distribute
* Stripe
* Replicate
* Striped-Replicate
* Disperse
* Distributed-Stripe
* Distributed-Replicate
* Distributed-Striped-Replicate
* Distributed-Disperse
replication_level:
The default here is 3
If you don't know what any of these mean, don't worry about it.
The defaults are sane.
# Actions
This charm provides several actions to help manage your Gluster cluster.
1. Creating volume quotas. Example:
`juju action do --unit gluster/0 create-volume-quota volume=test usage-limit=1000MB`
2. Deleting volume quotas. Example:
`juju action do --unit gluster/0 delete-volume-quota volume=test`
3. Listing the current volume quotas. Example:
`juju action do --unit gluster/0 list-volume-quotas volume=test`
4. Setting volume options. This can be used to set several volume options at
once. Example:
`juju action do --unit gluster/0 set-volume-options volume=test performance-cache-size=1GB performance-write-behind-window-size=1MB`
# Building from Source
# Configure
Create a config.yaml file to set any options you would like to change from the defaults.
# Deploy
This charm requires juju storage. It requires at least 1 block device.
For more information please check out the
[docs](https://jujucharms.com/docs/stable/charms-storage)
Example EC2 deployment on Juju 2.1:
juju deploy cs:~xfactor973/xenial/gluster-3 -n 3 --config=~/gluster.yaml --storage brick=ebs,10G,2
To scale out the service use this command:
juju add-unit gluster
(keep adding units to keep adding more bricks and storage)
# Scale Out
Note that during a scale-out operation, any existing files in the cluster
will not be migrated to the new bricks until a gluster volume
rebalance start operation is performed. Rebalancing can slow client traffic,
so it is left to the administrator to perform at an appropriate time.
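The charm's rebalance-volume action (defined in actions.yaml below) can kick
this off; the volume name here is illustrative:
`juju action do --unit gluster/0 rebalance-volume volume=test`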
# Rolling Upgrades
The config.yaml source option is used to kick off a rolling upgrade of your
cluster. The current behavior is to install the new packages on each server
and upgrade them one by one, in a UUID-sorted order. Please note that
replica 3 is required to use rolling upgrades; with replica 2 it's possible
to have split-brain issues.
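For example, on Juju 2.x, pointing source at a newer PPA (the version shown
here is illustrative) will start the rolling upgrade:
`juju config gluster source=ppa:gluster/glusterfs-3.12`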
# Testing
For a simple test deploy 4 gluster units like so:
juju deploy gluster -n 4 --config=~/gluster.yaml --storage brick=local,10G
Once the units are started, the charm will bring them together into a
cluster and create a volume. You will know the cluster is ready when you see
a status of active.
Now you can mount the exported GlusterFS filesystem with either fuse or NFS.
Fuse has the advantage of knowing how to talk to all replicas in your Gluster
cluster, so it will not need other high availability software. NFSv3 is
point-to-point, so it will need something like virtual IPs, DNS round
robin or something else to ensure availability if a unit should die or
go away suddenly. Install the glusterfs-client package on your host.
You can reference the ./hooks/install file to show you how to install
the glusterfs packages.
On your juju host you can mount Gluster with fuse like so:
mount -t glusterfs <ip or hostname of unit>:/<volume_name> mount_point/
## High Availability
There are three ways you can achieve high availability with Gluster.
1. The first and easiest method is to simply use the glusterfs fuse mount on
all clients. This has the advantage of knowing where all servers in the
cluster are, reconnecting as needed and failing over gracefully.
2. Using virtual IP addresses with a DNS round robin A record. This solution
applies to NFSv3. This method is more complicated but has the advantage of
being usable on clients that only support NFSv3. NFSv3 is stateless and
this can be used to your advantage by floating virtual IP addresses that
fail over quickly. To use this, set the virtual_ip_addresses
config.yaml setting after reading the usage.
3. Using the
[Gluster coreutils](https://github.com/gluster/glusterfs-coreutils).
If you do not need a mount point then this is a viable option.
glusterfs-coreutils provides a set of basic utilities such as cat, cp, flock,
ls, mkdir, rm, stat and tail that are implemented specifically using the
GlusterFS API commonly known as libgfapi. These utilities can be used either
inside a gluster remote shell or as standalone commands with 'gf' prepended to
their respective base names. Example usage is shown here:
[Docs](https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Coreutils/)
## MultiTenancy
Gluster provides a few easy ways to have multiple clients in the same volume
without them knowing about one another.
1. Deep Mounting. Gluster NFS supports deep mounting, which allows the
sysadmin to create a top-level directory for each client. Then instead of
mounting the volume you mount the volume plus the directory name, and the
client only sees their files (see the example after this list). This doesn't
stop a malicious client from remounting the top level directory.
* This can be combined with [posix acl's](https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Access%20Control%20Lists/) if your tenants are not trustworthy.
* Another option is combining with [Netgroups](https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Export%20And%20Netgroup%20Authentication/).
This feature allows users to restrict access to specific IPs
(exports authentication) or a netgroup (netgroups authentication),
or a combination of both for both Gluster volumes and subdirectories within
Gluster volumes.
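A minimal deep-mount sketch (the server name, the volume name test, and the
tenant directory tenant1 are all illustrative):
`mount -t nfs server1:/test/tenant1 /mnt/tenant1`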
## Filesystem Support:
The charm currently supports several filesystems: Btrfs, Ext4, XFS and ZFS. The
default filesystem can be set in the config.yaml. The charm currently defaults
to XFS but ZFS would likely be a safe choice and enable advanced functionality
such as bcache backed gluster bricks.
**Note: The ZFS filesystem requires Ubuntu 16.04 or greater**
## Notes:
If you're using containers to test Gluster you might need to edit
/etc/default/lxc-net and read the last section, which covers making lxcbr0's
dnsmasq resolve the .lxc domain.
Now to show that your cluster can handle failure you can:
juju destroy-machine n
This will remove one of the units from your cluster and simulate a hard
failure. List your files on the mount point to show that they are
still available.
# Reference
For more information about Gluster and operation of a cluster please
see: https://gluster.readthedocs.org/en/latest/
For more immediate and interactive help please join
IRC channel #gluster on Freenode.
Gluster also has a users mailing list:
https://www.gluster.org/mailman/listinfo/gluster-users
For bugs concerning the Juju charm please file them on
[Github](https://github.com/cholcombe973/gluster-charm/tree/master)

actions.yaml

@@ -1,359 +0,0 @@
create-volume-quota:
description: |
Directory quotas in GlusterFS allow you to set limits on usage of the disk
space by volumes.
params:
volume:
type: string
description: The volume to enable this quota on
usage-limit:
type: integer
description: The byte limit of the quota for this volume.
path:
type: string
description: The path to limit the usage on. Defaults to /
default: "/"
required: [volume, usage-limit]
additionalProperties: false
delete-volume-quota:
description: |
Directory quotas in GlusterFS allow you to set limits on usage of the disk
space by volumes.
params:
volume:
type: string
description: The volume to disable this quota on
path:
type: string
description: The path to remove the limit on. Defaults to /
default: "/"
required: [volume]
additionalProperties: false
list-volume-quotas:
description: |
Directory quotas in GlusterFS allow you to set limits on usage of the disk
space by volumes.
params:
volume:
type: string
description: The volume to list quotas on
required: [volume]
additionalProperties: false
rebalance-volume:
description: |
After expanding or shrinking a volume you need to rebalance the data
among the servers. New directories created after expanding or
shrinking of the volume will be evenly distributed automatically.
For all the existing directories, the distribution can be fixed by
rebalancing the layout and/or data. This action should be run
in a maintenance window because client IO will be impacted.
params:
volume:
type: string
description: The volume to rebalance
required: [volume]
additionalProperties: false
set-bitrot-throttle:
description: |
The bitrot detection service aggression can be adjusted.
params:
volume:
type: string
description: The volume to set the option on
throttle:
type: string
enum: [lazy,normal,aggressive]
description: Adjusts the rate at which objects are verified
required: [volume, throttle]
additionalProperties: false
set-bitrot-scan-frequency:
description: |
The bitrot detection service scanning frequency can be adjusted.
params:
volume:
type: string
description: The volume to set the option on
frequency:
type: string
enum: [hourly,daily,weekly,biweekly,monthly]
description: How often the bitrot scanner should run.
required: [volume, frequency]
additionalProperties: false
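# Example invocation, following the README's action syntax (the volume name
# test is illustrative):
#   juju action do --unit gluster/0 set-bitrot-scan-frequency volume=test frequency=daily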
pause-bitrot-scan:
description: |
Pause bitrot detection
params:
volume:
type: string
description: The volume to pause scanning on
required: [volume]
additionalProperties: false
resume-bitrot-scan:
description: |
Resume bitrot detection
params:
volume:
type: string
description: The volume to resume scanning on
required: [volume]
additionalProperties: false
disable-bitrot-scan:
description: |
Disable bitrot detection
params:
volume:
type: string
description: The volume to disable scanning on
required: [volume]
additionalProperties: false
enable-bitrot-scan:
description: |
Enable bitrot detection
params:
volume:
type: string
description: The volume to enable scanning on
required: [volume]
additionalProperties: false
set-volume-options:
description: |
You can tune volume options, as needed, while the cluster is online
and available.
params:
volume:
type: string
description: The volume to set the option on
auth-allow:
type: string
description: |
IP addresses of the clients which should be allowed to access the
volume. Valid IP address which includes wild card patterns including *,
such as 192.168.1.*
auth-reject:
type: string
description: |
IP addresses of the clients which should be denied to access the volume.
Valid IP address which includes wild card patterns including *,
such as 192.168.1.*
cluster-self-heal-window-size:
type: integer
description: |
Specifies the maximum number of blocks per file on which self-heal
would happen simultaneously.
minimum: 0
maximum: 1025
cluster-data-self-heal-algorithm:
description: |
Specifies the type of self-heal. If you set the option as "full",
the entire file is copied from source to destinations. If the option
is set to "diff" the file blocks that are not in sync are copied to
destinations. Reset uses a heuristic model. If the file does not exist
on one of the subvolumes, or a zero-byte file exists (created by
entry self-heal) the entire content has to be copied anyway, so there
is no benefit from using the "diff" algorithm. If the file size is
about the same as page size, the entire file can be read and written
with a few operations, which will be faster than "diff" which has to
read checksums and then read and write.
type: string
enum: [full,diff,reset]
cluster-min-free-disk:
type: integer
description: |
Specifies the percentage of disk space that must be kept free.
Might be useful for non-uniform bricks
minimum: 0
maximum: 100
cluster-stripe-block-size:
type: integer
description: |
Specifies the size of the stripe unit that will be read from or written
to.
cluster-self-heal-daemon:
type: boolean
description: |
Allows you to turn off proactive self-heal on replicated volumes.
cluster-ensure-durability:
type: boolean
description: |
This option makes sure the data/metadata is durable across abrupt
shutdown of the brick.
diagnostics-brick-log-level:
type: string
description: |
Changes the log-level of the bricks.
enum: [debug,warning,error,none,trace,critical]
diagnostics-client-log-level:
type: string
description: |
Changes the log-level of the clients.
enum: [debug,warning,error,none,trace,critical]
diagnostics-latency-measurement:
type: boolean
description: |
Statistics related to the latency of each operation would be tracked.
diagnostics-dump-fd-stats:
type: boolean
description: |
Statistics related to file-operations would be tracked.
features-read-only:
type: boolean
description: |
Enables you to mount the entire volume as read-only for all the
clients (including NFS clients) accessing it.
features-lock-heal:
type: boolean
description: |
Enables self-healing of locks when the network disconnects.
features-quota-timeout:
type: integer
description: |
For performance reasons, quota caches the directory sizes on client.
You can set timeout indicating the maximum duration of directory sizes
in cache, from the time they are populated, during which they are
considered valid
minimum: 0
maximum: 3600
geo-replication-indexing:
type: boolean
description: |
Use this option to automatically sync the changes in the filesystem
from Master to Slave.
nfs-enable-ino32:
type: boolean
description: |
For 32-bit nfs clients or applications that do not support 64-bit
inode numbers or large files, use this option from the CLI to make
Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers.
nfs-volume-access:
type: string
description: |
Set the access type for the specified sub-volume.
enum: [read-write,read-only]
nfs-trusted-write:
type: boolean
description: |
If there is an UNSTABLE write from the client, STABLE flag will be
returned to force the client to not send a COMMIT request. In some
environments, combined with a replicated GlusterFS setup, this option
can improve write performance. This flag allows users to trust Gluster
replication logic to sync data to the disks and recover when required.
COMMIT requests if received will be handled in a default manner by
fsyncing. STABLE writes are still handled in a sync manner.
nfs-trusted-sync:
type: boolean
description: |
All writes and COMMIT requests are treated as async. This implies that
no write requests are guaranteed to be on server disks when the write
reply is received at the NFS client. Trusted sync includes
trusted-write behavior.
nfs-export-dir:
type: string
description: |
This option can be used to export specified comma separated
subdirectories in the volume. The path must be an absolute path.
Along with path allowed list of IPs/hostname can be associated with
each subdirectory. If provided connection will allowed only from these
IPs. Format: <dir>[(hostspec[hostspec...])][,...]. Where hostspec can
be an IP address, hostname or an IP range in CIDR notation. Note: Care
must be taken while configuring this option as invalid entries and/or
unreachable DNS servers can introduce unwanted delay in all the mount
calls.
nfs-export-volumes:
type: boolean
description: |
Enable/Disable exporting entire volumes; if disabled and used in
conjunction with nfs3.export-dir, only subdirectories can be set up
as exports.
nfs-rpc-auth-unix:
type: boolean
description: |
Enable/Disable the AUTH_UNIX authentication type. This option is
enabled by default for better interoperability. However, you can
disable it if required.
nfs-rpc-auth-null:
type: boolean
description: |
Enable/Disable the AUTH_NULL authentication type. It is not recommended
to change the default value for this option.
nfs-ports-insecure:
type: boolean
description: |
Allow client connections from unprivileged ports. By default only
privileged ports are allowed. This is a global setting in case insecure
ports are to be enabled for all exports using a single option.
nfs-addr-namelookup:
type: boolean
description: |
Turn-off name lookup for incoming client connections using this option.
In some setups, the name server can take too long to reply to DNS
queries resulting in timeouts of mount requests. Use this option to
turn off name lookups during address authentication. Note, turning this
off will prevent you from using hostnames in rpc-auth.addr.* filters.
nfs-register-with-portmap:
type: boolean
description: |
For systems that need to run multiple NFS servers, you need to prevent
more than one from registering with portmap service. Use this option to
turn off portmap registration for Gluster NFS.
nfs-disable:
type: boolean
description: |
Turn-off volume being exported by NFS
performance-write-behind-window-size:
type: integer
description: |
Size of the per-file write-behind buffer.
performance-io-thread-count:
type: integer
description: |
The number of threads in IO threads translator.
minimum: 0
maximum: 65
performance-flush-behind:
type: boolean
description: |
If this option is set ON, instructs write-behind translator to perform
flush in background, by returning success (or any errors, if any of
previous writes were failed) to application even before flush is sent
to backend filesystem.
performance-cache-max-file-size:
type: integer
description: |
Sets the maximum file size cached by the io-cache translator. Can use
the normal size descriptors of KB, MB, GB, TB or PB (for example, 6GB).
Maximum size uint64.
performance-cache-min-file-size:
type: integer
description: |
Sets the minimum file size cached by the io-cache translator. Values
same as "max" above
performance-cache-refresh-timeout:
type: integer
description: |
The cached data for a file will be retained till 'cache-refresh-timeout'
seconds, after which data re-validation is performed.
minimum: 0
maximum: 61
performance-cache-size:
type: integer
description: |
Size of the read cache in bytes
server-allow-insecure:
type: boolean
description: |
Allow client connections from unprivileged ports. By default only
privileged ports are allowed. This is a global setting in case insecure
ports are to be enabled for all exports using a single option.
server-grace-timeout:
type: integer
description: |
Specifies the duration for the lock state to be maintained on the server
after a network disconnection.
minimum: 10
maximum: 1800
server-statedump-path:
type: string
description: |
Location of the state dump file.
required: [volume]
additionalProperties: false


@@ -1,13 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Eleven action symlink files (one per action defined in actions.yaml) were
deleted; each contained the same single line:

@@ -1 +0,0 @@
../reactive/actions.py

config.yaml

@@ -1,173 +0,0 @@
options:
volume_name:
type: string
default: test
description: |
The name of the Gluster volume to create. This will also serve as the name
of the mount point. Example: mount -t glusterfs server1:/test
brick_devices:
type: string
default:
description: |
Space separated device list to format and set up as brick volumes.
These devices are the range of devices that will be checked for and
used across all service units, in addition to any volumes attached
via the --storage flag during deployment.
raid_stripe_width:
type: int
default:
description: |
If a raid array is being used as the block device please enter the
stripe width here so that the filesystem can be aligned properly
at creation time.
For xfs this is generally # of data disks (don't count parity disks).
Note: if not using a raid array this should be left blank.
This setting has no effect for Btrfs or Zfs.
Both raid_stripe_width and raid_stripe_unit must be specified together.
raid_stripe_unit:
type: int
default:
description: |
If a raid array is being used as the block device please enter the
stripe unit here so that the filesystem can be aligned properly at
creation time. Note: if not using a raid array this should be left blank.
For ext4 this corresponds to stride.
Also this should be a power of 2. Otherwise mkfs will fail.
Note: This setting has no effect for Btrfs or Zfs.
Both raid_stripe_width and raid_stripe_unit must be specified together.
inode_size:
type: int
default: 512
description: |
Inode size can be set at brick filesystem creation time. This is generally
helpful in cases where metadata will be split into multiple iops.
disk_elevator:
type: string
default: deadline
description: |
The disk elevator or I/O scheduler is used to determine how I/O operations
are handled by the kernel on a per-disk level. If you don't know what
this means or what it is used for, then leaving the default is a safe choice. I/O
intensive applications like Gluster usually benefit from using the deadline
scheduler over CFQ. If you have a hardware raid card or a solid state
drive then setting noop here could improve your performance.
The quick high level summary is: Deadline is primarily concerned with
latency. Noop is primarily concerned with throughput.
Options include:
cfq
deadline
noop
defragmentation_interval:
type: string
default: "@weekly"
description: |
XFS and other filesystems fragment over time and this can lead to
degraded performance for your cluster. This setting, which takes any
valid crontab period, will set up a defrag schedule. Be aware that this
can generate significant IO on the cluster, so choose a low-activity
period. Zfs does not have an online defrag option, so this option
is mainly concerned with Btrfs, Ext4 or XFS.
ephemeral_unmount:
type: string
default:
description: |
Cloud instances provide ephemeral storage which is normally mounted
on /mnt.
Setting this option to the path of the ephemeral mountpoint will force
an unmount of the corresponding device so that it can be used as a brick
storage device. This is useful for testing purposes (cloud deployment
is not a typical use case).
cluster_type:
type: string
default: Distributed-Replicate
description: |
The type of volume to set up. Distributed-Replicate is sufficient
for most use cases.
Other options include: Distribute,
Arbiter,
Stripe,
Striped-Replicate,
Disperse,
Distributed-Stripe,
Distributed-Replicate,
Distributed-Striped-Replicate,
Distributed-Disperse.
For more information about these cluster types please see here:
https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
replication_level:
type: int
default: 3
description: |
This sets how many replicas of the data should be stored in the cluster.
Generally 2 or 3 will be fine for almost all use cases. Greater than 3
could be useful for read-heavy use cases.
extra_level:
type: int
default: 1
description: |
For certain volume types
Arbiter,
Disperse,
Distributed-Disperse,
Distributed-Replicate,
Distributed-Striped-Replicate,
two values are needed: the replication level and a second number. That
second number should be specified here.
filesystem_type:
type: string
default: xfs
description: |
The filesystem type to use for each one of the bricks. Can be either
zfs, xfs, btrfs, or ext4. Note that zfs only works with Ubuntu 16.04 or
newer. General testing has shown that xfs is the most performant
filesystem.
splitbrain_policy:
type: string
default: size
description: |
Split brain means that the cluster cannot come to a consensus on which
version of a file to serve to a client.
This option sets automatic resolution of split-brains in replica volumes.
Options include: ctime|mtime|size|majority. Set this to none to disable.
Example: Setting this to "size" will pick the largest size automatically
and delete the smaller size file. "majority" picks a file with identical
mtime and size in more than half the number of bricks in the replica.
bitrot_detection:
type: boolean
default: true
description: |
Gluster has a bitrot detection daemon that runs periodically. It
calculates checksums and repairs the data that doesn't match the replicas.
source:
type: string
default: ppa:gluster/glusterfs-3.10
description: |
Optional configuration to support use of additional sources such as:
- ppa:myteam/ppa
- cloud:trusty-proposed/kilo
- http://my.archive.com/ubuntu main
The last option should be used in conjunction with the key configuration
option. NOTE: Changing this configuration value after your cluster is
deployed will initiate a rolling upgrade of the servers one by one.
key:
type: string
default:
description: |
Key ID to import to the apt keyring to support use with arbitrary source
configuration from outside of Launchpad archives or PPAs.
sysctl:
type: string
default: '{ vm.vfs_cache_pressure: 100, vm.swappiness: 1 }'
description: |
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. The default tunes vm.vfs_cache_pressure and
vm.swappiness, which are reasonable starting points for a storage
workload. Example settings for
random and small file workloads:
'{ vm.dirty_ratio: 5, vm.dirty_background_ratio: 2 }'

copyright

@@ -1,16 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0
Files: *
Copyright: 2017, Canonical Ltd.
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

layer.yaml

@@ -1,2 +0,0 @@
includes: ['layer:basic', 'interface:gluster-peer']
repo: https://git.openstack.org/openstack/charm-glusterfs


@@ -1,13 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1 +0,0 @@
__author__ = 'Chris Holcombe <chris.holcombe@canonical.com>'


@@ -1,13 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,31 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import apt
from result import Err, Ok, Result
def get_candidate_package_version(package_name: str) -> Result:
"""
Ask apt-cache for the new candidate package that is available
:param package_name: The package to check for an upgrade
:return: Ok with the new candidate version or Err in case the candidate
was not found
"""
cache = apt.Cache()
try:
version = cache[package_name].candidate.version
return Ok(version)
except KeyError:
return Err("Unable to find candidate upgrade package for: {}".format(
package_name))
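# Illustrative usage; the package name is assumed, and Result's
# is_ok()/value accessors are the same ones used elsewhere in this charm:
#   res = get_candidate_package_version("glusterfs-server")
#   if res.is_ok():
#       log("candidate version: {}".format(res.value))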


@@ -1,686 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import subprocess
import tempfile
import typing
import uuid
from enum import Enum
from typing import List, Optional, Tuple
import pyudev
from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import log, storage_get, storage_list, ERROR
from charmhelpers.core.unitdata import kv
from charmhelpers.fetch import apt_install
from pyudev import Context
from result import Err, Ok, Result
from .shellscript import parse
config = hookenv.config
class FilesystemType(Enum):
Btrfs = "btrfs"
Ext2 = "ext2"
Ext3 = "ext3"
Ext4 = "ext4"
Xfs = "xfs"
Zfs = "zfs"
Unknown = "unknown"
def __str__(self):
return "{}".format(self.value)
class MetadataProfile(Enum):
Raid0 = "raid0"
Raid1 = "raid1"
Raid5 = "raid5"
Raid6 = "raid6"
Raid10 = "raid10"
Single = "single"
Dup = "dup"
def __str__(self):
return "{}".format(self.value)
class MediaType(Enum):
SolidState = 0
Rotational = 1
Loopback = 2
Unknown = 3
class Device(object):
def __init__(self, id: Optional[uuid.UUID], name: str,
media_type: MediaType,
capacity: int, fs_type: FilesystemType) -> None:
"""
This will be used to make intelligent decisions about setting up
the device
:param id:
:param name:
:param media_type:
:param capacity:
:param fs_type:
"""
self.id = id
self.name = name
self.media_type = media_type
self.capacity = capacity
self.fs_type = fs_type
def __repr__(self):
return "{}".format(self.__dict__)
class BrickDevice(object):
def __init__(self, is_block_device: bool, initialized: bool,
mount_path: str, dev_path: str) -> None:
"""
A Gluster brick device.
:param is_block_device: bool
:param initialized: bool
:param mount_path: str to mount path
:param dev_path: os.path to dev path
"""
self.is_block_device = is_block_device
self.initialized = initialized
self.mount_path = mount_path
self.dev_path = dev_path
def __eq__(self, other):
if not isinstance(other, BrickDevice):
return False
other = typing.cast(BrickDevice, other)
return (self.is_block_device == other.is_block_device and
self.initialized == other.initialized and
self.mount_path == other.mount_path and
self.dev_path == other.dev_path)
def __str__(self):
return "is block device: {} initialized: {} " \
"mount path : {} dev path: {}".format(self.is_block_device,
self.initialized,
self.mount_path,
self.dev_path)
class AsyncInit(object):
def __init__(self, format_child: subprocess.Popen,
post_setup_commands: List[Tuple[str, List[str]]],
device: BrickDevice) -> None:
"""
The child process needed for this device initialization
This will be an async spawned Popen handle
:param format_child: subprocess handle.
:param post_setup_commands: After formatting is complete, run these
commands to set up the filesystem. ZFS needs this.
These should probably be run in sync mode
:param device: # The device we're initializing
"""
self.format_child = format_child
self.post_setup_commands = post_setup_commands
self.device = device
class Scheduler(Enum):
# Try to balance latency and throughput
Cfq = "cfq"
# Latency is most important
Deadline = "deadline"
# Throughput is most important
Noop = "noop"
def __str__(self):
return "{}".format(self.value)
class Filesystem(object):
def __init__(self) -> None:
pass
class Btrfs(Filesystem):
def __init__(self, metadata_profile: MetadataProfile, leaf_size: int,
node_size: int) -> None:
"""
Btrfs filesystem.
:param metadata_profile: MetadataProfile
:param leaf_size: int
:param node_size: int
"""
super(Btrfs, self).__init__()
self.metadata_profile = metadata_profile
self.leaf_size = leaf_size
self.node_size = node_size
def format(self, brick_device: BrickDevice) -> AsyncInit:
"""
Format a block device with a given filesystem asynchronously.
:param brick_device: BrickDevice.
:return: AsyncInit. Starts formatting immediately and gives back a
handle to access it.
"""
device = brick_device.dev_path
arg_list = ["mkfs.btrfs", "-m", str(self.metadata_profile),
"-l", self.leaf_size, "-n", str(self.node_size),
device]
# Check if mkfs.btrfs is installed
if not os.path.exists("/sbin/mkfs.btrfs"):
log("Installing btrfs utils")
apt_install(["btrfs-tools"])
return AsyncInit(format_child=subprocess.Popen(arg_list),
post_setup_commands=[],
device=brick_device)
class Ext4(Filesystem):
def __init__(self, inode_size: Optional[int],
reserved_blocks_percentage: int, stride: Optional[int],
stripe_width: Optional[int]) -> None:
"""
Ext4 filesystem.
:param inode_size: Optional[int]
:param reserved_blocks_percentage: int
:param stride: Optional[int]
:param stripe_width: Optional[int]
"""
super(Ext4, self).__init__()
if inode_size is None:
self.inode_size = 512
else:
self.inode_size = inode_size
if not reserved_blocks_percentage:
self.reserved_blocks_percentage = 0
else:
self.reserved_blocks_percentage = reserved_blocks_percentage
self.stride = stride
self.stripe_width = stripe_width
def format(self, brick_device: BrickDevice) -> AsyncInit:
"""
Format a block device with a given filesystem asynchronously.
:param brick_device: BrickDevice.
:return: AsyncInit. Starts formatting immediately and gives back a
handle to access it.
"""
device = brick_device.dev_path
arg_list = ["mkfs.ext4", "-m", str(self.reserved_blocks_percentage)]
if self.inode_size is not None:
arg_list.append("-I")
arg_list.append(str(self.inode_size))
extended_opts = []
if self.stride is not None:
    extended_opts.append("stride={}".format(self.stride))
if self.stripe_width is not None:
    extended_opts.append("stripe_width={}".format(self.stripe_width))
if extended_opts:
    # mkfs.ext4 honors only the last -E flag, so the extended
    # options are combined into one comma-separated argument
    arg_list.append("-E")
    arg_list.append(",".join(extended_opts))
arg_list.append(device)
return AsyncInit(format_child=subprocess.Popen(arg_list),
post_setup_commands=[],
device=brick_device)
class Xfs(Filesystem):
# This is optional. Boost knobs are on by default:
# http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
def __init__(self, block_size: Optional[int], inode_size: Optional[int],
stripe_size: Optional[int], stripe_width: Optional[int],
force: bool) -> None:
"""
Xfs filesystem
:param block_size: Optional[int]
:param inode_size: Optional[int]
:param stripe_size: Optional[int]
:param stripe_width: Optional[int]
:param force: bool
"""
super(Xfs, self).__init__()
self.block_size = block_size
if inode_size is None:
self.inode_size = 512
else:
self.inode_size = inode_size
self.stripe_size = stripe_size
self.stripe_width = stripe_width
self.force = force
def format(self, brick_device: BrickDevice) -> AsyncInit:
"""
Format a block device with a given filesystem asynchronously.
:param brick_device: BrickDevice.
:return: AsyncInit. Starts formatting immediately and gives back a
handle to access it.
"""
device = brick_device.dev_path
arg_list = ["/sbin/mkfs.xfs"]
if self.inode_size is not None:
arg_list.append("-i")
arg_list.append("size={}".format(self.inode_size))
if self.force:
arg_list.append("-f")
if self.block_size is not None:
block_size = self.block_size
if not power_of_2(block_size):
log("block_size {} is not a power of two. Rounding up to "
"nearest power of 2".format(block_size))
block_size = next_power_of_two(block_size)
arg_list.append("-b")
arg_list.append("size={}".format(block_size))
if self.stripe_size is not None and self.stripe_width is not None:
arg_list.append("-d")
arg_list.append("su={}".format(self.stripe_size))
arg_list.append("sw={}".format(self.stripe_width))
arg_list.append(device)
# Check if mkfs.xfs is installed
if not os.path.exists("/sbin/mkfs.xfs"):
log("Installing xfs utils")
apt_install(["xfsprogs"])
format_handle = subprocess.Popen(arg_list)
return AsyncInit(format_child=format_handle,
post_setup_commands=[],
device=brick_device)
class Zfs(Filesystem):
# The default blocksize for volumes is 8 Kbytes. Any
# power of 2 from 512 bytes to 128 Kbytes is valid.
def __init__(self, block_size: Optional[int],
compression: Optional[bool]) -> None:
"""
ZFS filesystem
:param block_size: Optional[int]
:param compression: Optional[bool]
"""
super(Zfs, self).__init__()
self.block_size = block_size
# Enable compression on the volume. Default is False
self.compression = compression
def format(self, brick_device: BrickDevice) -> AsyncInit:
"""
Format a block device with a given filesystem asynchronously.
:param brick_device: BrickDevice.
:return: AsyncInit. Starts formatting immediately and gives back a
handle to access it.
"""
device = brick_device.dev_path
# Check if zfs is installed
if not os.path.exists("/sbin/zfs"):
log("Installing zfs utils")
apt_install(["zfsutils-linux"])
base_name = os.path.basename(device)
# Mount at /mnt/dev_name
post_setup_commands = []
arg_list = ["/sbin/zpool", "create", "-f", "-m",
"/mnt/{}".format(base_name),
base_name, device]
zpool_create = subprocess.Popen(arg_list)
if self.block_size is not None:
    # If zpool creation is successful then we set these
    block_size = self.block_size
    if not power_of_2(block_size):
        log("block_size {} is not a power of two. Rounding up to "
            "nearest power of 2".format(block_size))
        block_size = next_power_of_two(block_size)
post_setup_commands.append(("/sbin/zfs",
["set",
"recordsize={}".format(block_size),
base_name]))
if self.compression is not None:
post_setup_commands.append(("/sbin/zfs", ["set", "compression=on",
base_name]))
post_setup_commands.append(("/sbin/zfs", ["set", "acltype=posixacl",
base_name]))
post_setup_commands.append(
("/sbin/zfs", ["set", "atime=off", base_name]))
return AsyncInit(format_child=zpool_create,
post_setup_commands=post_setup_commands,
device=brick_device)
# This assumes the device is formatted at this point
def mount_device(device: Device, mount_point: str) -> Result:
"""
mount a device at a mount point
:param device: Device.
:param mount_point: str. Place to mount to.
:return: Result with Ok or Err
"""
arg_list = []
if device.id:
arg_list.append("-U")
arg_list.append(str(device.id))
else:
arg_list.append("/dev/{}".format(device.name))
arg_list.append(mount_point)
cmd = ["mount"]
cmd.extend(arg_list)
try:
output = subprocess.check_output(cmd, stderr=subprocess.PIPE)
return Ok(output.decode('utf-8'))
except subprocess.CalledProcessError as e:
log("subprocess failed stdout: {} stderr: {} returncode: {}".format(
e.stdout, e.stderr, e.returncode), ERROR)
return Err(e.output)
def power_of_2(number: int) -> bool:
"""
Check whether this number is a power of 2
:param number: int
:return: True or False if it is a power of 2
"""
return ((number - 1) & number == 0) and not number == 0
def next_power_of_two(x: int) -> int:
"""
Get the next power of 2
:param x: int
:return: int. The next largest power of 2
"""
return 2 ** (x - 1).bit_length()
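# Worked example (illustrative): next_power_of_two(3000) returns 4096,
# while power_of_2(3000) is False and power_of_2(4096) is True.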
def get_size(device: pyudev.Device) -> Optional[int]:
"""
Get the size of a udev device.
:param device: pyudev.Device
:return: Optional[int] if the size is available.
"""
size = device.attributes.get('size')
if size is not None:
return int(size) * 512
return None
def get_uuid(device: pyudev.Device) -> Optional[uuid.UUID]:
"""
Get the uuid of a udev device.
:param device: pyudev.Device to check
:return: Optional[uuid.UUID] if the UUID is available.
"""
uuid_str = device.properties.get("ID_FS_UUID")
if uuid_str is not None:
return uuid.UUID(uuid_str)
return None
def get_fs_type(device: pyudev.Device) -> Optional[FilesystemType]:
"""
Get the filesystem type of a udev device.
:param device: pyudev.Device to check
:return: Optional[FilesystemType] if available
"""
fs_type_str = device.properties.get("ID_FS_TYPE")
if fs_type_str is not None:
return FilesystemType(fs_type_str)
return None
def get_media_type(device: pyudev.Device) -> MediaType:
"""
Get the media type of a udev device.
:param device: pyudev.Device to check
:return: MediaType
"""
device_sysname = device.sys_name
loop_regex = re.compile(r"loop\d+")
if loop_regex.match(device_sysname):
return MediaType.Loopback
rotation_rate = device.properties.get("ID_ATA_ROTATION_RATE_RPM")
if rotation_rate is None:
return MediaType.Unknown
elif int(rotation_rate) == 0:
return MediaType.SolidState
else:
return MediaType.Rotational
def is_block_device(device_path: str) -> Result:
"""
Check if a device is a block device
:param device_path: str path to the device to check.
:return: Result with Ok or Err
"""
context = Context()
sysname = os.path.basename(device_path)
for device in context.list_devices(subsystem='block'):
if device.sys_name == sysname:
return Ok(True)
return Err("Unable to find device with name {}".format(device_path))
def get_device_info(device_path: str) -> Result:
"""
Tries to figure out what type of device this is
:param device_path: os.path to device.
:return: Result with Ok or Err
"""
context = Context()
sysname = os.path.basename(device_path)
for device in context.list_devices(subsystem='block'):
if sysname == device.sys_name:
# Ok we're a block device
device_id = get_uuid(device)
media_type = get_media_type(device)
capacity = get_size(device)
if capacity is None:
capacity = 0
fs_type = get_fs_type(device)
return Ok(Device(id=device_id, name=sysname,
media_type=media_type, capacity=capacity,
fs_type=fs_type))
return Err("Unable to find device with name {}".format(device_path))
def device_initialized(brick_path: str) -> bool:
"""
Given a dev device path this will check to see if the device
has been formatted and mounted.
:param brick_path: os.path to the device.
"""
log("Connecting to unitdata storage")
unit_storage = kv()
log("Getting unit_info")
unit_info = unit_storage.get(brick_path)
log("{} initialized: {}".format(brick_path, unit_info))
if not unit_info:
return False
else:
return True
def scan_devices(devices: List[str]) -> Result:
"""
Check a list of devices and convert to a list of BrickDevice
:param devices: List[str] of devices to check
:return: Result with Ok or Err
"""
brick_devices = []
for brick in devices:
device_path = os.path.join(brick)
# Translate to mount location
brick_filename = os.path.basename(device_path)
log("Checking if {} is a block device".format(device_path))
block_device = is_block_device(device_path)
if block_device.is_err():
log("Skipping invalid block device: {}".format(device_path))
continue
log("Checking if {} is initialized".format(device_path))
initialized = False
if device_initialized(device_path):
initialized = True
mount_path = os.path.join(os.sep, "mnt", brick_filename)
# Record the device along with its computed initialization state
brick_devices.append(BrickDevice(
is_block_device=block_device.value,
initialized=initialized,
dev_path=device_path,
mount_path=mount_path))
return Ok(brick_devices)
def set_elevator(device_path: str,
elevator: Scheduler) -> Result:
"""
Set the default elevator for a device
:param device_path: os.path to device
:param elevator: Scheduler
:return: Result with Ok or Err
"""
log("Setting io scheduler for {} to {}".format(device_path, elevator))
device_name = os.path.basename(device_path)
f = open("/etc/rc.local", "r")
elevator_cmd = "echo {scheduler} > /sys/block/{device}/queue/" \
"scheduler".format(scheduler=elevator, device=device_name)
script = parse(f)
if script.is_ok():
for line in script.value.commands:
if device_name in line:
line = elevator_cmd
f = open("/etc/rc.local", "w", encoding="utf-8")
bytes_written = script.value.write(f)
if bytes_written.is_ok():
return Ok(bytes_written.value)
else:
return Err(bytes_written.value)
def weekly_defrag(mount: str, fs_type: FilesystemType, interval: str) -> \
Result:
"""
Setup a weekly defrag of a mount point. Filesystems tend to fragment over
time and this helps keep Gluster's mount bricks fast.
:param mount: str to mount point location of the brick
:param fs_type: FilesystemType. Some FS types don't have defrag
:param interval: str. How often to defrag in crontab format.
:return: Result with Ok or Err.
"""
log("Scheduling weekly defrag for {}".format(mount))
crontab = os.path.join(os.sep, "etc", "cron.weekly", "defrag-gluster")
defrag_command = ""
if fs_type is FilesystemType.Ext4:
defrag_command = "e4defrag"
elif fs_type is FilesystemType.Btrfs:
defrag_command = "btrfs filesystem defragment -r"
elif fs_type is FilesystemType.Xfs:
defrag_command = "xfs_fsr"
job = "{interval} {cmd} {path}".format(
interval=interval,
cmd=defrag_command,
path=mount)
existing_crontab = []
if os.path.exists(crontab):
try:
with open(crontab, 'r') as f:
buff = f.readlines()
existing_crontab = list(filter(None, buff))
except IOError as e:
return Err(e.strerror)
existing_job_position = [i for i, x in enumerate(existing_crontab) if
mount in x]
    # If we found an existing job we remove the old and insert the new job
    if existing_job_position:
        # existing_job_position[0] is an index, so delete by index rather
        # than remove-by-value
        del existing_crontab[existing_job_position[0]]
existing_crontab.append(job)
# Write back out and use a temporary file.
try:
fd, name = tempfile.mkstemp(dir=os.path.dirname(crontab), text=True)
out = os.fdopen(fd, 'w')
written_bytes = out.write("\n".join(existing_crontab))
written_bytes += out.write("\n")
out.close()
        os.rename(name, crontab)
return Ok(written_bytes)
except IOError as e:
return Err(e.strerror)
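# Hedged example (not in the original source): for an XFS brick mounted at
# the illustrative path /mnt/vdb1 with interval "@weekly", the job line
# written to /etc/cron.weekly/defrag-gluster would be:
#
#     @weekly xfs_fsr /mnt/vdb1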
def get_manual_bricks() -> Result:
"""
Get the list of bricks from the config.yaml
:return: Result with Ok or Err
"""
log("Gathering list of manually specified brick devices")
brick_list = []
manual_config_brick_devices = config("brick_devices")
if manual_config_brick_devices:
        for brick in manual_config_brick_devices.split(" "):
            # split() never yields None; guard against empty strings instead
            if brick:
                brick_list.append(brick)
log("List of manual storage brick devices: {}".format(brick_list))
bricks = scan_devices(brick_list)
if bricks.is_err():
return Err(bricks.value)
return Ok(bricks.value)
def get_juju_bricks() -> Result:
"""
Get the list of bricks from juju storage.
:return: Result with Ok or Err
"""
log("Gathering list of juju storage brick devices")
# Get juju storage devices
brick_list = []
juju_config_brick_devices = storage_list()
for brick in juju_config_brick_devices:
if brick is None:
continue
s = storage_get("location", brick)
if s is not None:
brick_list.append(s.strip())
log("List of juju storage brick devices: {}".format(brick_list))
bricks = scan_devices(brick_list)
if bricks.is_err():
return Err(bricks.value)
return Ok(bricks.value)


@ -1,248 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup ctdb for high availability NFSv3
"""
NOTE: Most of this is still Rust code and needs to be translated to Python
from ipaddress import ip_address, ip_network
from io import TextIOBase
import netifaces
from typing import List, Optional
class VirtualIp:
def __init__(self, cidr: ip_network, interface: str):
self.cidr = cidr
self.interface = interface
def __str__(self):
return "cidr: {} interface: {}".format(self.cidr, self.interface)
def render_ctdb_configuration(f: TextIOBase) -> int:
\"""
Write the ctdb configuration file out to disk
:param f:
:return:
\"""
bytes_written = 0
    # TextIOBase.write() takes str, not bytes
    bytes_written += f.write(
        "CTDB_LOGGING=file:/var/log/ctdb/ctdb.log\n")
    bytes_written += f.write(
        "CTDB_NODES=/etc/ctdb/nodes\n")
    bytes_written += f.write(
        "CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses\n")
    bytes_written += f.write(
        "CTDB_RECOVERY_LOCK=/mnt/glusterfs/.CTDB-lockfile\n")
return bytes_written
def render_ctdb_cluster_nodes(f: TextIOBase, cluster: List[ip_address]) -> int:
\"""
Create the public nodes file for ctdb cluster to find all the other peers
the cluster List should contain all nodes that are participating in
the cluster
:param f:
:param cluster:
:return:
\"""
bytes_written = 0
for node in cluster:
bytes_written += f.write("{}\n".format(node))
return bytes_written
def render_ctdb_public_addresses(f: TextIOBase, cluster_networks:
List[VirtualIp]) -> int:
\"""
Create the public addresses file for ctdb cluster to find all the virtual
ip addresses to float across the cluster.
:param f:
:param cluster_networks:
:return:
\"""
bytes_written = 0
for node in cluster_networks:
bytes_written += f.write("{}\n".format(node))
return bytes_written
\"""
#[test]
def test_render_ctdb_cluster_nodes() {
    # Test IPV4
ctdb_cluster = vec![.std.net.IpAddr.V4(Ipv4Addr.new(192, 168, 1, 2)),
.std.net.IpAddr.V4(Ipv4Addr.new(192, 168, 1, 3))]
expected_result = "192.168.1.2\n192.168.1.3\n"
buff = .std.io.Cursor.new(vec![0 24])
render_ctdb_cluster_nodes( buff, ctdb_cluster).unwrap()
result = str.from_utf8_lossy(buff.into_inner()).into_owned()
println!("test_render_ctdb_cluster_nodes: \"{\"", result)
assert_eq!(expected_result, result)
# Test IPV6
addr1 = .std.net.Ipv6Addr.from_str(
"2001:0db8:85a3:0000:0000:8a2e:0370:7334").unwrap()
addr2 = .std.net.Ipv6Addr.from_str(
"2001:cdba:0000:0000:0000:0000:3257:9652").unwrap()
ctdb_cluster = vec![.std.net.IpAddr.V6(addr1), .std.net.IpAddr.V6(addr2)]
expected_result = "2001:db8:85a3.8a2e:370:7334\n2001:cdba.3257:9652\n"
buff = .std.io.Cursor.new(vec![0 49])
render_ctdb_cluster_nodes( buff, ctdb_cluster).unwrap()
result = str.from_utf8_lossy(buff.into_inner()).into_owned()
println!("test_render_ctdb_cluster_nodes ipv6: \"{\"", result)
assert_eq!(expected_result, result)
\"""
def get_virtual_addrs(f: TextIOBase) -> List[VirtualIp]:
\"""
Return all virtual ip cidr networks that are being managed by ctdb
located at file f. /etc/ctdb/public_addresses is the usual location
:param f:
:return:
\"""
networks = []
buf = f.readlines()
for line in buf:
parts = line.split(" ")
        if len(parts) < 2:
raise ValueError("Unable to parse network: {}".format(line))
try:
addr = ip_network(parts[0])
interface = parts[1].strip()
networks.append(VirtualIp(
cidr=addr,
interface=interface,
))
except ValueError:
raise
return networks
def get_interface_for_ipv4_address(cidr_address: ip_network,
                                   interfaces: List[str]) \
        -> Optional[str]:
    # Loop through every interface
    for iface in interfaces:
        # Loop through every IPv4 address the interface is serving.
        # IPv6 addresses live under AF_INET6 and are skipped here.
        ip_addrs = netifaces.ifaddresses(iface).get(netifaces.AF_INET, [])
        for iface_ip in ip_addrs:
            if ip_address(iface_ip['addr']) in cidr_address:
                return iface
    return None
def get_interface_for_address(cidr_address: ip_network) -> Optional[str]:
    # Return the network interface that serves the subnet for this ip address
    interfaces = netifaces.interfaces()
    if cidr_address.version == 4:
        return get_interface_for_ipv4_address(cidr_address, interfaces)
    # An IPv6 lookup would need an analogous helper over AF_INET6,
    # which has not been written yet
    return None
\"""
#[test]
def test_parse_virtual_addrs() {
test_str = "10.0.0.6/24 eth2\n10.0.0.7/24 eth2".as_bytes()
c = .std.io.Cursor.new(test_str)
result = get_virtual_addrs( c).unwrap()
println!("test_parse_virtual_addrs: {:", result)
expected =
vec![VirtualIp {
cidr: IpNetwork.V4(Ipv4Network.new(Ipv4Addr(10, 0, 0, 6), 24)
.unwrap()),
interface: "eth2".to_string(),
,
VirtualIp {
cidr: IpNetwork.V4(Ipv4Network.new(Ipv4Addr(10, 0, 0, 7), 24)
.unwrap()),
interface: "eth2".to_string(),
]
assert_eq!(expected, result)
#[test]
def test_parse_virtual_addrs_v6() {
test_str = "2001:0db8:85a3:0000:0000:8a2e:0370:7334/24 \
eth2\n2001:cdba:0000:0000:0000:0000:3257:9652/24 eth2"
.as_bytes()
c = .std.io.Cursor.new(test_str)
result = get_virtual_addrs( c).unwrap()
println!("test_get_virtual_addrs: {:", result)
addr1 = Ipv6Addr.from_str("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
addr2 = Ipv6Addr.from_str("2001:cdba:0000:0000:0000:0000:3257:9652")
expected = vec![VirtualIp {
cidr: IpNetwork.V6(Ipv6Network.new(addr1, 24)),
interface: "eth2".to_string(),
,
VirtualIp {
cidr: IpNetwork.V6(Ipv6Network.new(addr2, 24)),
interface: "eth2".to_string(),
]
assert_eq!(expected, result)
\"""
def get_ctdb_nodes(f: TextIOBase) -> List[ip_address]:
\"""
Return all ctdb nodes that are contained in the file f
/etc/ctdb/nodes is the usual location
:param f:
:return:
\"""
addrs = []
buf = f.readlines()
for line in buf:
try:
            addr = ip_address(line.strip())
addrs.append(addr)
except ValueError:
raise
return addrs
\"""
#[test]
def test_get_ctdb_nodes() {
test_str = "10.0.0.1\n10.0.0.2".as_bytes()
c = .std.io.Cursor.new(test_str)
result = get_ctdb_nodes( c).unwrap()
println!("test_get_ctdb_nodes: {:", result)
addr1 = Ipv4Addr.new(10, 0, 0, 1)
addr2 = Ipv4Addr.new(10, 0, 0, 2)
expected = vec![IpAddr.V4(addr1), IpAddr.V4(addr2)]
assert_eq!(expected, result)
#[test]
def test_get_ctdb_nodes_v6() {
test_str = "2001:0db8:85a3:0000:0000:8a2e:0370:7334\n2001:cdba:"
"0000:0000:0000:0000:3257:9652"
c = .std.io.Cursor.new(test_str)
result = get_ctdb_nodes( c).unwrap()
println!("test_get_ctdb_nodes_v6: {:", result)
addr1 = Ipv6Addr.from_str("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
addr2 = Ipv6Addr.from_str("2001:cdba:0000:0000:0000:0000:3257:9652")
expected = vec![IpAddr.V6(addr1), IpAddr.V6(addr2)]
assert_eq!(expected, result)
\"""
"""


@ -1,222 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import List, Optional, Any, IO
from result import Err, Ok, Result
class FsEntry(object):
def __init__(self, fs_spec: str, mountpoint: str, vfs_type: str,
mount_options: List[str], dump: bool, fsck_order: int) -> \
None:
"""
For help with what these fields mean consult: `man fstab` on linux.
        :param fs_spec: The device identifier
:param mountpoint: the mount point
:param vfs_type: which filesystem type it is
:param mount_options: mount options
:param dump: This field is used by dump(8) to determine which
filesystems need to be dumped
:param fsck_order: This field is used by fsck(8) to determine the
order in which filesystem checks are done at boot time.
"""
self.fs_spec = fs_spec
self.mountpoint = mountpoint
self.vfs_type = vfs_type
self.mount_options = mount_options
self.dump = dump
self.fsck_order = fsck_order
def __eq__(self, item):
return (item.fs_spec == self.fs_spec and
item.mountpoint == self.mountpoint and
item.vfs_type == self.vfs_type and
item.mount_options == self.mount_options and
item.dump == self.dump and
item.fsck_order == self.fsck_order)
def __str__(self):
return "{} {} {} {} {} {}".format(self.fs_spec,
self.mountpoint,
self.vfs_type,
",".join(self.mount_options),
self.dump,
self.fsck_order)
class FsTab(object):
def __init__(self, location: Optional[str]) -> None:
"""
A class to manage an fstab
:param location: The location of the fstab. Defaults to /etc/fstab
"""
if location:
self.location = location
else:
self.location = os.path.join(os.sep, 'etc', 'fstab')
def get_entries(self) -> Result:
"""
        Takes the location to the fstab and parses it. On Linux variants
        this is usually /etc/fstab. SVR4 systems store block device and
        mount point information in /etc/vfstab. AIX stores block device
        and mount point information in /etc/filesystems.
:return: Result with Ok or Err
"""
with open(self.location, "r") as file:
entries = self.parse_entries(file)
if entries.is_err():
return Err(entries.value)
return Ok(entries.value)
def parse_entries(self, file: IO[Any]) -> Result:
"""
Parse fstab entries
:param file: TextIOWrapper file handle to the fstab
:return: Result with Ok or Err
"""
entries = []
contents = file.readlines()
for line in contents:
if line.startswith("#"):
continue
parts = line.split()
if len(parts) != 6:
continue
fsck_order = int(parts[5])
entries.append(FsEntry(
fs_spec=parts[0],
mountpoint=os.path.join(parts[1]),
vfs_type=parts[2],
mount_options=parts[3].split(","),
dump=False if parts[4] == "0" else True,
fsck_order=fsck_order))
return Ok(entries)
def save_fstab(self, entries: List[FsEntry]) -> Result:
"""
Save an fstab to disk
:param entries: List[FsEntry]
:return: Result with Ok or Err
"""
try:
with open(self.location, "w") as f:
bytes_written = 0
for entry in entries:
bytes_written += f.write(
"{spec} {mount} {vfs} {options} {dump} "
"{fsck}\n".format(spec=entry.fs_spec,
mount=entry.mountpoint,
vfs=entry.vfs_type,
options=",".join(
entry.mount_options),
dump="1" if entry.dump else "0",
fsck=entry.fsck_order))
return Ok(bytes_written)
except OSError as e:
return Err(e.strerror)
def add_entry(self, entry: FsEntry) -> Result:
"""
        Add a new entry to the fstab. Returns Ok(True) if the fstab did not
        previously contain this entry; otherwise returns Ok(False) to
        indicate that an existing entry was updated.
:param entry: FsEntry to add
:return: Result with Ok or Err
"""
entries = self.get_entries()
if entries.is_err():
return Err(entries.value)
position = [i for i, x in enumerate(entries.value) if
entry == x]
        if position:
            # position[0] is an index, so delete by index rather than
            # remove-by-value
            del entries.value[position[0]]
entries.value.append(entry)
save_result = self.save_fstab(entries.value)
if save_result.is_err():
return Err(save_result.value)
        if position:
return Ok(False)
else:
return Ok(True)
def add_entries(self, entries: List[FsEntry]) -> Result:
"""
        Bulk add new entries to the fstab.
:param entries: List[FsEntry] to add
:return: Result with Ok or Err
"""
existing_entries = self.get_entries()
if existing_entries.is_err():
return Err(existing_entries.value)
for new_entry in entries:
if new_entry in existing_entries.value:
                # The old entries contain this one, so update it in place
                position = [i for i, x in enumerate(existing_entries.value) if
                            new_entry == x]
                del existing_entries.value[position[0]]
                existing_entries.value.append(new_entry)
            else:
                existing_entries.value.append(new_entry)
self.save_fstab(existing_entries.value)
return Ok(())
def remove_entry_by_spec(self, spec: str) -> Result:
"""
Remove the fstab entry that corresponds to the spec given.
IE: first fields match
Returns true if the value was present in the fstab.
:param spec: str. fstab spec to match against and remove
:return: Result with Ok or Err
"""
entries = self.get_entries()
if entries.is_err():
return Err(entries.value)
position = [i for i, x in enumerate(entries.value) if
spec == x.fs_spec]
        if position:
del entries.value[position[0]]
self.save_fstab(entries.value)
return Ok(True)
else:
return Ok(False)
def remove_entry_by_mountpoint(self, mount: str) -> Result:
"""
Remove the fstab entry that corresponds to the mount given.
IE: first fields match
Returns true if the value was present in the fstab.
:param mount: str. fstab mount to match against and remove
:return: Result with Ok or Err
"""
entries = self.get_entries()
if entries.is_err():
return Err(entries.value)
position = [i for i, x in enumerate(entries.value) if
mount == x.mountpoint]
        if position:
del entries.value[position[0]]
self.save_fstab(entries.value)
return Ok(True)
else:
return Ok(False)
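# Hedged usage sketch (not part of the original module); the entry values
# are illustrative only:
#
#     fstab = FsTab(None)  # manage the default /etc/fstab
#     entry = FsEntry(fs_spec="UUID=1234", mountpoint="/mnt/vdb1",
#                     vfs_type="xfs", mount_options=["noatime", "inode64"],
#                     dump=False, fsck_order=2)
#     result = fstab.add_entry(entry)  # Ok(True) if newly added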


@ -1,34 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from .volume import Brick
def get_self_heal_count(brick: Brick) -> int:
"""
Find the self heal count for a given brick.
:param brick: the brick to probe for the self heal count.
:return int: the number of files that need healing
"""
brick_path = "{}/.glusterfs/indices/xattrop".format(brick.path)
# The gfids which need healing are those files which do not start
# with 'xattrop'.
count = 0
for f in os.listdir(brick_path):
if not f.startswith('xattrop'):
count += 1
return count
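# Hedged example (not in the original source): for a brick at the
# illustrative path /mnt/vdb1, every entry under
# /mnt/vdb1/.glusterfs/indices/xattrop whose name does not start with
# "xattrop" is a pending heal, so an empty index directory means
# get_self_heal_count(brick) == 0.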


@ -1,504 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import os
import subprocess
import time
from enum import Enum
from typing import List, Optional, Dict
from charmhelpers.contrib.storage.linux.ceph import filesystem_mounted
from charmhelpers.core.hookenv import (ERROR, log, INFO, config,
status_set)
from charmhelpers.core.host import umount, add_to_updatedb_prunepath
from charmhelpers.core.unitdata import kv
from result import Err, Ok, Result
from .block import (FilesystemType, Scheduler, get_device_info,
BrickDevice, Zfs, mount_device, weekly_defrag,
set_elevator, get_juju_bricks, MetadataProfile,
Xfs, Btrfs, Ext4, get_manual_bricks)
from .fstab import FsEntry, FsTab
from .peer import Peer, peer_status, State
from .volume import Brick, Volume
class FailoverDomain(Enum):
"""
"""
Host = 'host'
Rack = 'rack'
Row = 'row'
DataCenter = 'datacenter'
Room = 'room'
class Status(Enum):
"""
Need more expressive return values so we can wait on peers
"""
Created = 0
WaitForMorePeers = 1
InvalidConfig = 2
FailedToCreate = 3
FailedToStart = 4
Expanded = 5
def brick_and_server_product(peers: Dict[str, Dict],
failover: FailoverDomain = FailoverDomain.Host) \
-> List[Brick]:
"""
{
'glusterfs-0': {
'address': '192.168.10.1',
'bricks': ['/mnt/vdb1', '/mnt/vdb2'],
'location': ['host', 'rack-a', 'row-a', 'datacenter-1']
},
'glusterfs-1': {
'address': '192.168.10.2',
'bricks': ['/mnt/vdb1', '/mnt/vdb2', '/mnt/vdb3'],
'location': ['host', 'rack-a', 'row-b', 'datacenter-1']
},
}
Produce a list of Brick's that can be sent to a gluster cli volume
creation command. Tries to take into account failover domain. Defaults
to host level failover if none is found.
    :param peers: Dict[str, Dict] of peers (address, bricks, location) to
        match up into brick paths
    :param failover: FailoverDomain to use
:return: List[Brick]. Returns a list of Brick's that can be sent in
order to the gluster cli and create a volume with the correct failover
domain and replicas.
"""
_peers = copy.deepcopy(peers)
product = []
while all(len(_peers[i]['bricks']) > 0 for i in _peers.keys()):
for k in _peers.keys():
host = _peers[k]
log("host: {}".format(host))
bricks = host['bricks']
log("bricks: {}".format(bricks))
brick = Brick(
peer=Peer(uuid=None,
hostname=host['address'],
status=None),
path=bricks[0],
is_arbiter=False,
brick_uuid=None)
del bricks[0]
product.append(brick)
return product
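# Hedged worked example (not in the original source): with the two peers
# from the docstring above, the round-robin loop pairs one brick per host
# per pass, yielding bricks striped host-first:
#
#     [192.168.10.1:/mnt/vdb1, 192.168.10.2:/mnt/vdb1,
#      192.168.10.1:/mnt/vdb2, 192.168.10.2:/mnt/vdb2]
#
# glusterfs-1's third brick is left over because the while-condition stops
# as soon as any host runs out of bricks.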
def check_for_new_devices() -> Result:
"""
Scan for new hard drives to format and turn into a GlusterFS brick
    :return: Result with a list of usable brick mount paths, or Err
"""
log("Checking for new devices", INFO)
log("Checking for ephemeral unmount")
ephemeral_unmount()
brick_devices = []
# Get user configured storage devices
manual_brick_devices = get_manual_bricks()
if manual_brick_devices.is_err():
return Err(manual_brick_devices.value)
brick_devices.extend(manual_brick_devices.value)
# Get the juju storage block devices
juju_config_brick_devices = get_juju_bricks()
if juju_config_brick_devices.is_err():
return Err(juju_config_brick_devices.value)
brick_devices.extend(juju_config_brick_devices.value)
log("storage devices: {}".format(brick_devices))
format_handles = []
brick_paths = []
# Format all drives in parallel
for device in brick_devices:
if not device.initialized:
log("Calling initialize_storage for {}".format(device.dev_path))
# Spawn all format commands in the background
handle = initialize_storage(device=device)
if handle.is_err():
log("initialize storage for {} failed with err: {}".format(
device, handle.value))
return Err(Status.FailedToCreate)
format_handles.append(handle.value)
else:
# The device is already initialized, lets add it to our
# usable paths list
log("{} is already initialized".format(device.dev_path))
brick_paths.append(device.mount_path)
# Wait for all children to finish formatting their drives
for handle in format_handles:
log("format_handle: {}".format(handle))
output_result = handle.format_child.wait()
        if output_result == 0:
# success
# 1. Run any post setup commands if needed
finish_initialization(handle.device.dev_path)
brick_paths.append(handle.device.mount_path)
else:
# Failed
log("Device {} formatting failed with error: {}. Skipping".format(
handle.device.dev_path, output_result), ERROR)
log("Usable brick paths: {}".format(brick_paths))
return Ok(brick_paths)
def ephemeral_unmount() -> Result:
"""
Unmount amazon ephemeral mount points.
:return: Result with Ok or Err depending on the outcome of unmount.
"""
mountpoint = config("ephemeral_unmount")
if mountpoint is None:
return Ok(())
# Remove the entry from the fstab if it's set
fstab = FsTab(os.path.join(os.sep, "etc", "fstab"))
log("Removing ephemeral mount from fstab")
fstab.remove_entry_by_mountpoint(mountpoint)
if filesystem_mounted(mountpoint):
result = umount(mountpoint=mountpoint)
if not result:
return Err("unmount of {} failed".format(mountpoint))
# Unmounted Ok
log("{} unmounted".format(mountpoint))
return Ok(())
# Not mounted
return Ok(())
def find_new_peers(peers: Dict[str, Dict], volume_info: Volume) -> \
Dict[str, Dict]:
"""
Checks two lists of peers to see if any new ones not already serving
a brick have joined.
    :param peers: Dict[str, Dict] of peers to check.
    :param volume_info: Volume. Existing volume info
    :return: Dict[str, Dict] of any peers not already serving a brick
        that can now be used.
"""
new_peers = {}
for peer in peers:
# If this peer is already in the volume, skip it
existing_peer = any(
brick.peer.hostname == peers[peer]['address'] for brick in
volume_info.bricks)
if not existing_peer:
# Try to match up by hostname
new_peers[peer] = peers[peer]
return new_peers
def finish_initialization(device_path: str) -> Result:
"""
Once devices have been formatted this is called to run fstab entry setup,
updatedb exclusion, weekly defrags, etc.
:param device_path: os.path to device
:return: Result with Ok or Err
"""
filesystem_type = FilesystemType(config("filesystem_type"))
defrag_interval = config("defragmentation_interval")
disk_elevator = config("disk_elevator")
scheduler = Scheduler(disk_elevator)
mount_path = os.path.join(os.sep, 'mnt', os.path.basename(device_path))
unit_storage = kv()
device_info = get_device_info(device_path)
if device_info.is_err():
return Err(device_info.value)
log("device_info: {}".format(device_info.value), INFO)
# Zfs automatically handles mounting the device
    if filesystem_type is not FilesystemType.Zfs:
log("Mounting block device {} at {}".format(device_path, mount_path),
INFO)
status_set(workload_state="maintenance",
message="Mounting block device {} at {}".format(
device_path, mount_path))
if not os.path.exists(mount_path):
log("Creating mount directory: {}".format(mount_path), INFO)
os.makedirs(mount_path)
mount_result = mount_device(device_info.value, mount_path)
if mount_result.is_err():
log("mount failed {}".format(mount_result.value), ERROR)
status_set(workload_state="maintenance", message="")
fstab_entry = FsEntry(
fs_spec="UUID={}".format(device_info.value.id),
mountpoint=mount_path,
vfs_type=device_info.value.fs_type,
mount_options=["noatime", "inode64"],
dump=False,
fsck_order=2)
log("Adding {} to fstab".format(fstab_entry))
    fstab = FsTab(os.path.join(os.sep, "etc", "fstab"))
fstab.add_entry(fstab_entry)
unit_storage.set(device_path, True)
# Actually save the data. unit_storage.set does not save the value
unit_storage.flush()
log("Removing mount path from updatedb {}".format(mount_path), INFO)
add_to_updatedb_prunepath(mount_path)
weekly_defrag(mount_path, filesystem_type, defrag_interval)
set_elevator(device_path, scheduler)
return Ok(())
def get_brick_list(peers: Dict[str, Dict], volume: Optional[Volume]) -> Result:
"""
This function will take into account the replication level and
try its hardest to produce a list of bricks that satisfy this:
1. Are not already in the volume
2. Sufficient hosts to satisfy replication level
    3. Striped across the hosts
If insufficient hosts exist to satisfy this replication level this will
return no new bricks to add
Default to 3 replicas if the parsing fails
    :param peers: Dict[str, Dict] of peers and their brick information
    :param volume: Optional[Volume]. The existing volume, if any
    :return: Result with a List[Brick] on success or a Status on failure
"""
# brick_devices = []
replica_config = config("replication_level")
replicas = 3
try:
replicas = int(replica_config)
except ValueError:
# Use default
pass
if volume is None:
log("Volume is none")
# number of bricks % replicas == 0 then we're ok to proceed
if len(peers) < replicas:
# Not enough peers to replicate across
log("Not enough peers to satisfy the replication level for the Gluster \
volume. Waiting for more peers to join.")
return Err(Status.WaitForMorePeers)
elif len(peers) == replicas:
# Case 1: A perfect marriage of peers and number of replicas
log("Number of peers and number of replicas match")
log("{}".format(peers))
return Ok(brick_and_server_product(peers))
else:
# Case 2: We have a mismatch of replicas and hosts
# Take as many as we can and leave the rest for a later time
count = len(peers) - (len(peers) % replicas)
new_peers = copy.deepcopy(peers)
# Drop these peers off the end of the list
to_remove = list(new_peers.keys())[count:]
for key in to_remove:
del new_peers[key]
log("Too many new peers. Dropping {} peers off the list".format(
count))
return Ok(brick_and_server_product(new_peers))
else:
# Existing volume. Build a differential list.
log("Existing volume. Building differential brick list ")
new_peers = find_new_peers(peers, volume)
if len(new_peers) < replicas:
log("New peers found are less than needed by the replica count")
return Err(Status.WaitForMorePeers)
elif len(new_peers) == replicas:
log("New peers and number of replicas match")
return Ok(brick_and_server_product(new_peers))
else:
            count = len(new_peers) - (len(new_peers) % replicas)
            log("Too many new peers. Dropping {} peers off the list".format(
                len(new_peers) - count))
            # Trim the differential peer list, not the full peer list
            new_peers = copy.deepcopy(new_peers)
            # Drop the excess peers off the end of the list
            to_remove = list(new_peers.keys())[count:]
            for key in to_remove:
                del new_peers[key]
return Ok(brick_and_server_product(new_peers))
def initialize_storage(device: BrickDevice) -> Result:
"""
Format and mount block devices to ready them for consumption by Gluster
    Return an initialization handle for the spawned format command
:param device: BrickDevice. The device to format.
:return: Result with Ok or Err.
"""
filesystem_type = FilesystemType(config("filesystem_type"))
log("filesystem_type selected: {}".format(filesystem_type))
# Custom params
stripe_width = config("raid_stripe_width")
stripe_size = config("raid_stripe_size")
inode_size = config("inode_size")
# Format with the default XFS unless told otherwise
if filesystem_type is FilesystemType.Xfs:
log("Formatting block device with XFS: {}".format(device.dev_path),
INFO)
status_set(workload_state="maintenance",
message="Formatting block device with XFS: {}".format(
device.dev_path))
xfs = Xfs(
block_size=None,
force=True,
inode_size=inode_size,
stripe_size=stripe_size,
stripe_width=stripe_width,
)
return Ok(xfs.format(brick_device=device))
elif filesystem_type is FilesystemType.Ext4:
log("Formatting block device with Ext4: {}".format(device.dev_path),
INFO)
status_set(workload_state="maintenance",
message="Formatting block device with Ext4: {}".format(
device.dev_path))
ext4 = Ext4(
inode_size=inode_size,
reserved_blocks_percentage=0,
stride=stripe_size,
stripe_width=stripe_width,
)
return Ok(ext4.format(brick_device=device))
elif filesystem_type is FilesystemType.Btrfs:
log("Formatting block device with Btrfs: {}".format(device.dev_path),
INFO)
status_set(workload_state="maintenance",
message="Formatting block device with Btrfs: {}".format(
device.dev_path))
btrfs = Btrfs(
leaf_size=0,
node_size=0,
metadata_profile=MetadataProfile.Single)
return Ok(btrfs.format(brick_device=device))
elif filesystem_type is FilesystemType.Zfs:
log("Formatting block device with ZFS: {:}".format(device.dev_path),
INFO)
status_set(workload_state="maintenance",
message="Formatting block device with ZFS: {:}".format(
device.dev_path))
zfs = Zfs(
compression=None,
block_size=None,
)
return Ok(zfs.format(brick_device=device))
else:
log(
"Unknown filesystem. Defaulting to formatting with XFS: {}".format(
device.dev_path),
INFO)
status_set(workload_state="maintenance",
message="Formatting block device with XFS: {}".format(
device.dev_path))
xfs = Xfs(
block_size=None,
force=True,
inode_size=inode_size,
stripe_width=stripe_width,
stripe_size=stripe_size)
return Ok(xfs.format(brick_device=device))
def run_command(command: str, arg_list: List[str], script_mode: bool) -> \
str:
"""
:param command: str. The command to run.
:param arg_list: List[str]. The argument list
    :param script_mode: bool. Should the command be run in script mode.
:return: str. This returns stdout
:raises: subprocess.CalledProcessError in the event of a failure
"""
cmd = [command]
if script_mode:
cmd.append("--mode=script")
for arg in arg_list:
cmd.append(arg)
try:
return subprocess.check_output(cmd, stderr=subprocess.PIPE).decode(
'utf-8')
except subprocess.CalledProcessError as e:
log("subprocess failed stdout: {} stderr: {} returncode: {}".format(
e.stdout, e.stderr, e.returncode), ERROR)
raise
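# Hedged example (not in the original source): listing volumes through the
# gluster CLI in script mode, which suppresses interactive prompts:
#
#     out = run_command("gluster", ["volume", "list"], script_mode=True)
#     # runs: gluster --mode=script volume list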
def translate_to_bytes(value: str) -> float:
"""
This is a helper function to convert values such as 1PB into a bytes.
:param value: str. Size representation to be parsed
:return: float. Value in bytes
"""
k = 1024
sizes = [
"KB",
"MB",
"GB",
"TB",
"PB"
]
if value.endswith("Bytes"):
return float(value.rstrip("Bytes"))
else:
for power, size in enumerate(sizes, 1):
if value.endswith(size):
return float(value.rstrip(size)) * (k ** power)
raise ValueError("Cannot translate value")
def peers_are_ready(peer_list: List[Peer]) -> bool:
"""
Checks to see if all peers are ready. Peers go through a number of states
before they are ready to be added to a volume.
    :param peer_list: List[Peer] to check
:return: True or False if all peers are ready
"""
log("Checking if peers are ready")
return all(peer.status == State.Connected for peer in peer_list)
def wait_for_peers() -> Result:
"""
    HDDs are so slow that sometimes the peers take a long time to join the
    cluster. This will loop and wait for them (i.e. spinlock).
:return: Result with Err if waited too long for the peers to become ready.
"""
log("Waiting for all peers to enter the Peer in Cluster status")
status_set(workload_state="maintenance",
message="Waiting for all peers to enter the "
"\"Peer in Cluster status\"")
iterations = 0
while not peers_are_ready(peer_status()):
time.sleep(1)
iterations += 1
if iterations > 600:
return Err("Gluster peers failed to connect after 10 minutes")
return Ok(())


@ -1,33 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from charmhelpers.core.hookenv import add_metric
import os.path
def collect_metrics():
"""
Gather metrics about gluster mount and log them to juju metrics
:rtype: object
"""
p = os.path.join(os.sep, "mnt", "glusterfs")
mount_stats = os.statvfs(p)
# block size * total blocks
total_space = mount_stats.f_blocks * mount_stats.f_bsize
free_space = mount_stats.f_bfree * mount_stats.f_bsize
    used_space = total_space - free_space
gb_used = used_space / 1024 / 1024 / 1024
# log!(format!("Collecting metric gb-used {}", gb_used), Info)
add_metric("gb-used", "{}".format(gb_used))


@ -1,216 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
from enum import Enum
from typing import Optional, List
from charmhelpers.core.hookenv import log
from gluster.cli import peer as gpeer, GlusterCmdException
from gluster.cli.parsers import GlusterCmdOutputParseError
from ..utils.utils import resolve_to_ip
# An enum representing the possible States that a Peer can be in
class State(Enum):
Connected = "connected"
Disconnected = "disconnected"
Unknown = ""
EstablishingConnection = "establishing connection"
ProbeSentToPeer = "probe sent to peer"
ProbeReceivedFromPeer = "probe received from peer"
PeerInCluster = "peer in cluster"
AcceptedPeerRequest = "accepted peer in cluster"
SentAndReceivedPeerRequest = "sent and received peer request"
PeerRejected = "peer rejected"
PeerDetachInProgress = "peer detach in progress"
ConnectedToPeer = "connected to peer"
PeerIsConnectedAndAccepted = "peer is connected and accepted"
InvalidState = "invalid state"
def __str__(self) -> str:
return "{}".format(self.value)
@staticmethod
def from_str(string: str):
"""Parses the string to return the appropriate State instance.
The python3 enum class already has some attempt to find the correct
object when the State class is constructed with a value, but may
not be obvious what's going on. Parsing a string allows us to
create a more rich version of data stored in the enum (e.g. a tuple)
but also allows our own custom parsing.
:param string: the string to parse
:return State: the corresponding State object
:raises ValueError: if the string cannot parse to a State object.
"""
if string:
for state in State:
if state.value.lower() == string.lower():
return state
raise ValueError("Unable to find State for string: {}".format(string))
"""
@staticmethod
def from_str(s: str):
s = s.lower()
if s == 'connected':
return State.Connected
elif s == 'disconnected':
return State.Disconnected
elif s == 'establishing connection':
return State.EstablishingConnection
elif s == 'probe sent to peer':
return State.ProbeSentToPeer
elif s == 'probe received from peer':
return State.ProbeReceivedFromPeer
elif s == 'peer in cluster':
return State.PeerInCluster
elif s == 'accepted peer in cluster':
return State.AcceptedPeerRequest
elif s == "sent and received peer request":
return State.SentAndReceivedPeerRequest
elif s == "peer rejected":
return State.PeerRejected
elif s == "peer detach in progress":
return State.PeerDetachInProgress
elif s == "connected to peer":
return State.ConnectedToPeer
elif s == "peer is connected and accepted":
return State.PeerIsConnectedAndAccepted
elif s == "invalid state":
return State.InvalidState
else:
return None
"""
class Peer(object):
def __init__(self, uuid: uuid.UUID, hostname: str,
status: Optional[State]) -> None:
"""
A Gluster Peer. A Peer is roughly equivalent to a server in Gluster.
:param uuid: uuid.UUID. Unique identifier of this peer
:param hostname: str. ip address of the peer
:param status: Optional[State] current State of the peer
"""
self.uuid = uuid
self.hostname = hostname
self.status = status
def __eq__(self, other):
return self.uuid == other.uuid
def __str__(self):
return "UUID: {} Hostname: {} Status: {}".format(
self.uuid,
self.hostname,
self.status)
def get_peer(hostname: str) -> Optional[Peer]:
"""
This will query the Gluster peer list and return a Peer class for the peer
:param hostname: str. ip address of the peer to get
:return Peer or None in case of not found
"""
peer_pool = peer_list()
for node in peer_pool:
if node.hostname == hostname:
return node
return None
def peer_status() -> List[Peer]:
"""
Runs gluster peer status and returns the status of all the peers
in the cluster
    :raises GlusterCmdOutputParseError: if the command output cannot be parsed
:return: List of Peers
"""
try:
status = gpeer.status()
peers = []
for peer in status:
p = Peer(uuid=uuid.UUID(peer['uuid']),
status=State.from_str(peer['connected']),
hostname=peer['hostname'])
peers.append(p)
return peers
except GlusterCmdOutputParseError:
raise
def peer_list() -> List[Peer]:
"""
List all peers including localhost
Runs gluster pool list and returns a List[Peer] representing all the peers
in the cluster
This also returns information for the localhost as a Peer. peer_status()
does not
    :raises GlusterCmdOutputParseError: if the command output cannot be parsed
"""
try:
parsed_peers = []
pool_list = gpeer.pool()
for value in pool_list:
ip_addr = resolve_to_ip(value['hostname'])
if ip_addr.is_err():
log("Failed to resolve {} to ip address, skipping peer".format(
value['hostname']))
continue
parsed_peers.append(
Peer(
hostname=ip_addr.value,
uuid=uuid.UUID(value['uuid']),
status=State.from_str(value['connected'])))
return parsed_peers
except GlusterCmdOutputParseError:
raise
def peer_probe(hostname: str) -> None:
"""
Probe a peer and prevent double probing
Adds a new peer to the cluster by hostname or ip address
:param hostname: String. Add a host to the cluster
:return:
"""
try:
current_peers = peer_list()
for current_peer in current_peers:
if current_peer.hostname == hostname:
# Bail instead of double probing
return
except GlusterCmdOutputParseError:
raise
try:
gpeer.probe(hostname)
except GlusterCmdException:
raise
def peer_remove(hostname: str) -> None:
"""
Removes a peer from the cluster by hostname or ip address
:param hostname: String. Hostname to remove from the cluster
:return:
"""
try:
gpeer.detach(hostname)
except GlusterCmdException:
raise


@ -1,95 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import os
from io import TextIOBase
from charms.reactive import when_file_changed, when_not, set_state
from charmhelpers.core.hookenv import config, log, status_set
from charmhelpers.core.host import service_start
from charmhelpers.fetch import apt_install
def render_samba_configuration(f: TextIOBase, volume_name: str) -> int:
"""
Write the samba configuration file out to disk
    :param f: TextIOBase handle to the samba config file
:param volume_name: str
:return: int of bytes written
"""
bytes_written = 0
bytes_written += f.write("[{}]\n".format(volume_name))
bytes_written += f.write(b"path = /mnt/glusterfs\n"
b"read only = no\n"
b"guest ok = yes\n"
b"kernel share modes = no\n"
b"kernel oplocks = no\n"
b"map archive = no\n"
b"map hidden = no\n"
b"map read only = no\n"
b"map system = no\n"
b"store dos attributes = yes\n")
return bytes_written
@when_file_changed('/etc/samba/smb.conf')
def samba_config_changed() -> bool:
"""
Checks whether a samba config file has changed or not.
:param volume_name: str.
:return: True or False
"""
volume_name = config("volume_name")
samba_path = os.path.join(os.sep, 'etc', 'samba', 'smb.conf')
if os.path.exists(samba_path):
# Lets check if the smb.conf matches what we're going to write.
# If so then it was already setup and there's nothing to do
with open(samba_path) as existing_config:
old_config = existing_config.readlines()
            new_config = io.StringIO()
            render_samba_configuration(new_config, volume_name)
            # StringIO's cursor sits at EOF after writing, so compare via
            # getvalue() rather than iterating the handle
            if new_config.getvalue() == "".join(old_config):
# configs are identical
return False
else:
return True
# Config doesn't exist.
return True
@when_not('samba.installed')
def setup_samba():
"""
Installs and starts up samba
:param volume_name: str. Gluster volume to start samba on
"""
volume_name = config("volume_name")
cifs_config = config("cifs")
if cifs_config is None:
# Samba isn't enabled
return
    if not samba_config_changed():
        # Samba is already set up; nothing to reinstall
return
status_set("Maintenance", "Installing Samba")
apt_install(["samba"])
status_set("Maintenance", "Configuring Samba")
with open(os.path.join(os.sep, 'etc', 'samba', 'smb.conf')) as samba_conf:
bytes_written = render_samba_configuration(samba_conf, volume_name)
log("Wrote {} bytes to /etc/samba/smb.conf", bytes_written)
log("Starting Samba service")
status_set("Maintenance", "Starting Samba")
service_start("smbd")
set_state('samba.installed')


@ -1,74 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from io import TextIOBase
from typing import List, Any, IO
from result import Ok, Result
__author__ = 'Chris Holcombe <chris.holcombe@canonical.com>'
class ShellScript(object):
def __init__(self, interpreter: str, comments: List[str],
commands: List[str]) -> None:
"""
        A very basic representation of a shell script: an interpreter,
        some comments, and a list of commands for the interpreter to run.
Create a new ShellScript object
:param interpreter: str The interpreter to use ie /bin/bash etc
:param comments: List[str] of comments
:param commands: List[str] of commands
"""
self.interpreter = interpreter
# Any comments here will be joined with newlines when written back out
self.comments = comments
# Any commands here will be joined with newlines when written back out
self.commands = commands
def write(self, f: TextIOBase) -> Result:
# Write the run control class back out to a file
bytes_written = 0
bytes_written += f.write("{}\n".format(self.interpreter))
bytes_written += f.write("\n".join(self.comments))
bytes_written += f.write("\n")
bytes_written += f.write("\n".join(self.commands))
bytes_written += f.write("\n")
return Ok(bytes_written)
def parse(f: IO[Any]) -> Result:
"""
Parse a shellscript and return a ShellScript
:param f: TextIOBase handle to the shellscript file
:return: Result with Ok or Err
"""
comments = []
commands = []
interpreter = ""
buf = f.readlines()
for line in buf:
trimmed = line.strip()
if trimmed.startswith("#!"):
interpreter = trimmed
elif trimmed.startswith("#"):
comments.append(str(trimmed))
else:
# Skip blank lines
if trimmed:
commands.append(str(trimmed))
return Ok(ShellScript(interpreter=interpreter,
comments=comments,
commands=commands))
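# Hedged round-trip sketch (not part of the original module):
#
#     import io
#     script = parse(io.StringIO("#!/bin/sh\n# setup\necho hi\n")).value
#     out = io.StringIO()
#     script.write(out)  # Ok(26)
#     # out.getvalue() == "#!/bin/sh\n# setup\necho hi\n"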

File diff suppressed because it is too large


@ -1 +0,0 @@
__author__ = 'Chris Holcombe <chris.holcombe@canonical.com>'


@ -1,55 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ipaddress import ip_address
import xml.etree.ElementTree as etree
from charmhelpers.contrib.openstack.utils import get_host_ip
from result import Err, Ok, Result
__author__ = 'Chris Holcombe <chris.holcombe@canonical.com>'
def check_return_code(tree: etree.Element) -> Result:
"""
Helper function to make processing xml easier. This checks
to see if gluster returned an error code
:param tree: xml tree
:return: Result with Ok or Err
"""
return_code = 0
err_string = ""
for child in tree:
if child.tag == 'opRet':
return_code = int(child.text)
elif child.tag == 'opErrstr':
err_string = child.text
if return_code != 0:
return Err(err_string)
return Ok()
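# Hedged example (not in the original source): gluster wraps its XML status
# in opRet/opErrstr elements, e.g.
#
#     tree = etree.fromstring(
#         "<cliOutput><opRet>-1</opRet>"
#         "<opErrstr>volume not found</opErrstr></cliOutput>")
#     check_return_code(tree).is_err()  # True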
def resolve_to_ip(address: str) -> Result:
"""
    Resolves a DNS address to an ip address. Relies on dig.
    :param address: String. Hostname to resolve to an ip address
:return: result
"""
ip_addr = get_host_ip(hostname=address)
try:
parsed = ip_address(address=ip_addr)
return Ok(parsed)
except ValueError:
return Err("failed to parse ip address: {}".format(ip_addr))


@ -1,34 +0,0 @@
name: glusterfs
summary: Cluster Filesystem capable of scaling to several peta-bytes
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
series:
- xenial
- yakkety
- zesty
tags:
- file-servers
- openstack
- storage
description: |
GlusterFS is an open source, distributed file system capable of scaling
to several petabytes (actually, 72 brontobytes!) and handling thousands
of clients. GlusterFS clusters together storage building blocks over
Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory
resources and managing data in a single global namespace. GlusterFS
is based on a stackable user space design and can deliver exceptional
performance for diverse workloads.
extra-bindings:
public:
peers:
server:
interface: gluster-peer
provides:
fuse:
interface: gluster-fuse
nfs:
interface: gluster-nfs
storage:
brick:
type: block
multiple:
range: 0-


@ -1,4 +0,0 @@
metrics:
gb-used:
type: gauge
description: Total number of GB used


@ -1,13 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@ -1,234 +0,0 @@
#!/usr/bin/python3
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.append('lib')
from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import action_get, action_fail, action_set
from gluster.cli import GlusterCmdException
from charm.gluster import volume
def rebalance_volume():
"""
Start a rebalance volume operation
"""
vol = action_get("volume")
    if not vol:
        action_fail("volume not specified")
        return
try:
volume.volume_rebalance(vol)
except GlusterCmdException as e:
action_fail("volume rebalance failed with error: {}".format(e))
def enable_bitrot_scan():
"""
Enable bitrot scan
"""
vol = action_get("volume")
    if not vol:
        action_fail("volume not specified")
        return
try:
volume.volume_enable_bitrot(vol)
except GlusterCmdException as e:
action_fail("enable bitrot failed with error: {}".format(e))
def disable_bitrot_scan():
"""
Disable bitrot scan
"""
vol = action_get("volume")
    if not vol:
        action_fail("volume not specified")
        return
try:
volume.volume_disable_bitrot(vol)
except GlusterCmdException as e:
action_fail("enable disable failed with error: {}".format(e))
def pause_bitrot_scan():
"""
Pause bitrot scan
"""
vol = action_get("volume")
option = volume.BitrotOption.Scrub(volume.ScrubControl.Pause)
try:
volume.volume_set_bitrot_option(vol, option)
except GlusterCmdException as e:
action_fail("pause bitrot scan failed with error: {}".format(e))
def resume_bitrot_scan():
"""
Resume bitrot scan
"""
vol = action_get("volume")
option = volume.BitrotOption.Scrub(volume.ScrubControl.Resume)
try:
volume.volume_set_bitrot_option(vol, option)
except GlusterCmdException as e:
action_fail("resume bitrot scan failed with error: {}".format(e))
def set_bitrot_scan_frequency():
"""
Set the bitrot scan frequency
"""
vol = action_get("volume")
frequency = action_get("frequency")
option = volume.ScrubSchedule.from_str(frequency)
try:
volume.volume_set_bitrot_option(vol,
volume.BitrotOption.ScrubFrequency(
option))
except GlusterCmdException as e:
action_fail("set bitrot scan frequency failed with error: {}".format(
e))
def set_bitrot_throttle():
"""
Set how aggressive bitrot scanning should be
"""
vol = action_get("volume")
throttle = action_get("throttle")
option = volume.ScrubAggression.from_str(throttle)
try:
volume.volume_set_bitrot_option(vol, volume.BitrotOption.ScrubThrottle(
option))
except GlusterCmdException as e:
action_fail(
"set bitrot throttle failed with error: {}".format(e))
def enable_volume_quota():
"""
Enable quotas on the volume
"""
# Gather our action parameters
vol = action_get("volume")
usage_limit = action_get("usage-limit")
parsed_usage_limit = int(usage_limit)
path = action_get("path")
# Turn quotas on if not already enabled
quotas_enabled = volume.volume_quotas_enabled(vol)
    if quotas_enabled.is_err():
        action_fail("Enable quota failed: {}".format(quotas_enabled.value))
        return
if not quotas_enabled.value:
try:
volume.volume_enable_quotas(vol)
except GlusterCmdException as e:
action_fail("Enable quotas failed: {}".format(e))
try:
volume.volume_add_quota(vol, path, parsed_usage_limit)
except GlusterCmdException as e:
action_fail("Add quota failed: {}".format(e))
def disable_volume_quota():
"""
Disable quotas on the volume
"""
vol = action_get("volume")
path = action_get("path")
quotas_enabled = volume.volume_quotas_enabled(vol)
    if quotas_enabled.is_err():
        action_fail("Disable quota failed: {}".format(quotas_enabled.value))
        return
if quotas_enabled.value:
try:
volume.volume_remove_quota(vol, path)
except GlusterCmdException as e:
action_fail("remove quota failed with error: {}".format(e))
def list_volume_quotas():
"""
List quotas on the volume
"""
vol = action_get("volume")
quotas_enabled = volume.volume_quotas_enabled(vol)
    if quotas_enabled.is_err():
        action_fail("List quota failed: {}".format(quotas_enabled.value))
        return
if quotas_enabled.value:
quotas = volume.quota_list(vol)
if quotas.is_err():
action_fail(
"Failed to get volume quotas: {}".format(quotas.value))
quota_strings = []
for quota in quotas.value:
quota_string = "path:{} limit:{} used:{}".format(
quota.path,
quota.hard_limit,
quota.used)
quota_strings.append(quota_string)
action_set({"quotas": "\n".join(quota_strings)})
def set_volume_options():
"""
Set one or more options on the volume at once
"""
vol = action_get("volume")
# Gather all of the action parameters up at once. We don't know what
# the user wants to change.
options = action_get()
settings = []
for key in options:
if key != "volume":
settings.append(
volume.GlusterOption.from_str(key, options[key]))
volume.volume_set_options(vol, settings)
# Actions to function mapping, to allow for illegal python action names that
# can map to a python function.
ACTIONS = {
"create-volume-quota": enable_volume_quota,
"delete-volume-quota": disable_volume_quota,
"disable-bitrot-scan": disable_bitrot_scan,
"enable-bitrot-scan": enable_bitrot_scan,
"list-volume-quotas": list_volume_quotas,
"pause-bitrot-scan": pause_bitrot_scan,
"rebalance-volume": rebalance_volume,
"resume-bitrot-scan": resume_bitrot_scan,
"set-bitrot-scan-frequency": set_bitrot_scan_frequency,
"set-bitrot-throttle": set_bitrot_throttle,
"set-volume-options": set_volume_options,
}
def main(args):
action_name = os.path.basename(args[0])
try:
action = ACTIONS[action_name]
except KeyError:
return "Action %s undefined" % action_name
else:
try:
action()
except Exception as e:
hookenv.action_fail(str(e))
if __name__ == "__main__":
sys.exit(main(sys.argv))


@ -1,18 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def brick_detached():
# TODO: Do nothing for now
return None


@ -1,31 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from charmhelpers.core.hookenv import ERROR, log, relation_set, unit_public_ip
from charm.gluster.volume import volume_list
def fuse_relation_joined():
    """
    Fuse clients only need one ip address; they can discover the rest.
    """
public_addr = unit_public_ip()
volumes = volume_list()
if volumes.is_err():
log("volume list is empty. Unable to complete fuse relation", ERROR)
return
data = {"gluster-public-address": public_addr,
"volumes": " ".join(volumes.value)}
relation_set(relation_settings=data)


@ -1,685 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import subprocess
from typing import Optional, Dict
from charm.gluster.lib import (check_for_new_devices, run_command, Status,
get_brick_list, wait_for_peers)
# from .ctdb import VirtualIp
# from .nfs_relation_joined import nfs_relation_joined
from charm.gluster.peer import peer_probe, Peer
from charm.gluster.volume import (Transport, volume_create_arbiter,
get_local_bricks, Volume,
GlusterOption, SplitBrainPolicy, Toggle,
volume_create_distributed,
volume_create_striped,
volume_create_replicated,
volume_create_striped_replicated,
volume_add_brick, volume_create_erasure,
VolumeType,
volume_enable_bitrot, volume_list,
volume_set_options,
volume_remove_brick, volume_info)
from charmhelpers.contrib.storage.linux.ceph import filesystem_mounted
from charmhelpers.core import hookenv, sysctl
from charmhelpers.core.hookenv import (application_version_set, relation_id)
from charmhelpers.core.hookenv import (config, ERROR, INFO, is_leader,
log, status_set, DEBUG, unit_public_ip)
from charmhelpers.core.host import add_to_updatedb_prunepath
from charmhelpers.fetch import apt_update, add_source, apt_install
from charms.reactive import when, when_not, set_state, remove_state
from gluster.cli import GlusterCmdException
from gluster.cli.parsers import GlusterCmdOutputParseError
from gluster.cli.volume import start
from result import Err, Ok, Result
# from .brick_detached import brick_detached
# from .fuse_relation_joined import fuse_relation_joined
# from .metrics import collect_metrics
# from .server_removed import server_removed
from .upgrade import check_for_upgrade, get_glusterfs_version
"""
#TODO: Deferred
def get_cluster_networks() -> Result: # -> Result<Vec<ctdb.VirtualIp>, str>:
# Return all the virtual ip networks that will be used
cluster_networks = []#: Vec<ctdb.VirtualIp> = Vec.new()
config_value = config("virtual_ip_addresses")
if config_value is None:
return Ok(cluster_networks)
virtual_ips = config_value.split(" ")
for vip in virtual_ips:
if len(vip) == 0:
continue
network = ctdb.ipnetwork_from_str(vip)
interface = ctdb.get_interface_for_address(network)
# .ok_or("Failed to find interface for network {}".format(network))
cluster_networks.append(VirtualIp(cidr=network,interface=interface))
return Ok(cluster_networks)
"""
@when_not("installed")
def install():
add_source(config('source'), config('key'))
apt_update(fatal=True)
apt_install(
packages=["ctdb", "nfs-common", "glusterfs-server", "glusterfs-common",
"glusterfs-client"], fatal=True)
set_state("installed")
# @when_file_changed('config.yaml')
def config_changed() -> None:
"""
:return:
"""
r = check_for_new_devices()
if r.is_err():
log("Checking for new devices failed with error: {".format(r.value),
ERROR)
r = check_for_sysctl()
if r.is_err():
log("Setting sysctl's failed with error: {".format(r.value), ERROR)
# If fails we fail the hook
check_for_upgrade()
return
@when('server.bricks.available')
@when_not("volume.created")
def initialize_volume(peer) -> None:
"""
Possibly create a new volume
:param peer:
"""
"""
get_peer_info:
{
'glusterfs-0': {
'address': '192.168.10.1',
'bricks': ['/mnt/vdb1', '/mnt/vdb2']
},
'glusterfs-1': {
'address': '192.168.10.2',
'bricks': ['/mnt/vdb1', '/mnt/vdb2', '/mnt/vdb3']
},
}
"""
if is_leader():
log("I am the leader: {}".format(relation_id()))
log("peer map: {}".format(peer.get_peer_info()))
vol_name = config("volume_name")
try:
vol_info = volume_info(vol_name)
if not vol_info:
log("Creating volume {}".format(vol_name), INFO)
status_set(workload_state="maintenance",
message="Creating volume {}".format(vol_name))
create_result = create_gluster_volume(vol_name,
peer.get_peer_info())
if create_result.is_ok():
if create_result.value == Status.Created:
set_state("volume.created")
else:
log("Volume creation failed with error: {}".format(
create_result.value))
except GlusterCmdException as e:
log("Volume info command failed: {}".format(e))
return
# setup_ctdb()
# setup_samba(volume_name)
return
else:
log("Deferring to the leader for volume modification")
def create_gluster_volume(volume_name: str,
peers: Dict[str, Dict]) -> Result:
"""
Create a new gluster volume with a name and a list of peers
:param volume_name: str. Name of the volume to create
:param peers: List[Peer]. List of the peers to use in this volume
:return:
"""
create_vol = create_volume(peers, None)
if create_vol.is_ok():
if create_vol.value == Status.Created:
log("Create volume succeeded.", INFO)
status_set(workload_state="maintenance",
message="Create volume succeeded")
start_gluster_volume(volume_name)
# Poke the other peers to update their status
set_state("volume.started")
return Ok(Status.Created)
elif create_vol.value == Status.WaitForMorePeers:
log("Waiting for all peers to enter the Peer in Cluster status")
status_set(workload_state="maintenance",
message="Waiting for all peers to enter "
"the \"Peer in Cluster status\"")
return Ok(Status.WaitForMorePeers)
else:
# Status is failed
# What should I return here
return Ok(())
else:
log("Create volume failed with output: {}".format(create_vol.value),
ERROR)
status_set(workload_state="blocked",
message="Create volume failed. Please check "
"juju debug-log.")
return Err(create_vol.value)
def create_volume(peers: Dict[str, Dict],
volume_info: Optional[Volume]) -> Result:
"""
Create a new volume if enough peers are available
:param peers:
:param volume_info:
:return:
"""
cluster_type_config = config("cluster_type")
cluster_type = VolumeType(cluster_type_config.lower())
volume_name = config("volume_name")
replicas = int(config("replication_level"))
extra = int(config("extra_level"))
# Make sure all peers are in the cluster
# spin lock
wait_for_peers()
# Build the brick list
log("get_brick_list: {}".format(peers))
brick_list = get_brick_list(peers, volume_info)
if brick_list.is_err():
if brick_list.value is Status.WaitForMorePeers:
log("Waiting for more peers", INFO)
status_set(workload_state="maintenance",
message="Waiting for more peers")
return Ok(Status.WaitForMorePeers)
elif brick_list.value is Status.InvalidConfig:
return Err(brick_list.value)
else:
# Some other error
return Err("Unknown error in create volume: {}".format(
brick_list.value))
log("Got brick list: {}".format(brick_list.value))
log("Creating volume of type {} with brick list {}".format(
cluster_type, [str(b) for b in brick_list.value]), INFO)
if not brick_list.value:
return Err("No block devices detected")
result = None
if cluster_type is VolumeType.Distribute:
result = volume_create_distributed(
vol=volume_name,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.Stripe:
result = volume_create_striped(
vol=volume_name,
stripe_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.Replicate:
result = volume_create_replicated(
vol=volume_name,
replica_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.Arbiter:
result = volume_create_arbiter(volume_name,
replica_count=replicas,
arbiter_count=extra,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.StripedAndReplicate:
result = volume_create_striped_replicated(volume_name,
stripe_count=extra,
replica_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.Disperse:
result = volume_create_erasure(vol=volume_name,
disperse_count=replicas,
redundancy_count=extra,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.DistributedAndStripe:
result = volume_create_striped(vol=volume_name,
stripe_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value, force=True)
elif cluster_type is VolumeType.DistributedAndReplicate:
result = volume_create_replicated(vol=volume_name,
replica_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value, force=True)
elif cluster_type is VolumeType.DistributedAndStripedAndReplicate:
result = volume_create_striped_replicated(vol=volume_name,
stripe_count=extra,
replica_count=replicas,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
elif cluster_type is VolumeType.DistributedAndDisperse:
result = volume_create_erasure(
vol=volume_name,
disperse_count=extra,
redundancy_count=None,
transport=Transport.Tcp,
bricks=brick_list.value,
force=True)
# Check our result
if result.is_err():
log("Failed to create volume: {}".format(result.value), ERROR)
return Err(Status.FailedToCreate)
# Everything is good
return Ok(Status.Created)
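# Worked example (illustrative values): with cluster_type "replicate",
# replication_level 3 and three peers each contributing one brick, the
# brick list has three entries and volume_create_replicated() above is
# called with replica_count=3, placing one copy of the data per peer.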
@when('server.bricks.available')
@when('volume.created')
def check_for_expansion(peer) -> None:
"""
Possibly expand an existing volume
:param peer:
"""
if is_leader():
log("I am the leader: {}".format(relation_id()))
vol_name = config("volume_name")
try:
vol_info = volume_info(vol_name)
if vol_info:
log("Expanding volume {}".format(vol_name), INFO)
status_set(workload_state="maintenance",
message="Expanding volume {}".format(vol_name))
expand_vol = expand_volume(peer.get_peer_info(), vol_info[0])
if expand_vol.is_ok():
if expand_vol.value == Status.Expanded:
log("Expand volume succeeded.", INFO)
status_set(workload_state="active",
message="Expand volume succeeded.")
# Poke the other peers to update their status
remove_state("volume.needs.expansion")
return
else:
# Ensure the cluster is mounted
# setup_ctdb()
# setup_samba(volume_name)
return
log("Expand volume failed with output: {}".format(
expand_vol.value), ERROR)
status_set(workload_state="blocked",
message="Expand volume failed. Please check juju "
"debug-log.")
return
except GlusterCmdException as e:
log("Volume info command failed: {}".format(e))
return
def expand_volume(peers: Dict[str, Dict],
vol_info: Optional[Volume]) -> Result:
"""
Expands the volume by X servers+bricks
Adds bricks and then runs a rebalance
:param peers:
:param vol_info:
:return:
"""
volume_name = config("volume_name")
# Are there new peers
log("Checking for new peers to expand the volume named {}".format(
volume_name))
# Build the brick list
brick_list = get_brick_list(peers, vol_info)
if brick_list.is_ok():
if brick_list.value:
log("Expanding volume with brick list: {}".format(
[str(b) for b in brick_list.value]), INFO)
try:
volume_add_brick(volume_name, brick_list.value, True)
return Ok(Status.Expanded)
except GlusterCmdException as e:
return Err("Adding brick to volume failed: {}".format(e))
return Ok(Status.InvalidConfig)
else:
if brick_list.value is Status.WaitForMorePeers:
log("Waiting for more peers", INFO)
return Ok(Status.WaitForMorePeers)
elif brick_list.value is Status.InvalidConfig:
return Err(brick_list.value)
else:
# Some other error
return Err(
"Unknown error in expand volume: {}".format(brick_list.value))
"""
# TODO: Deferred
# Add all the peers in the gluster cluster to the ctdb cluster
def setup_ctdb() -> Result:
if config("virtual_ip_addresses") is None:
# virtual_ip_addresses isn't set. Skip setting ctdb up
return Ok(())
log("setting up ctdb")
peers = peer_list()
log("Got ctdb peer list: {}".format(peers))
cluster_addresses = [ip_address(peer.hostname) for peer in peers]
log("writing /etc/default/ctdb")
with open("/etc/default/ctdb", "w") as ctdb_conf:
ctdb.render_ctdb_configuration(ctdb_conf)
cluster_networks = get_cluster_networks()
log("writing /etc/ctdb/public_addresses")
with open("/etc/ctdb/public_addresses", "w") as public_addresses:
ctdb.render_ctdb_public_addresses(public_addresses,
cluster_networks.value)
log("writing /etc/ctdb/nodes")
with open("/etc/ctdb/nodes", "w") as cluster_nodes:
ctdb.render_ctdb_cluster_nodes(cluster_nodes, cluster_addresses)
# Start the ctdb service
log("Starting ctdb")
service_start("ctdb")
return Ok(())
"""
def shrink_volume(peer: Peer, vol_info: Optional[Volume]):
"""
Shrink a volume. Bricks must be removed one replica set at a time, so
this is a bit tricky to get right.
:param peer: Peer to remove
:param vol_info: Optional[Volume]
"""
volume_name = config("volume_name")
log("Shrinking volume named {}".format(volume_name), INFO)
peers = [peer]
# Build the brick list
brick_list = get_brick_list(peers, vol_info)
if brick_list.is_ok():
log("Shrinking volume with brick list: {}".format(
[str(b) for b in brick_list.value]), INFO)
return volume_remove_brick(volume_name, brick_list.value, True)
else:
if brick_list.value == Status.WaitForMorePeers:
log("Waiting for more peers", INFO)
return Ok(0)
elif brick_list.value == Status.InvalidConfig:
return Err(brick_list.value)
else:
# Some other error
return Err("Unknown error in shrink volume: {}".format(
brick_list.value))
@when('volume.started')
@when_not("volume.configured")
def set_volume_options() -> None:
"""
Set any options needed on the volume.
:return:
"""
if is_leader():
status_set(workload_state="maintenance",
message="Setting volume options")
volume_name = config('volume_name')
settings = [
# Starting in gluster 3.8 NFS is disabled in favor of ganesha.
# I'd like to stick with the legacy version a bit longer.
GlusterOption(option=GlusterOption.NfsDisable, value=Toggle.Off),
GlusterOption(option=GlusterOption.DiagnosticsLatencyMeasurement,
value=Toggle.On),
GlusterOption(option=GlusterOption.DiagnosticsCountFopHits,
value=Toggle.On),
# Dump FOP stats every 5 seconds.
# NOTE: On slow drives this can severely impact performance
GlusterOption(option=GlusterOption.DiagnosticsFopSampleInterval,
value=5),
GlusterOption(option=GlusterOption.DiagnosticsStatsDumpInterval,
value=30),
# 1HR DNS timeout
GlusterOption(option=GlusterOption.DiagnosticsStatsDnscacheTtlSec,
value=3600),
# Set parallel-readdir on. This has a very nice performance
# benefit as the number of bricks/directories grows
GlusterOption(option=GlusterOption.PerformanceParallelReadDir,
value=Toggle.On),
GlusterOption(option=GlusterOption.PerformanceReadDirAhead,
value=Toggle.On),
# Start with 20MB and go from there
GlusterOption(
option=GlusterOption.PerformanceReadDirAheadCacheLimit,
value=1024 * 1024 * 20)]
# Set the split brain policy if requested
splitbrain_policy = config("splitbrain_policy")
if splitbrain_policy:
# config.yaml has a default here. Should always have a value
policy = SplitBrainPolicy(splitbrain_policy)
if policy:
log("Setting split brain policy to: {}".format(
splitbrain_policy), DEBUG)
settings.append(
GlusterOption(option=GlusterOption.FavoriteChildPolicy,
value=policy))
# Set all the volume options
option_set_result = volume_set_options(volume_name, settings)
# The config option has a default, so this should be safe
bitrot_config = bool(config("bitrot_detection"))
if bitrot_config:
log("Enabling bitrot detection", DEBUG)
status_set(workload_state="maintenance",
message="Enabling bitrot detection.")
try:
volume_enable_bitrot(volume_name)
except GlusterCmdException as e:
log("Enabling bitrot failed with error: {}".format(e), ERROR)
# Tell reactive we're all set here
status_set(workload_state="active",
message="")
if option_set_result.is_err():
log("Setting volume options failed with error(s): {}".format(
option_set_result.value), ERROR)
set_state("volume.configured")
# Display the status of the volume on the juju cli
update_status()
def start_gluster_volume(volume_name: str) -> None:
"""
Startup the gluster volume
:param volume_name: str. volume name to start
:return: None
"""
try:
start(volume_name, False)
log("Starting volume succeeded.", INFO)
status_set(workload_state="active",
message="Starting volume succeeded.")
except GlusterCmdException as e:
log("Start volume failed with output: {}".format(e), ERROR)
status_set(workload_state="blocked",
message="Start volume failed. Please check juju debug-log.")
def check_for_sysctl() -> Result:
"""
Check to see if there's sysctl changes that need to be applied
:return: Result
"""
config = hookenv.config()
if config.changed("sysctl"):
config_path = os.path.join(os.sep, "etc", "sysctl.d",
"50-gluster-charm.conf")
sysctl_dict = config["sysctl"]
if sysctl_dict is not None:
sysctl.create(sysctl_dict, config_path)
return Ok(())
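# Example (hypothetical config value): a "sysctl" setting such as
# '{ vm.swappiness: 10, vm.vfs_cache_pressure: 50 }' would be written
# out by sysctl.create() to /etc/sysctl.d/50-gluster-charm.conf and
# applied on the unit.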
@when('server.connected')
def server_connected(peer) -> None:
"""
The server.connected state is set when one or more peer units
have joined.
:return: None
"""
update_status()
bricks = check_for_new_devices()
if bricks.is_ok():
log('Reporting my bricks {} to the leader'.format(bricks.value))
peer.set_bricks(bricks=bricks.value)
if not is_leader():
log('Reporting my public address {} to the leader'.format(
unit_public_ip()))
peer.set_address(address_type='public', address=unit_public_ip())
return
# I am the leader
log('Leader probing peers')
probed_units = []
try:
p = hookenv.leader_get('probed-units')
if p:
probed_units = json.loads(p)
except json.decoder.JSONDecodeError as e:
log("json decoder failed for {}: {}".format(e.doc, e.msg))
log("probed_units: {}".format(probed_units))
peer_info = peer.get_peer_info()
for unit in peer_info:
if unit in probed_units:
continue
address = peer_info[unit]['address']
log('probing host {} at {}'.format(unit, address))
status_set('maintenance', 'Probing peer {}'.format(unit))
try:
peer_probe(address)
probed_units.append(unit)
except (GlusterCmdException, GlusterCmdOutputParseError):
log('Error probing host {}: {}'.format(unit, address), ERROR)
continue
log('successfully probed {}: {}'.format(unit, address), DEBUG)
settings = {'probed-units': json.dumps(probed_units)}
hookenv.leader_set(settings)
status_set('maintenance', '')
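# Illustrative leader-data round trip: after successfully probing
# glusterfs-0 and glusterfs-1 the leader stores probed-units as the
# JSON string '["glusterfs-0", "glusterfs-1"]', so a later run of this
# hook skips units that were already probed.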
"""
def resolve_first_vip_to_dns() -> Result:
cluster_networks = get_cluster_networks()
if cluster_networks.is_ok():
if cluster_networks.value:
first = cluster_networks.value[0]
# Resolve the v4/v6 address back to a dns string
return Ok(address_name(first.cidr.ip))
# No vips were set
return Err("virtual_ip_addresses has no addresses set")
"""
@when('installed')
@when_not('glusterfs.mounted')
def mount_cluster() -> None:
"""
Mount the cluster at /mnt/glusterfs using fuse
:return: Result. Ok or Err depending on the outcome of mount
"""
log("Checking if cluster mount needed")
volume_name = config('volume_name')
volumes = volume_list()
if not os.path.exists("/mnt/glusterfs"):
os.makedirs("/mnt/glusterfs")
if not filesystem_mounted("/mnt/glusterfs"):
if volume_name in volumes:
arg_list = ["-t", "glusterfs", "localhost:/{}".format(volume_name),
"/mnt/glusterfs"]
try:
run_command(command="mount", arg_list=arg_list,
script_mode=False)
log("Removing /mnt/glusterfs from updatedb", INFO)
add_to_updatedb_prunepath("/mnt/glusterfs")
set_state("glusterfs.mounted")
update_status()
return
except subprocess.CalledProcessError as e:
log("mount failed with error: "
"stdout: {} stderr: {}".format(e.stdout, e.stderr))
return
def update_status() -> None:
"""
Update the juju status information
:return: None
"""
try:
version = get_glusterfs_version()
application_version_set("{}".format(version))
except KeyError:
log("glusterfs-server not installed yet. Cannot discover version",
DEBUG)
return
volume_name = config("volume_name")
local_bricks = get_local_bricks(volume_name)
if local_bricks.is_ok():
if local_bricks.value:
status_set(workload_state="active",
message="Unit is ready ({} bricks)".format(
len(local_bricks.value)))
else:
status_set(
workload_state="blocked",
message='No block devices detected using '
'current configuration')
return
else:
status_set(workload_state="blocked",
message="No bricks found")
return

View File

@ -1,30 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
def nfs_relation_joined() -> Result:
config_value = config("virtual_ip_addresses")
volumes = volume_list()
if volumes:
relation_set("volumes", " ".join(volumes))
if config_value is None:
# virtual_ip_addresses isn't set. Handing back my public address
relation_set("gluster-public-address", unit_public_ip())
else:
# virtual_ip_addresses is set. Handing back the DNS resolved address
dns_name = resolve_first_vip_to_dns()
relation_set("gluster-public-address", dns_name.value)
"""

View File

@ -1,22 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from charmhelpers.core.hookenv import INFO, log, unit_private_ip
def server_removed():
"""
Remove a server from the cluster
"""
private_address = unit_private_ip()
log("Removing server: {}".format(private_address), INFO)

View File

@ -1,307 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import random
import time
import uuid
from typing import Optional
import apt
import apt_pkg
from charm.gluster import peer, volume
from charm.gluster.apt import get_candidate_package_version
from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import config, log, status_set, ERROR
from charmhelpers.core.host import service_start, service_stop
from charmhelpers.fetch import apt_install, add_source, apt_update
from gluster.cli.parsers import GlusterCmdOutputParseError
from result import Err, Ok, Result
def get_glusterfs_version() -> str:
"""
Get the current glusterfs version that is installed
:return: str. Raises KeyError if glusterfs-server is not installed
"""
cache = apt.Cache()
return cache['glusterfs-server'].installed.version
def get_local_uuid() -> Result:
"""
/var/lib/glusterd/glusterd.info looks like this:
UUID=30602134-698f-4e53-8503-163e175aea85
operating-version=30800
:return: Result with Ok(uuid.UUID) or Err(str).
"""
with open("/var/lib/glusterd/glusterd.info", "r") as f:
lines = f.readlines()
for line in lines:
if "UUID" in line:
parts = line.split("=")
gluster_uuid = uuid.UUID(parts[1].strip())
return Ok(gluster_uuid)
return Err("Unable to find UUID")
def roll_cluster(new_version: str) -> Result:
"""
Perform a rolling upgrade across the cluster.
This is tricky to get right, so here's what we're going to do.
There's 2 possible cases: either I'm first in line or I'm not.
If I'm not first in line I'll wait a random time between 5-30 seconds
and test to see if the previous peer is upgraded yet.
Edge case: if the previous node dies mid-upgrade we wait 10 minutes,
consider it dead and move on (see wait_on_previous_node).
:param new_version: str. new version to upgrade to
:return: Result with Ok or Err.
"""
log("roll_cluster called with {}".format(new_version))
volume_name = config("volume_name")
my_uuid = get_local_uuid()
if my_uuid.is_err():
return Err(my_uuid.value)
# volume_name always has a default
try:
volume_bricks = volume.volume_info(volume_name)
peer_list = volume_bricks.value.bricks.peers
log("peer_list: {}".format(peer_list))
# Sort by UUID
peer_list.sort()
# We find our position by UUID
position = [i for i, x in enumerate(peer_list) if x == my_uuid.value]
if len(position) == 0:
return Err("Unable to determine upgrade position")
log("upgrade position: {}".format(position))
if position[0] == 0:
# I'm first! Roll
# First set a key to inform others I'm about to roll
lock_and_roll(my_uuid.value, new_version)
else:
# Check if the previous node has finished
status_set(workload_state="waiting",
message="Waiting on {} to finish upgrading".format(
peer_list[position[0] - 1]))
wait_on_previous_node(peer_list[position[0] - 1], new_version)
lock_and_roll(my_uuid.value, new_version)
except GlusterCmdOutputParseError as e:
return Err(e)
return Ok(())
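# Worked example (illustrative UUIDs): if the sorted peer list is
# [aaa..., bbb..., ccc...], the unit owning aaa... rolls immediately
# via lock_and_roll(); bbb... blocks in wait_on_previous_node() until
# the "aaa..._<version>_done" key appears, then rolls, and so on down
# the sorted list.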
def upgrade_peer(new_version: str) -> Result:
"""
Upgrade a specific peer
:param new_version: str. new version to upgrade to
:return: Result with Ok or Err.
"""
from .main import update_status
current_version = get_glusterfs_version()
status_set(workload_state="maintenance", message="Upgrading peer")
log("Current ceph version is {}".format(current_version))
log("Upgrading to: {}".format(new_version))
service_stop("glusterfs-server")
apt_install(["glusterfs-server", "glusterfs-common", "glusterfs-client"])
service_start("glusterfs-server")
update_status()
return Ok(())
def lock_and_roll(my_uuid: uuid.UUID, version: str) -> Result:
"""
Lock and prevent others from upgrading and upgrade this particular peer
:param my_uuid: uuid.UUID of the peer to upgrade
:param version: str. Version to upgrade to
:return: Result with Ok or Err
"""
start_timestamp = time.time()
log("gluster_key_set {}_{}_start {}".format(my_uuid, version,
start_timestamp))
gluster_key_set("{}_{}_start".format(my_uuid, version), start_timestamp)
log("Rolling")
# This should be quick
upgrade_peer(version)
log("Done")
stop_timestamp = time.time()
# Set a key to inform others I am finished
log("gluster_key_set {}_{}_done {}".format(my_uuid, version,
stop_timestamp))
gluster_key_set("{}_{}_done".format(my_uuid, version), stop_timestamp)
return Ok(())
def gluster_key_get(key: str) -> Optional[float]:
"""
Get an upgrade key from the gluster local mount
:param key: str. Name of key to get
:return: Optional[float] with a timestamp
"""
upgrade_key = os.path.join(os.sep, "mnt", "glusterfs", ".upgrade", key)
if not os.path.exists(upgrade_key):
return None
try:
with open(upgrade_key, "r") as f:
s = f.read()
log("gluster_key_get read {} bytes".format(len(s)))
try:
decoded = json.loads(s)
return float(decoded)
except ValueError:
log("Failed to decode json file in "
"gluster_key_get(): {}".format(s))
return None
except IOError as e:
log("gluster_key_get failed to read file /mnt/glusterfs/.upgraded/.{} "
"Error: {}".format(key, e.strerror))
return None
def gluster_key_set(key: str, timestamp: float) -> Result:
"""
Set a key and a timestamp on the local glusterfs mount
:param key: str. Name of the key
:param timestamp: float. Timestamp
:return: Result with Ok or Err
"""
p = os.path.join(os.sep, "mnt", "glusterfs", ".upgrade")
if not os.path.exists(p):
os.makedirs(p)
try:
with open(os.path.join(p, key), "w") as file:
encoded = json.dumps(timestamp)
file.write(encoded)
return Ok(())
except IOError as e:
return Err(e.strerror)
def gluster_key_exists(key: str) -> bool:
location = "/mnt/glusterfs/.upgrade/{}".format(key)
return os.path.exists(location)
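# These coordination keys are plain JSON timestamps on the shared
# gluster mount (e.g. /mnt/glusterfs/.upgrade/<uuid>_<version>_start),
# so every peer can observe every other peer's upgrade progress
# without any extra messaging.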
def wait_on_previous_node(previous_node: peer.Peer, version: str) -> Result:
"""
Wait on a previous node to finish upgrading
:param previous_node: peer.Peer to wait on
:param version: str. Version we're upgrading to
:return: Result with Ok or Err
"""
log("Previous node is: {}".format(previous_node))
previous_node_finished = gluster_key_exists(
"{}_{}_done".format(previous_node.uuid, version))
while not previous_node_finished:
log("{} is not finished. Waiting".format(previous_node.uuid))
# Has this node been trying to upgrade for longer than
# 10 minutes?
# If so then move on and consider that node dead.
# NOTE: This assumes the cluster's clocks are somewhat accurate.
# If the host's clock is really far off it may cause it to skip
# the previous node even though it shouldn't.
current_timestamp = time.time()
previous_node_start_time = gluster_key_get("{}_{}_start".format(
previous_node.uuid, version))
if previous_node_start_time is not None:
if float(current_timestamp - 600) > previous_node_start_time:
# Previous node is probably dead. Let's move on
log("Waited 10 mins on node {}. "
"current time: {} > "
"previous node start time: {} "
"Moving on".format(previous_node.uuid,
(current_timestamp - 600),
previous_node_start_time))
return Ok(())
else:
# I have to wait. Sleep a random amount of time and then
# check if I can lock, upgrade and roll.
wait_time = random.randrange(5, 30)
log("waiting for {} seconds".format(wait_time))
time.sleep(wait_time)
previous_node_finished = gluster_key_exists(
"{}_{}_done".format(previous_node.uuid, version))
else:
# TODO: There is no previous start time. What should we do?
return Ok(())
def check_for_upgrade() -> Result:
"""
If the config has changed this will initiate a rolling upgrade
:return:
"""
config = hookenv.config()
if not config.changed("source"):
# No upgrade requested
log("No upgrade requested")
return Ok(())
log("Getting current_version")
current_version = get_glusterfs_version()
log("Adding new source line")
source = config["source"]
if not source:
# No upgrade requested
log("Source not set. Cannot continue with upgrade")
return Ok(())
add_source(source)
log("Calling apt update")
apt_update()
log("Getting proposed_version")
apt_pkg.init_system()
proposed_version = get_candidate_package_version("glusterfs-server")
if proposed_version.is_err():
return Err(proposed_version.value)
version_compare = apt_pkg.version_compare(a=proposed_version.value,
b=current_version)
# Using semantic versioning: if the new version is greater
# than the current one we allow the upgrade
if version_compare > 0:
log("current_version: {}".format(current_version))
log("new_version: {}".format(proposed_version.value))
log("{} to {} is a valid upgrade path. Proceeding.".format(
current_version, proposed_version.value))
return roll_cluster(proposed_version.value)
else:
# Log a helpful error message
log("Invalid upgrade path from {} to {}. The new version needs to be \
greater than the old version".format(
current_version, proposed_version.value), ERROR)
return Ok(())
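# Example (illustrative version strings): apt_pkg.version_compare(
# "3.12.2-0ubuntu1", "3.10.1-0ubuntu1") returns a value > 0, so a
# source change offering 3.12.2 over an installed 3.10.1 proceeds into
# roll_cluster(), while a downgrade returns < 0 and is rejected with
# the error above.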

View File

@ -1,32 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
coverage>=3.6
mock>=1.2
flake8>=2.2.4,<=2.4.1
os-testr>=0.4.1
charm-tools>=2.0.0
requests==2.6.0
# amulet deployment helpers
bzr+lp:charm-helpers#egg=charmhelpers
# BEGIN: Amulet OpenStack Charm Helper Requirements
# Liberty client lower constraints
amulet>=1.14.3,<2.0
bundletester>=0.6.1,<1.0
aodhclient>=0.1.0
python-barbicanclient>=4.0.1
python-ceilometerclient>=1.5.0
python-cinderclient>=1.4.0
python-designateclient>=1.5
python-glanceclient>=1.1.0
python-heatclient>=0.8.0
python-keystoneclient>=1.7.1
python-neutronclient>=3.1.0
python-novaclient>=2.30.1
python-openstackclient>=1.7.0
python-swiftclient>=2.6.0
pika>=0.10.0,<1.0
distro-info
# END: Amulet OpenStack Charm Helper Requirements
# NOTE: workaround for 14.04 pip/tox
pytz

View File

@ -1,10 +0,0 @@
# Overview
This directory provides Amulet tests to verify basic deployment functionality
from the perspective of this charm, its requirements and its features, as
exercised in a subset of the full OpenStack deployment test bundle topology.
For full details on functional testing of OpenStack charms please refer to
the [functional testing](http://docs.openstack.org/developer/charm-guide/testing.html#functional-testing)
section of the OpenStack Charm Guide.

View File

@ -1,132 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import amulet
from charmhelpers.contrib.openstack.amulet.deployment import (
OpenStackAmuletDeployment
)
from charmhelpers.contrib.openstack.amulet.utils import (
OpenStackAmuletUtils,
DEBUG,
)
# Use DEBUG to turn on debug logging
u = OpenStackAmuletUtils(DEBUG)
class GlusterFSBasicDeployment(OpenStackAmuletDeployment):
"""Amulet tests on a basic glusterfs deployment."""
def __init__(self, series, openstack=None, source=None, stable=False):
"""Deploy the entire test environment."""
super(GlusterFSBasicDeployment, self).__init__(series, openstack,
source, stable)
self._add_services()
self._configure_services()
self._deploy()
u.log.info('Waiting on extended status checks...')
self._auto_wait_for_status(exclude_services=[])
self.d.sentry.wait()
self._initialize_tests()
def _add_services(self):
"""Add services
Add the services that we're testing, where glusterfs is local,
and the rest of the services are from lp branches that are
compatible with the local charm (e.g. stable or next).
"""
super(GlusterFSBasicDeployment, self)._add_services(
this_service={'name': 'glusterfs', 'units': 3},
no_origin=['glusterfs'], other_services=[])
def _configure_services(self):
"""Configure all of the services."""
configs = {
'glusterfs': {
'volume_name': 'test',
'brick_devices': '/dev/vdb',
'ephemeral_unmount': '/mnt',
},
}
super(GlusterFSBasicDeployment, self)._configure_services(configs)
def _initialize_tests(self):
"""Perform final initialization before tests get run."""
# Access the sentries for inspecting service units
self.gluster0_sentry = self.d.sentry['glusterfs'][0]
self.gluster1_sentry = self.d.sentry['glusterfs'][1]
self.gluster2_sentry = self.d.sentry['glusterfs'][2]
u.log.debug('openstack release val: {}'.format(
self._get_openstack_release()))
u.log.debug('openstack release str: {}'.format(
self._get_openstack_release_string()))
def test_100_gluster_processes(self):
"""Verify that the expected service processes are running
on each gluster unit."""
# Process name and quantity of processes to expect on each unit
gluster_processes = {
'glusterd': 1,
'glusterfsd': 1,
}
# Units with process names and PID quantities expected
expected_processes = {
self.gluster0_sentry: gluster_processes,
self.gluster1_sentry: gluster_processes,
self.gluster2_sentry: gluster_processes
}
actual_pids = u.get_unit_process_ids(expected_processes)
ret = u.validate_unit_process_ids(expected_processes, actual_pids)
if ret:
amulet.raise_status(amulet.FAIL, msg=ret)
def test_102_services(self):
"""Verify the expected services are running on the corresponding
service units."""
u.log.debug('Checking system services on units...')
glusterfs_svcs = ['glusterfs-server']
service_names = {
self.gluster0_sentry: glusterfs_svcs,
}
ret = u.validate_services_by_name(service_names)
if ret:
amulet.raise_status(amulet.FAIL, msg=ret)
u.log.debug('OK')
def test_400_gluster_cmds_exit_zero(self):
"""Check basic functionality of gluster cli commands against
one gluster unit."""
sentry_units = [
self.gluster0_sentry,
]
commands = [
'sudo gluster vol status test',
'sudo gluster vol info test',
]
ret = u.check_commands_on_units(commands, sentry_units)
if ret:
amulet.raise_status(amulet.FAIL, msg=ret)

View File

@ -1,9 +0,0 @@
#!/usr/bin/env python
"""Amulet tests on a basic aodh deployment on xenial-ocata."""
from basic_deployment import GlusterFSBasicDeployment
if __name__ == '__main__':
deployment = GlusterFSBasicDeployment(series='xenial')
deployment.run_tests()

View File

@ -1,10 +0,0 @@
#!/usr/bin/env python
"""Amulet tests on a basic aodh deployment on xenial-ocata."""
from basic_deployment import GlusterFSBasicDeployment
if __name__ == '__main__':
deployment = GlusterFSBasicDeployment(series='xenial',
openstack='cloud:xenial-pike')
deployment.run_tests()

View File

@ -1,17 +0,0 @@
# Bootstrap the model if necessary.
bootstrap: True
# Re-use bootstrap node.
reset: True
# Use tox/requirements to drive the venv instead of bundletester's venv feature.
virtualenv: False
# Leave makefile empty, otherwise unit/lint tests will rerun ahead of amulet.
makefile: []
# Do not specify juju PPA sources. Juju is presumed to be pre-installed
# and configured in all test runner environments.
#sources:
# Do not specify or rely on system packages.
#packages:
# Do not specify python packages here. Use test-requirements.txt
# and tox instead. ie. The venv is constructed before bundletester
# is invoked.
#python-packages:

View File

@ -1,53 +0,0 @@
# Source charm: ./src/tox.ini
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos.
[tox]
envlist = pep8
skipsdist = True
[testenv]
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
AMULET_SETUP_TIMEOUT=2700
whitelist_externals = juju
passenv = HOME TERM AMULET_* CS_API_URL
deps = -r{toxinidir}/test-requirements.txt
install_command =
pip install --allow-unverified python-apt {opts} {packages}
[testenv:pep8]
basepython = python2.7
commands = charm-proof
[testenv:func27-noop]
# DRY RUN - For Debug
basepython = python2.7
commands =
bundletester -vl DEBUG -r json -o func-results.json --test-pattern "gate-*" -n --no-destroy
[testenv:func27]
# Run all gate tests which are +x (expected to always pass)
basepython = python2.7
commands =
bundletester -vl DEBUG -r json -o func-results.json --test-pattern "gate-*" --no-destroy
[testenv:func27-smoke]
# Run a specific test as an Amulet smoke test (expected to always pass)
basepython = python2.7
commands =
bundletester -vl DEBUG -r json -o func-results.json gate-basic-xenial-mitaka --no-destroy
[testenv:func27-dfs]
# Run all deploy-from-source tests which are +x (may not always pass!)
basepython = python2.7
commands =
bundletester -vl DEBUG -r json -o func-results.json --test-pattern "dfs-*" --no-destroy
[testenv:func27-dev]
# Run all development test targets which are +x (may not always pass!)
basepython = python2.7
commands =
bundletester -vl DEBUG -r json -o func-results.json --test-pattern "dev-*" --no-destroy
[testenv:venv]
commands = {posargs}

View File

@ -1,3 +0,0 @@
result
pyudev
glustercli

View File

@ -1,12 +0,0 @@
# Unit test requirements
flake8>=2.2.4,<=2.4.1
os-testr>=0.4.1
charms.reactive
mock>=1.2
coverage>=3.6
git+https://github.com/openstack/charms.openstack#egg=charms.openstack
pyudev>=0.16
result>=0.2.2
netifaces>=0.10.4
dnspython3>=1.15.0
glustercli>=0.3

55
tox.ini
View File

@ -1,55 +0,0 @@
# Source charm: ./tox.ini
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos.
[tox]
skipsdist = True
envlist = pep8,py34,py35
skip_missing_interpreters = True
[testenv]
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
TERM=linux
LAYER_PATH={toxinidir}/layers
INTERFACE_PATH={toxinidir}/interfaces
JUJU_REPOSITORY={toxinidir}/build
passenv = http_proxy https_proxy
install_command =
pip install {opts} {packages}
deps =
-r{toxinidir}/requirements.txt
[testenv:build]
basepython = python2.7
commands =
charm-build --log-level DEBUG -o {toxinidir}/build src {posargs}
[testenv:py27]
basepython = python2.7
# Reactive source charms are Python3-only, but a py27 unit test target
# is required by OpenStack Governance. Remove this shim as soon as
# permitted. http://governance.openstack.org/reference/cti/python_cti.html
whitelist_externals = true
commands = true
[testenv:py34]
basepython = python3.4
deps = -r{toxinidir}/test-requirements.txt
commands = ostestr {posargs}
[testenv:py35]
basepython = python3.5
deps = -r{toxinidir}/test-requirements.txt
commands = ostestr {posargs}
[testenv:pep8]
basepython = python3.5
deps = -r{toxinidir}/test-requirements.txt
commands = flake8 {posargs} src unit_tests
[testenv:venv]
commands = {posargs}
[flake8]
# E402 ignore necessary for path append before sys module import in actions
ignore = E402

View File

@ -1,18 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
sys.path.append('src')
sys.path.append('src/lib')

View File

@ -1,13 +0,0 @@
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/xubuntu--vg--ssd-root / ext4 noatime,errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=378f3c86-b21a-4172-832d-e2b3d4bc7511 /boot ext2 defaults 0 2
/dev/mapper/xubuntu--vg--ssd-swap_1 none swap sw 0 0
UUID=be8a49b9-91a3-48df-b91b-20a0b409ba0f /mnt/raid ext4 errors=remount-ro,user 0 1
# tmpfs /tmp tmpfs rw,nosuid,nodev

View File

@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<peerStatus>
<peer>
<uuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</uuid>
<hostname>172.31.12.7</hostname>
<hostnames>
<hostname>172.31.12.7</hostname>
</hostnames>
<connected>1</connected>
<state>3</state>
<stateStr>Peer in Cluster</stateStr>
</peer>
<peer>
<uuid>15af92ad-ae64-4aba-89db-73730f2ca6ec</uuid>
<hostname>172.31.21.242</hostname>
<hostnames>
<hostname>172.31.21.242</hostname>
</hostnames>
<connected>1</connected>
<state>3</state>
<stateStr>Peer in Cluster</stateStr>
</peer>
</peerStatus>
</cliOutput>

View File

@ -1,33 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<peerStatus>
<peer>
<uuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</uuid>
<hostname>172.31.12.7</hostname>
<hostnames>
<hostname>172.31.12.7</hostname>
</hostnames>
<connected>1</connected>
<state>3</state>
<stateStr>Peer in Cluster</stateStr>
</peer>
<peer>
<uuid>15af92ad-ae64-4aba-89db-73730f2ca6ec</uuid>
<hostname>172.31.21.242</hostname>
<hostnames>
<hostname>172.31.21.242</hostname>
</hostnames>
<connected>1</connected>
<state>3</state>
<stateStr>Peer in Cluster</stateStr>
</peer>
<peer>
<uuid>cebf02bb-a304-4058-986e-375e2e1e5313</uuid>
<hostname>localhost</hostname>
<connected>1</connected>
</peer>
</peerStatus>
</cliOutput>

View File

@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volQuota>
<limit>
<path>/</path>
<hard_limit>10240</hard_limit>
<soft_limit_percent>80%</soft_limit_percent>
<soft_limit_value>8192</soft_limit_value>
<used_space>0</used_space>
<avail_space>10240</avail_space>
<sl_exceeded>No</sl_exceeded>
<hl_exceeded>No</hl_exceeded>
</limit>
<limit>
<path>/test2</path>
<hard_limit>10240</hard_limit>
<soft_limit_percent>80%</soft_limit_percent>
<soft_limit_value>8192</soft_limit_value>
<used_space>0</used_space>
<avail_space>10240</avail_space>
<sl_exceeded>No</sl_exceeded>
<hl_exceeded>No</hl_exceeded>
</limit>
</volQuota>
</cliOutput>

View File

@ -1,54 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import unittest
import mock
from lib.charm.gluster.volume import Quota
from reactive import actions
from result import Ok
mock_apt = mock.MagicMock()
sys.modules['apt'] = mock_apt
mock_apt.apt_pkg = mock.MagicMock()
class Test(unittest.TestCase):
@mock.patch('reactive.actions.volume')
@mock.patch('reactive.actions.action_get')
@mock.patch('reactive.actions.action_set')
def testListVolQuotas(self, _action_set, _action_get,
_volume):
_volume.quota_list.return_value = Ok(
[Quota(path="/test1",
used=10,
avail=90,
hard_limit=90,
soft_limit=80,
hard_limit_exceeded=False,
soft_limit_exceeded=False,
soft_limit_percentage="80%")])
_volume.volume_quotas_enabled.return_value = Ok(True)
_action_get.return_value = "test"
actions.list_volume_quotas()
_action_set.assert_called_with(
{"quotas": "path:/test1 limit:90 used:10"})
def testSetVolOptions(self):
pass
if __name__ == "__main__":
unittest.main()

View File

@ -1,82 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import mock
from result import Ok
from lib.charm.gluster import block
class Test(unittest.TestCase):
def testGetDeviceInfo(self):
pass
@mock.patch('lib.charm.gluster.block.scan_devices')
@mock.patch('lib.charm.gluster.block.storage_get')
@mock.patch('lib.charm.gluster.block.storage_list')
@mock.patch('lib.charm.gluster.block.log')
def testGetJujuBricks(self, _log, _storage_list, _storage_get,
_scan_devices):
_storage_list.return_value = ['data/0', 'data/1', 'data/2']
_storage_get.side_effect = lambda x, y: "/dev/{}".format(
y.split('/')[1])
_scan_devices.return_value = Ok(["/dev/0", "/dev/1", "/dev/2"])
bricks = block.get_juju_bricks()
self.assertTrue(bricks.is_ok())
self.assertListEqual(["/dev/0", "/dev/1", "/dev/2"], bricks.value)
@mock.patch('lib.charm.gluster.block.scan_devices')
@mock.patch('lib.charm.gluster.block.config')
@mock.patch('lib.charm.gluster.block.log')
def testGetManualBricks(self, _log, _config, _scan_devices):
_config.return_value = "/dev/sda /dev/sdb /dev/sdc"
_scan_devices.return_value = Ok(["/dev/sda", "/dev/sdb", "/dev/sdc"])
bricks = block.get_manual_bricks()
self.assertTrue(bricks.is_ok())
self.assertListEqual(["/dev/sda", "/dev/sdb", "/dev/sdc"],
bricks.value)
def testSetElevator(self):
pass
@mock.patch('lib.charm.gluster.block.is_block_device')
@mock.patch('lib.charm.gluster.block.device_initialized')
@mock.patch('lib.charm.gluster.block.log')
def testScanDevices(self, _log, _is_block_device, _device_initialized):
expected = [
block.BrickDevice(is_block_device=True, initialized=True,
mount_path="/mnt/sda", dev_path="/dev/sda"),
block.BrickDevice(is_block_device=True, initialized=True,
mount_path="/mnt/sdb", dev_path="/dev/sdb"),
block.BrickDevice(is_block_device=True, initialized=True,
mount_path="/mnt/sdc", dev_path="/dev/sdc")
]
_is_block_device.return_value = Ok(True)
_device_initialized.return_value = Ok(True)
result = block.scan_devices(["/dev/sda", "/dev/sdb", "/dev/sdc"])
self.assertTrue(result.is_ok())
self.assertListEqual(expected, result.value)
# @mock.patch('lib.charm.gluster.block.log')
# def testWeeklyDefrag(self, _log):
# block.weekly_defrag(mount="/mnt/sda",
# fs_type=block.FilesystemType.Xfs,
# interval="daily")
# pass
if __name__ == "__main__":
unittest.main()

View File

@ -1,77 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import unittest
from lib.charm.gluster.fstab import FsEntry, FsTab
from mock import patch
from result import Ok
class Test(unittest.TestCase):
@patch.object(FsTab, 'save_fstab')
def testAddEntry(self, _save_fstab):
_save_fstab.return_value = Ok(())
fstab = FsTab(os.path.join("unit_tests", "fstab"))
result = fstab.add_entry(FsEntry(
fs_spec="/dev/test",
mountpoint="/mnt/test",
vfs_type="xfs",
mount_options=["defaults"],
dump=False,
fsck_order=2
))
self.assertTrue(result.is_ok())
def testParser(self):
expected_results = [
FsEntry(
fs_spec="/dev/mapper/xubuntu--vg--ssd-root",
mountpoint=os.path.join(os.sep),
vfs_type="ext4",
mount_options=["noatime", "errors=remount-ro"],
dump=False,
fsck_order=1),
FsEntry(
fs_spec="UUID=378f3c86-b21a-4172-832d-e2b3d4bc7511",
mountpoint=os.path.join(os.sep, "boot"),
vfs_type="ext2",
mount_options=["defaults"],
dump=False,
fsck_order=2),
FsEntry(
fs_spec="/dev/mapper/xubuntu--vg--ssd-swap_1",
mountpoint="none",
vfs_type="swap",
mount_options=["sw"],
dump=False,
fsck_order=0),
FsEntry(
fs_spec="UUID=be8a49b9-91a3-48df-b91b-20a0b409ba0f",
mountpoint=os.path.join(os.sep, "mnt", "raid"),
vfs_type="ext4",
mount_options=["errors=remount-ro", "user"],
dump=False,
fsck_order=1)
]
with open('unit_tests/fstab', 'r') as f:
fstab = FsTab(os.path.join(os.sep, "fake"))
results = fstab.parse_entries(f)
for result in results.value:
self.assertTrue(result in expected_results)
if __name__ == "__main__":
unittest.main()

View File

@ -1,32 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from unittest.mock import MagicMock
import mock
from lib.charm.gluster import heal
class Test(unittest.TestCase):
@mock.patch('os.listdir')
def testGetHealCount(self, _listdir):
_listdir.return_value = ['xattrop_one', 'healme', 'andme']
brick = MagicMock(path='/export/brick1/')
count = heal.get_self_heal_count(brick)
self.assertEqual(2, count, "Expected 2 objects to need healing")
if __name__ == "__main__":
unittest.main()

View File

@ -1,148 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import uuid
import mock
from lib.charm.gluster import lib
from lib.charm.gluster.peer import Peer, State
from lib.charm.gluster.volume import Brick, Volume, VolumeType, Transport
class Test(unittest.TestCase):
@mock.patch('lib.charm.gluster.lib.log')
def testPeersAreNotReady(self, _log):
peer_list = [
Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc1'),
hostname="host-{}".format(
uuid.UUID('8fd64553-8925-41f5-b64a-1ba4d359c73b')),
status=State.PeerInCluster),
Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc2'),
hostname="host-{}".format(
uuid.UUID('8fd64553-8925-41f5-b64a-1ba4d359c73c')),
status=State.AcceptedPeerRequest),
]
result = lib.peers_are_ready(peer_list)
self.assertFalse(result)
@mock.patch('lib.charm.gluster.lib.log')
def testPeersAreReady(self, _log):
peer_list = [
Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc1'),
hostname="host-{}".format(
uuid.UUID('8fd64553-8925-41f5-b64a-1ba4d359c73b')),
status=State.Connected),
Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc2'),
hostname="host-{}".format(
uuid.UUID('8fd64553-8925-41f5-b64a-1ba4d359c73c')),
status=State.Connected),
]
result = lib.peers_are_ready(peer_list)
self.assertTrue(result)
def testFindNewPeers(self):
peer1 = Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc1'),
hostname="192.168.10.2",
status=State.PeerInCluster)
peer2 = Peer(uuid=uuid.UUID('3da2c343-7c67-499d-a6bb-68591cc72bc2'),
hostname="192.168.10.3",
status=State.AcceptedPeerRequest)
# glusterfs-0 and glusterfs-1 are in the cluster but only glusterfs-0
# is actually serving a brick. find_new_peers should
# return glusterfs-1 as a new peer
peers = {
"glusterfs-0": {
"address": peer1.hostname,
"bricks": ["/mnt/brick1"]
},
"glusterfs-1": {
"address": peer2.hostname,
"bricks": []
}}
existing_brick = Brick(peer=peer1,
brick_uuid=uuid.UUID(
'3da2c343-7c67-499d-a6bb-68591cc72bc1'),
path="/mnt/brick1",
is_arbiter=False)
volume_info = Volume(name="test",
vol_type=VolumeType.Replicate,
vol_id=uuid.uuid4(),
status="online", bricks=[existing_brick],
arbiter_count=0, disperse_count=0, dist_count=0,
replica_count=3, redundancy_count=0,
stripe_count=0, transport=Transport.Tcp,
snapshot_count=0, options={})
new_peers = lib.find_new_peers(peers=peers, volume_info=volume_info)
self.assertDictEqual(new_peers,
{"glusterfs-1": {
"address": "192.168.10.3",
"bricks": []}}
)
def testProduct(self):
peer1 = Peer(uuid=None,
hostname="server1",
status=None)
peer2 = Peer(uuid=None,
hostname="server2",
status=None)
expected = [
Brick(peer=peer1,
brick_uuid=None,
path="/mnt/brick1",
is_arbiter=False),
Brick(peer=peer2,
brick_uuid=None,
path="/mnt/brick1",
is_arbiter=False),
Brick(peer=peer1,
brick_uuid=None,
path="/mnt/brick2",
is_arbiter=False),
Brick(peer=peer2,
brick_uuid=None,
path="/mnt/brick2",
is_arbiter=False)
]
peers = {
"glusterfs-0": {
"address": "192.168.10.2",
"bricks": ["/mnt/brick1", "/mnt/brick2"]
},
"glusterfs-1": {
"address": "192.168.10.3",
"bricks": ["/mnt/brick1", "/mnt/brick2"]
}}
result = lib.brick_and_server_product(peers=peers)
self.assertListEqual(result, expected)
class TestTranslateToBytes(unittest.TestCase):
def setUp(self):
self.tests = {
"1TB": 1099511627776.0,
"8.2KB": 8396.8,
"2Bytes": 2.0
}
def test(self):
for test, correct in self.tests.items():
self.assertEqual(lib.translate_to_bytes(test), correct)
def tearDown(self):
pass
if __name__ == "__main__":
unittest.main()

View File

@ -1,65 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import uuid
from ipaddress import ip_address
import mock
from lib.charm.gluster import peer
from lib.charm.gluster.peer import Peer, State
class Test(unittest.TestCase):
@mock.patch('lib.charm.gluster.peer.peer_list')
def testGetPeer(self, _peer_list):
existing_peers = [
peer.Peer(
uuid=uuid.UUID("663bbc5b-c9b4-4a02-8b56-85e05e1b01c8"),
hostname=ip_address("172.31.12.7"),
status=peer.State.PeerInCluster),
peer.Peer(
uuid=uuid.UUID("15af92ad-ae64-4aba-89db-73730f2ca6ec"),
hostname=ip_address("172.31.21.242"),
status=peer.State.PeerInCluster)
]
_peer_list.return_value = existing_peers
result = peer.get_peer(hostname=ip_address('172.31.21.242'))
self.assertIs(result, existing_peers[1])
@mock.patch('lib.charm.gluster.peer.gpeer.pool')
def testPeerList(self, _peer_pool):
        # parse_peer_list has dedicated coverage; here we only verify
        # that peer_list() consults gpeer.pool.
        peer.peer_list()
        self.assertTrue(_peer_pool.called)
@mock.patch('lib.charm.gluster.peer.peer_list')
@mock.patch('lib.charm.gluster.peer.gpeer.probe')
def testPeerProbe(self, _peer_probe, _peer_list):
_peer_list.return_value = [
Peer(hostname="172.31.18.192",
uuid=uuid.UUID('832e2e64-24c7-4f05-baf5-42431fd801e2'),
status=State.Connected),
Peer(hostname="localhost",
uuid=uuid.UUID('d16f8c77-a0c5-4c31-a8eb-0cfbf7d7d1a5'),
status=State.Connected)]
# Probe a new hostname that's not currently in the cluster
peer.peer_probe(hostname='172.31.18.194')
_peer_probe.assert_called_with('172.31.18.194')
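    # A plausible sketch of peer.peer_probe given this test: only probe
    # hosts that are not already in the pool (hypothetical; the deleted
    # implementation may have differed).
    #
    #     def peer_probe(hostname):
    #         if any(p.hostname == hostname for p in peer_list()):
    #             return
    #         gpeer.probe(hostname)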
if __name__ == "__main__":
unittest.main()

View File

@ -1,22 +0,0 @@
"""
def test_parse():
shell_script =
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#exit 0
c = std.io.Cursor.new(shell_script)
result = parse(c)
# println!("Result: :}", result)
buff = []
result2 = result.write(buff)
"""

View File

@ -1,157 +0,0 @@
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import uuid
from ipaddress import ip_address
import mock
from lib.charm.gluster import peer, volume
# mock_apt = mock.MagicMock()
# sys.modules['apt'] = mock_apt
# mock_apt.apt_pkg = mock.MagicMock()
peer_1 = peer.Peer(
uuid=uuid.UUID("39bdbbd6-5271-4c23-b405-cc0b67741ebc"),
hostname="172.20.21.231", status=None)
peer_2 = peer.Peer(
uuid=uuid.UUID("a51b28e8-6f06-4563-9a5f-48f3f31a6713"),
hostname="172.20.21.232", status=None)
peer_3 = peer.Peer(
uuid=uuid.UUID("57dd0230-50d9-452a-be8b-8f9dd9fe0264"),
hostname="172.20.21.233", status=None)
brick_list = [
volume.Brick(
brick_uuid=uuid.UUID("12d4bd98-e102-4174-b99a-ef76f849474e"),
peer=peer_1,
path="/mnt/sdb",
is_arbiter=False),
volume.Brick(
brick_uuid=uuid.UUID("a563d73c-ef3c-47c6-b50d-ddc800ef5dae"),
peer=peer_2,
path="/mnt/sdb",
is_arbiter=False),
volume.Brick(
brick_uuid=uuid.UUID("cc4a3f0a-f152-4e40-ab01-598f53eb83f9"),
peer=peer_3,
path="/mnt/sdb", is_arbiter=False)
]
class Test(unittest.TestCase):
def testGetLocalBricks(self):
pass
def testOkToRemove(self):
pass
@mock.patch("lib.charm.gluster.volume.unit_get")
@mock.patch("lib.charm.gluster.volume.get_host_ip")
def testGetLocalIp(self, _get_host_ip, _unit_get):
_unit_get.return_value = "192.168.1.6"
_get_host_ip.return_value = "192.168.1.6"
result = volume.get_local_ip()
self.assertTrue(result.is_ok())
        self.assertEqual(result.value, ip_address("192.168.1.6"))
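    # A minimal sketch of volume.get_local_ip consistent with this test,
    # assuming the charm's Ok/Result wrapper and charmhelpers' unit_get
    # (hypothetical; the deleted implementation may have differed).
    #
    #     def get_local_ip():
    #         address = get_host_ip(unit_get('private-address'))
    #         return Ok(ip_address(address))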
def testParseQuotaList(self):
expected_quotas = [
volume.Quota(path="/", hard_limit=10240, soft_limit=8192,
soft_limit_percentage="80%", used=0, avail=10240,
soft_limit_exceeded="No", hard_limit_exceeded="No"),
volume.Quota(path="/test2", hard_limit=10240, soft_limit=8192,
soft_limit_percentage="80%", used=0, avail=10240,
soft_limit_exceeded="No", hard_limit_exceeded="No"),
]
        with open('unit_tests/quota_list.xml', 'r') as xml_output:
            result = volume.parse_quota_list(xml_output.read())
        self.assertTrue(result.is_ok())
        self.assertEqual(len(result.value), 2)
        for quota in result.value:
            self.assertIn(quota, expected_quotas)
def testVolumeAddBrick(self):
pass
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateArbiter(self, _volume_create):
volume.volume_create_arbiter(vol="test", replica_count=3,
arbiter_count=1,
transport=volume.Transport.Tcp,
bricks=brick_list, force=False)
_volume_create.assert_called_with(
volname='test', replica=3, arbiter=1, transport='tcp',
volbricks=[str(b) for b in brick_list], force=False)
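    # Each volume_create_* helper under test is plausibly a thin wrapper
    # over the gluster CLI binding, e.g. for the arbiter case
    # (hypothetical; the deleted implementation may have differed).
    #
    #     def volume_create_arbiter(vol, replica_count, arbiter_count,
    #                               transport, bricks, force):
    #         volume.create(volname=vol, replica=replica_count,
    #                       arbiter=arbiter_count, transport=transport.value,
    #                       volbricks=[str(b) for b in bricks], force=force)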
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateDistributed(self, _volume_create):
volume.volume_create_distributed(vol="test",
transport=volume.Transport.Tcp,
bricks=brick_list, force=False)
_volume_create.assert_called_with(volname="test", transport='tcp',
volbricks=[str(b) for b in
brick_list], force=False)
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateErasure(self, _volume_create):
volume.volume_create_erasure(vol="test", disperse_count=1,
redundancy_count=3,
transport=volume.Transport.Tcp,
bricks=brick_list, force=False)
_volume_create.assert_called_with(
volname='test', disperse=1, redundancy=3, transport='tcp',
volbricks=[str(b) for b in brick_list], force=False)
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateReplicated(self, _volume_create):
volume.volume_create_replicated(vol="test", replica_count=3,
transport=volume.Transport.Tcp,
bricks=brick_list, force=False)
_volume_create.assert_called_with(
volname='test', replica=3, transport='tcp',
volbricks=[str(b) for b in brick_list], force=False)
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateStriped(self, _volume_create):
        volume.volume_create_striped(vol="test", stripe_count=3,
                                     transport=volume.Transport.Tcp,
                                     bricks=brick_list, force=False)
_volume_create.assert_called_with(
volname='test', stripe=3, transport='tcp',
volbricks=[str(b) for b in brick_list], force=False)
@mock.patch('lib.charm.gluster.volume.volume.create')
def testVolumeCreateStripedReplicated(self, _volume_create):
volume.volume_create_striped_replicated(vol="test", stripe_count=1,
replica_count=3,
transport=volume.Transport.Tcp,
bricks=brick_list, force=False)
_volume_create.assert_called_with(
volname='test', stripe=1, replica=3,
transport='tcp', volbricks=[str(b) for b in brick_list],
force=False)
def testVolumeSetBitrotOption(self):
pass
def testVolumeSetOptions(self):
pass
if __name__ == "__main__":
unittest.main()

View File

@ -1,162 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volInfo>
<volumes>
<volume>
<name>chris</name>
<id>f96dcd18-3235-4dcc-85cf-77c5cdec0951</id>
<status>1</status>
<statusStr>Started</statusStr>
<snapshotCount>0</snapshotCount>
<brickCount>12</brickCount>
<distCount>3</distCount>
<stripeCount>1</stripeCount>
<replicaCount>3</replicaCount>
<arbiterCount>0</arbiterCount>
<disperseCount>0</disperseCount>
<redundancyCount>0</redundancyCount>
<type>7</type>
<typeStr>Distributed-Replicate</typeStr>
<transport>0</transport>
<xlators/>
<bricks>
<brick uuid="663bbc5b-c9b4-4a02-8b56-85e05e1b01c8">
172.31.12.7:/mnt/xvdb
<name>172.31.12.7:/mnt/xvdb</name>
<hostUuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="15af92ad-ae64-4aba-89db-73730f2ca6ec">
172.31.21.242:/mnt/xvdb
<name>172.31.21.242:/mnt/xvdb</name>
<hostUuid>15af92ad-ae64-4aba-89db-73730f2ca6ec
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="cebf02bb-a304-4058-986e-375e2e1e5313">
172.31.39.30:/mnt/xvdb
<name>172.31.39.30:/mnt/xvdb</name>
<hostUuid>cebf02bb-a304-4058-986e-375e2e1e5313
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="663bbc5b-c9b4-4a02-8b56-85e05e1b01c8">
172.31.12.7:/mnt/xvdh
<name>172.31.12.7:/mnt/xvdh</name>
<hostUuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="15af92ad-ae64-4aba-89db-73730f2ca6ec">
172.31.21.242:/mnt/xvdh
<name>172.31.21.242:/mnt/xvdh</name>
<hostUuid>15af92ad-ae64-4aba-89db-73730f2ca6ec
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="cebf02bb-a304-4058-986e-375e2e1e5313">
172.31.39.30:/mnt/xvdh
<name>172.31.39.30:/mnt/xvdh</name>
<hostUuid>cebf02bb-a304-4058-986e-375e2e1e5313
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="663bbc5b-c9b4-4a02-8b56-85e05e1b01c8">
172.31.12.7:/mnt/xvdg
<name>172.31.12.7:/mnt/xvdg</name>
<hostUuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="15af92ad-ae64-4aba-89db-73730f2ca6ec">
172.31.21.242:/mnt/xvdg
<name>172.31.21.242:/mnt/xvdg</name>
<hostUuid>15af92ad-ae64-4aba-89db-73730f2ca6ec
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="cebf02bb-a304-4058-986e-375e2e1e5313">
172.31.39.30:/mnt/xvdg
<name>172.31.39.30:/mnt/xvdg</name>
<hostUuid>cebf02bb-a304-4058-986e-375e2e1e5313
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="663bbc5b-c9b4-4a02-8b56-85e05e1b01c8">
172.31.12.7:/mnt/xvdf
<name>172.31.12.7:/mnt/xvdf</name>
<hostUuid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="15af92ad-ae64-4aba-89db-73730f2ca6ec">
172.31.21.242:/mnt/xvdf
<name>172.31.21.242:/mnt/xvdf</name>
<hostUuid>15af92ad-ae64-4aba-89db-73730f2ca6ec
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
<brick uuid="cebf02bb-a304-4058-986e-375e2e1e5313">
172.31.39.30:/mnt/xvdf
<name>172.31.39.30:/mnt/xvdf</name>
<hostUuid>cebf02bb-a304-4058-986e-375e2e1e5313
</hostUuid>
<isArbiter>0</isArbiter>
</brick>
</bricks>
<optCount>11</optCount>
<options>
<option>
<name>cluster.favorite-child-policy</name>
<value>size</value>
</option>
<option>
<name>performance.rda-cache-limit</name>
<value>20971520</value>
</option>
<option>
<name>performance.readdir-ahead</name>
<value>On</value>
</option>
<option>
<name>performance.parallel-readdir</name>
<value>On</value>
</option>
<option>
<name>diagnostics.stats-dnscache-ttl-sec</name>
<value>3600</value>
</option>
<option>
<name>diagnostics.stats-dump-interval</name>
<value>30</value>
</option>
<option>
<name>diagnostics.fop-sample-interval</name>
<value>5</value>
</option>
<option>
<name>diagnostics.count-fop-hits</name>
<value>On</value>
</option>
<option>
<name>diagnostics.latency-measurement</name>
<value>On</value>
</option>
<option>
<name>transport.address-family</name>
<value>inet</value>
</option>
<option>
<name>nfs.disable</name>
<value>Off</value>
</option>
</options>
</volume>
<count>1</count>
</volumes>
</volInfo>
</cliOutput>

View File

@ -1,10 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volList>
<count>1</count>
<volume>chris</volume>
</volList>
</cliOutput>

View File

@ -1,231 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>chris</volName>
<nodeCount>18</nodeCount>
<node>
<hostname>172.31.12.7</hostname>
<path>/mnt/xvdb</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23772</pid>
</node>
<node>
<hostname>172.31.21.242</hostname>
<path>/mnt/xvdb</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23871</pid>
</node>
<node>
<hostname>172.31.39.30</hostname>
<path>/mnt/xvdb</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>49152</port>
<ports>
<tcp>49152</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24261</pid>
</node>
<node>
<hostname>172.31.12.7</hostname>
<path>/mnt/xvdh</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>49153</port>
<ports>
<tcp>49153</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23791</pid>
</node>
<node>
<hostname>172.31.21.242</hostname>
<path>/mnt/xvdh</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>49153</port>
<ports>
<tcp>49153</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23890</pid>
</node>
<node>
<hostname>172.31.39.30</hostname>
<path>/mnt/xvdh</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>49153</port>
<ports>
<tcp>49153</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24280</pid>
</node>
<node>
<hostname>172.31.12.7</hostname>
<path>/mnt/xvdg</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23810</pid>
</node>
<node>
<hostname>172.31.21.242</hostname>
<path>/mnt/xvdg</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23909</pid>
</node>
<node>
<hostname>172.31.39.30</hostname>
<path>/mnt/xvdg</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24299</pid>
</node>
<node>
<hostname>172.31.12.7</hostname>
<path>/mnt/xvdf</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23829</pid>
</node>
<node>
<hostname>172.31.21.242</hostname>
<path>/mnt/xvdf</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23928</pid>
</node>
<node>
<hostname>172.31.39.30</hostname>
<path>/mnt/xvdf</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24318</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24680</pid>
</node>
<node>
<hostname>Self-heal Daemon</hostname>
<path>localhost</path>
<peerid>cebf02bb-a304-4058-986e-375e2e1e5313</peerid>
<status>1</status>
<port>N/A</port>
<ports>
<tcp>N/A</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24338</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>172.31.21.242</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24132</pid>
</node>
<node>
<hostname>Self-heal Daemon</hostname>
<path>172.31.21.242</path>
<peerid>15af92ad-ae64-4aba-89db-73730f2ca6ec</peerid>
<status>1</status>
<port>N/A</port>
<ports>
<tcp>N/A</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23948</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>172.31.12.7</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>2049</port>
<ports>
<tcp>2049</tcp>
<rdma>N/A</rdma>
</ports>
<pid>24032</pid>
</node>
<node>
<hostname>Self-heal Daemon</hostname>
<path>172.31.12.7</path>
<peerid>663bbc5b-c9b4-4a02-8b56-85e05e1b01c8</peerid>
<status>1</status>
<port>N/A</port>
<ports>
<tcp>N/A</tcp>
<rdma>N/A</rdma>
</ports>
<pid>23849</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>