Add Ceph roadmap items

This commit is contained in:
Chris Holcombe 2016-05-27 10:57:51 -07:00
parent 705d62b5bd
commit 93693300e2
4 changed files with 417 additions and 0 deletions


@@ -0,0 +1,96 @@
..
  Copyright 2016, Canonical UK

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.

  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html
===============================
Ceph Autotune
===============================
Ceph isn't as fast as it could be out of the box.
Problem Description
===================
Currently we don't do any tuning of Ceph after installing it. There are many
small settings that have to be tweaked to make Ceph performant, and many
mailing list threads from users frustrated with its slow out-of-the-box
performance. The goal of this epic is to capture the low-hanging fruit so
everyone can have a better experience with Ceph.
Proposed Change
===============
I'm proposing several areas where easy improvements can be made, such as HDD
read-ahead and max hardware sector settings. We can also tune the sysctl
network settings on 10Gb and 40Gb networks. Finally, I've noticed that IRQ
alignment on multi-socket motherboards isn't ideal. We can use numactl to pin
the IRQs for all the components in the IO path, from network to OSD to disk;
a sketch follows below. Of all the changes, the IRQ pinning will probably be
the most difficult.
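As a rough illustration of the IRQ pinning idea, here is a minimal sketch; the
device name, IRQ number and NUMA node are hypothetical and would be discovered
at runtime:

.. code-block:: bash

    # Find which NUMA node the NIC is attached to (eth0 is illustrative):
    cat /sys/class/net/eth0/device/numa_node

    # Steer the NIC's interrupts to a CPU on that node (IRQ 42 is hypothetical):
    echo 2 > /proc/irq/42/smp_affinity_list

    # Start an OSD bound to the same node so the IO path stays NUMA-local:
    numactl --cpunodebind=0 --membind=0 ceph-osd -i 3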
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
cholcombe973
Gerrit Topic
------------
Use Gerrit topic "performance-optimizations" for all patches related to this spec.
.. code-block:: bash

    git-review -t performance-optimizations
Work Items
----------
1. HDD read-ahead settings (see the sketch after this list).

   * The default read-ahead settings for HDDs are often too small for storage servers.

2. 10/40Gb sysctl settings

   * 10Gb and 40Gb networks need special sysctl settings to reach their potential throughput.

3. HDD max hardware sectors

   * Making the max hardware sectors as large as possible allows bigger sequential writes, and therefore better performance.

4. IRQ alignment for multi-socket motherboards

   * It has been shown that on multi-socket motherboards Linux does a poor job of lining up IRQs, resulting in a lot of cache memory bouncing between CPUs. This significantly degrades Ceph's performance.
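A minimal sketch of the first three tunables; the device names and values are
illustrative starting points for discussion, not what the charm will
necessarily ship:

.. code-block:: bash

    # 1. Raise the HDD read-ahead (value is in 512-byte sectors):
    blockdev --setra 4096 /dev/sdb

    # 2. sysctl settings commonly recommended for 10/40Gb links:
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # 3. Grow the per-request IO size toward the hardware limit:
    cat /sys/block/sdb/queue/max_hw_sectors_kb
    echo 1024 > /sys/block/sdb/queue/max_sectors_kb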
Repositories
------------
https://github.com/openstack/charm-ceph-osd/
Documentation
-------------
Updates to the READMEs in the impacted charms will be made as part of this
change.
Security
--------
No additional security concerns.
Testing
-------
Code changes will be covered by unit tests; functional testing will be done
using a Mojo specification.
Dependencies
============
- charm-layering


@@ -0,0 +1,99 @@
..
  Copyright 2016, Canonical UK

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.

  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html
===============================
Ceph Broker
===============================
Connect Ceph to existing OpenStack clouds.
Problem Description
===================
Some customers have asked how to connect a Juju-deployed Ceph cluster to an
existing OpenStack cloud that was deployed with home-grown or third-party
tooling. The problem is that it is not possible to relate Ceph to an
OpenStack cluster that Juju does not manage.
Proposed Change
===============
The idea is to create a charm that can provide the information that OpenStack
needs to connect to the existing Ceph cluster, as sketched below.
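As a rough sketch of how this could look; the charm and action names below
are assumptions for illustration, not a settled design:

.. code-block:: bash

    # Relate a hypothetical broker charm to the Juju-deployed Ceph cluster:
    juju deploy ceph-broker
    juju add-relation ceph-broker ceph-mon

    # Ask it for the details the external OpenStack needs (mon hosts,
    # fsid, client keys); the action name is illustrative:
    juju run-action ceph-broker/0 get-connection-info
    juju show-action-output <action-id>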
Alternatives
------------
The alternative is manually connecting Ceph and OpenStack. This is fine for
some customers but not acceptable for BootStack.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ChrisMacNaughton
Gerrit Topic
------------
Use Gerrit topic "<topic_name>" for all patches related to this spec.
.. code-block:: bash

    git-review -t <topic_name>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Repositories
------------
Will any new git repositories need to be created?
Documentation
-------------
Will this require a documentation change? If so, which documents?
Will it impact developer workflow? Will additional communication need
to be made?
Security
--------
Does this introduce any additional security risks, or are there
security-related considerations which should be discussed?
Testing
-------
What tests will be available or need to be constructed in order to
validate this? Unit/functional tests, development
environments/servers, etc.
Dependencies
============
- Include specific references to specs and/or stories, or in
other projects, that this one either depends on or is related to.
- Does this feature require any new library or program dependencies
not already in use?


@@ -0,0 +1,97 @@
..
  Copyright 2016, Canonical UK

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.

  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html
===============================
CephFS
===============================
CephFS has recently gone GA and we can now move forward with offering it as a
charm. Up until this point it was considered too experimental to store
production data on.
Problem Description
===================
There is currently no charm to deploy CephFS, so Juju users have no supported
way to consume a filesystem backed by their Ceph cluster.
Proposed Change
===============
A new CephFS charm will be created. This charm will leverage the Ceph base layer.
The effort required should be small once the Ceph base layer is ready to go.
Alternatives
------------
GlusterFS is an alternative to CephFS and will probably fit many users' needs.
However, some users don't want to deploy additional hardware to create another
cluster, so CephFS on an existing Ceph deployment is convenient in those cases.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
cholcombe973
Gerrit Topic
------------
Use Gerrit topic "cephfs" for all patches related to this spec.
.. code-block:: bash

    git-review -t cephfs
Work Items
----------
1. Create the ceph-fs charm utilizing the base Ceph layer (a usage sketch follows this list).

   - Expose interesting config items such as: mds cache size, mds bal mode.
   - Create actions to allow blacklisting misbehaving clients, breaking locks,
     creating new filesystems, add_data_pool, remove_data_pool, setting quotas, etc.

2. Create an interface to allow other charms to mount the filesystem.
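A hypothetical usage sketch once the charm exists; the option and action names
below are illustrative, not final:

.. code-block:: bash

    juju deploy ceph-fs
    juju add-relation ceph-fs ceph-mon

    # Tune one of the exposed config items:
    juju config ceph-fs mds-cache-size=100000

    # Run one of the proposed actions, e.g. evicting a misbehaving client:
    juju run-action ceph-fs/0 evict-client client-id=4305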
Repositories
------------
1. github.com/openstack/charm-ceph-fs
Documentation
-------------
A README.md will be created for the charm as part of the normal development
workflow.
Security
--------
No additional security concerns.
Testing
-------
A Mojo spec will be developed to exercise this charm, along with Amulet tests
if needed.
Dependencies
============
- This project depends on the Ceph Layering project being successful.


@@ -0,0 +1,125 @@
..
  Copyright 2016, Canonical UK

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.

  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html
===============================
Ceph Charms Layering
===============================
The Ceph charms need to shed technical debt to allow faster iteration.
Problem Description
===================
There is a lot of duplicate code in the ceph, ceph-mon, ceph-osd and
ceph-radosgw charms. The aim is to eliminate most of it using the new Juju
layers approach.
Proposed Change
===============
The new layering/reactive way of charming offers the Ceph charms significant
benefits. Between ceph-mon, ceph-osd, ceph and ceph-radosgw we have a lot of
duplicate code. Some of this code has been pulled out into the charmhelpers
library, but that can't be done for everything. The Juju layering approach
will allow the rest of the shared code to be consolidated into a base layer.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ChrisMacNaughton
Gerrit Topic
------------
Use Gerrit topic "<topic_name>" for all patches related to this spec.
.. code-block:: bash

    git-review -t <topic_name>
Work Items
----------
1. Create a Ceph base layer (a build sketch follows this list). This is the
   biggest piece of the project. All duplicate code that is shared between
   ceph, ceph-mon and ceph-osd will be consolidated down into this base layer.

   - Copy over unit tests

2. Create a ceph-mon layer that utilizes the Ceph base layer. Once the base
   layer is settled, the next step will be to build the ceph-mon charm on top
   of it.

   - Copy over unit tests

3. Create a ceph-osd layer that utilizes the Ceph base layer.

   - Copy over unit tests

4. Recreate the ceph charm by combining the ceph-osd and ceph-mon layers.
   With layering, the ceph charm essentially becomes just a combination of the
   other layers. We get this almost for free, and it should be easy to
   maintain all three charms going forward.

   - Copy over unit tests
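Under the layered model the development artifact is the layer source, and the
deployable charm is assembled from it. A sketch using charm-tools; the output
path is illustrative:

.. code-block:: bash

    # Run from the ceph-mon layer's source tree; charm build pulls in
    # layer:ceph-base and the interfaces listed under Repositories:
    charm build

    # Deploy the built artifact rather than the source layer:
    juju deploy ./builds/ceph-mon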
Repositories
------------
Yes. New repositories will initially be created; they will later migrate to
the OpenStack namespace.
1. https://github.com/ChrisMacNaughton/juju-interface-ceph-client
2. https://github.com/ChrisMacNaughton/juju-interface-ceph
3. https://github.com/ChrisMacNaughton/juju-interface-ceph-radosgw
4. https://github.com/ChrisMacNaughton/juju-interface-ceph-osd
5. https://github.com/ChrisMacNaughton/juju-layer-ceph-mon
6. https://github.com/ChrisMacNaughton/juju-layer-ceph-osd
Documentation
-------------
There shouldn't be any documentation changes needed for this; the aim is to
have functionality identical to what was there before.
This will impact developer workflow because at some point we will have to
freeze changes to the existing repos and cut over to the new version of the
charms. The repositories are also going to change: the goal is to publish the
layered version of each charm rather than the compiled bits. The test
automation will then compile the layered charm and use that for testing
instead of just cloning the repo as it does today.
Security
--------
No additional security concerns.
Testing
-------
Because this is such a radical change from what we had before, the only way to
accurately test it is to show functionality identical to the previous charms.
A Mojo spec will be used to demonstrate this.
Dependencies
============
- Include specific references to specs and/or stories, or in
other projects, that this one either depends on or is related to.