Trio2o provides an API gateway that allows multiple OpenStack clouds to act as a single OpenStack cloud.
joehuang 180f68eac8 Remove networking related code from the Trio2o
1. What is the problem?
Networking related code is still in the repository of
the Trio2o project.

2. What is the solution to the problem?
According to the blueprint for the Trio2o cleaning:
https://blueprints.launchpad.net/trio2o/+spec/trio2o-code-cleaning
Networking-related code, which was forked from the Tricircle
repository, should be removed from the Trio2o repository.
After the cleaning, the Trio2o should be able to run independently.
There is a lot to clean and update, and it has to be done in
one huge patch; otherwise the code in the Trio2o cannot be run
and tested properly:
1). Remove networking operations from the server controller
2). Update devstack script
3). Update installation guide
4). Update README
5). Remove network folder and network related unit tests
6). Rename Tricircle to Trio2o in all source code

THE MEANING OF FILE OPERATION:
D: delete a file
R: rename a file to another name
A: add a new file
C: copy a file

3. What features need to be implemented in the Tricircle to realize
the solution?
No new features.

Change-Id: I0b48ee38280e25ba6294ca3d5b7a0673cb368ed4
Signed-off-by: joehuang <joehuang@huawei.com>
2016-11-14 02:12:48 -05:00
cmd Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
devstack Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
doc/source Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
etc Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
releasenotes Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
specs Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
trio2o Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
.coveragerc Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
.gitignore Change the gate to OpenStack infrastructure 2015-12-15 12:09:09 +08:00
.gitreview Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
.testr.conf Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
CONTRIBUTING.rst Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
HACKING.rst Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
LICENSE Add source code to Tricircle 2014-09-25 15:56:40 +08:00
MANIFEST.in Align project files structure with cookiecutter template 2016-07-05 15:08:41 +08:00
README.rst Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
requirements.txt Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
setup.cfg Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
setup.py Manual sync from global-requirements 2016-05-26 13:23:29 +10:00
test-requirements.txt Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00
tox.ini Remove networking related code from the Trio2o 2016-11-14 02:12:48 -05:00

README.rst

Trio2o

The Trio2o provides an OpenStack API gateway that allows multiple OpenStack instances, spanning one site, multiple sites, or a hybrid cloud, to be managed as a single OpenStack cloud.

The Trio2o and the managed OpenStack instances use a shared Keystone (deployed centrally or distributed) or federated Keystones for identity management.

The Trio2o is presented to the end user as one big region in Keystone. Each managed OpenStack instance, called a pod, is a sub-region of the Trio2o in Keystone and is usually not directly visible to the end user.
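Purely as an illustration (the Keystone URL, credentials, and region names below are assumptions, not values shipped with the Trio2o), the pod-as-sub-region model can be inspected through the standard Keystone v3 regions API::

    # A minimal sketch: list regions and their parent regions; a pod registered
    # by the Trio2o appears as a region whose parent_region_id points to the
    # Trio2o's top region.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone-host:5000/v3',  # assumed shared Keystone
                       username='admin', password='password',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    for region in keystone.regions.list():
        print(region.id, getattr(region, 'parent_region_id', None))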

Acting as an OpenStack API gateway, the Trio2o handles OpenStack API calls, schedules a proper OpenStack instance if needed while handling a call, and forwards the call to the appropriate OpenStack instance.
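Because the gateway speaks the standard OpenStack APIs, forwarding is transparent to the caller. The hedged sketch below sends an ordinary Nova request to the gateway; the gateway URL, port, tenant id, and token are placeholders rather than documented values::

    # List servers through the Trio2o Nova API gateway; the Trio2o handles the
    # call and forwards it to the bound bottom OpenStack instance(s).
    import requests

    TRIO2O_NOVA_URL = 'http://trio2o-host:19998/v2.1'  # hypothetical gateway URL/port
    TENANT_ID = '<tenant-id>'                          # placeholder
    TOKEN = '<keystone-token>'                         # placeholder token from the shared Keystone

    resp = requests.get('%s/%s/servers' % (TRIO2O_NOVA_URL, TENANT_ID),
                        headers={'X-Auth-Token': TOKEN})
    print(resp.status_code, resp.json())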

The end user can see availability zones (AZs) and use an AZ to provision VMs and volumes through the Trio2o. One AZ can include many OpenStack instances; the Trio2o schedules and binds an OpenStack instance for the tenant inside the AZ. A tenant's resources can be bound automatically to multiple specific bottom OpenStack instances in one or multiple AZs.
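As a rough example of AZ-based provisioning (the gateway endpoint, AZ name, image, flavor, and network IDs below are all assumptions), booting a VM through the Trio2o looks exactly like booting one against a plain Nova endpoint::

    # Boot a VM via the Trio2o Nova gateway; the availability zone tells the
    # Trio2o in which group of OpenStack instances to schedule and bind the tenant.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='http://keystone-host:5000/v3',  # assumed shared Keystone
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    nova = nova_client.Client(
        '2.1', session=session.Session(auth=auth),
        endpoint_override='http://trio2o-host:19998/v2.1')       # hypothetical gateway URL

    server = nova.servers.create(
        name='demo-vm',
        image='<image-id>', flavor='<flavor-id>',                # placeholder IDs
        availability_zone='az1',                                 # assumed AZ name
        nics=[{'net-id': '<network-id>'}])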