From c1c78e3c791758b173bb690f42d5279cbaf25fa3 Mon Sep 17 00:00:00 2001 From: daz Date: Wed, 23 Sep 2015 16:26:53 +1000 Subject: [PATCH] Revise the Upgrade chapter 1. Removed duplicated upgrade content 2. Reorganized content 3. Modified upgrade process to be more generic and applicable to operators who are the target audience Change-Id: Ib216d132f4c0e2ee21eacef14bf60b2fe1f12073 Closes-Bug: #1496195 --- doc/openstack-ops/ch_ops_upgrades.xml | 2874 ++----------------------- 1 file changed, 221 insertions(+), 2653 deletions(-) diff --git a/doc/openstack-ops/ch_ops_upgrades.xml b/doc/openstack-ops/ch_ops_upgrades.xml index 50a98371..74681446 100644 --- a/doc/openstack-ops/ch_ops_upgrades.xml +++ b/doc/openstack-ops/ch_ops_upgrades.xml @@ -11,14 +11,74 @@ With the exception of Object Storage, upgrading from one version of OpenStack to another can take a great deal of effort. - Until the situation improves, this chapter provides some guidance - on the operational aspects that you should consider for performing - an upgrade based on detailed steps for a basic + This chapter provides some guidance on the operational aspects + that you should consider for performing an upgrade for a basic architecture. -
- Pre-Upgrade Testing Environment +
+ Pre-upgrade considerations +
+ Upgrade planning
+
+ Thoroughly review the
+ release notes to learn about new, updated, and deprecated features,
+ and to find incompatibilities between versions.
+
+ Consider the impact of an upgrade on your users. The upgrade process
+ interrupts management of your environment, including the dashboard.
+ If you properly prepare for the upgrade, existing instances, networking,
+ and storage should continue to operate. However, instances might experience
+ intermittent network interruptions.
+
+ Consider the approach to upgrading your environment. You can perform
+ an upgrade with operational instances, but this is a dangerous approach.
+ You might consider using live migration to temporarily relocate instances
+ to other compute nodes while performing upgrades. However, you must
+ ensure database consistency throughout the process; otherwise your
+ environment might become unstable. Also, don't forget to provide
+ sufficient notice to your users, including giving them plenty of
+ time to perform their own backups.
+
+ Consider adopting structure and options from the new release's service
+ configuration files and merging them with your existing configuration
+ files. The
+ OpenStack Configuration Reference
+ contains new, updated, and deprecated options for most
+ services.
+
+ Like all major system upgrades, your upgrade could fail for
+ one or more reasons. You should prepare for this situation by
+ having the ability to roll back your environment to the previous
+ release, including databases, configuration files, and packages
+ (a backup sketch follows this list).
+ We provide an example process for rolling back your environment in
+ .
+ upgrading
+ process overview
+ rollbacks
+ preparing for
+ upgrading
+ preparation for
+
+ Develop an upgrade procedure and assess it thoroughly by
+ using a test environment similar to your production
+ environment.
+
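Where the rollback item above mentions databases and configuration files, a minimal pre-upgrade backup sketch might look like the following. It is only an illustration: the backup directory names are arbitrary, and the mysqldump command assumes MySQL or MariaDB is the database server, as elsewhere in this guide.
# for i in keystone glance nova neutron cinder openstack-dashboard; \
  do cp -r /etc/$i $i-backup; \
  done
# mysqldump -u root -p --opt --add-drop-database --all-databases > pre-upgrade-db-backup.sql
Confirm that the dump completes without errors, and store both the configuration copies and the database dump off the nodes being upgraded.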
+
+ Pre-upgrade testing environment The most important step is the pre-upgrade testing. If you are upgrading immediately after release of a new version, undiscovered bugs might hinder your progress. Some deployers @@ -30,17 +90,15 @@ pre-upgrade testing - Even if you have what seems to be a near-identical - architecture as the one described in this guide, each OpenStack - cloud is different. As a result, you must still test upgrades - between versions in your environment. For this, you need an + Each OpenStack cloud is different even if you have a near-identical + architecture as described in this guide. As a result, you must still + test upgrades between versions in your environment using an approximate clone of your environment. However, that is not to say that it needs to be the same - size or use identical hardware as the production environment—few - of us have that luxury. It is important to consider the hardware - and scale of the cloud that you are upgrading, but these tips - can help you avoid that incredible cost: upgrading controlling cost of @@ -49,27 +107,22 @@ Use your own cloud - The simplest place to start testing the next version of OpenStack is by setting up a new environment inside - your own cloud. This might seem odd—especially the double - virtualization used in running compute nodes—but it's a + your own cloud. This might seem odd, especially the double + virtualization used in running compute nodes. But it is a sure way to very quickly test your configuration. - Use a public cloud - Especially because your own cloud is unlikely to have - sufficient space to scale test to the level of the entire - cloud, consider using a public cloud to test the - scalability limits of your cloud controller configuration. - Most public clouds bill by the hour, which means it can be - inexpensive to perform even a test with many - nodes. + Consider using a public cloud to test the scalability + limits of your cloud controller configuration. Most public + clouds bill by the hour, which means it can be inexpensive + to perform even a test with many nodes. cloud controllers scalability and @@ -81,10 +134,10 @@ If you use an external storage plug-in or shared file - system with your cloud, in many cases, you can test - whether it works by creating a second share or endpoint. - This action enables you to test the system before - entrusting the new version onto your storage. + system with your cloud, you can test whether it works by + creating a second share or endpoint. This allows you to + test the system before entrusting the new version on to your + storage. @@ -121,170 +174,40 @@ Either approach is valid. Use the approach that matches your experience. An upgrade pre-testing system is excellent for getting the - configuration to work; however, it is important to note that the + configuration to work. However, it is important to note that the historical use of the system and differences in user interaction - can affect the success of upgrades, too. We've seen experiences - where database migrations encountered a bug (later fixed!) - because of slight table differences between fresh - installs and those that migrated from one version to another. - If possible, we highly recommended that you dump your + can affect the success of upgrades. + If possible, we highly recommend that you dump your production database tables and test the upgrade in your development - environment using this data. 
As stated above, several MySQL
- bugs have been uncovered during database migrations that will
- only be hit on large real datasets. You do not want to find this
- out in the middle of a production outage.
+ environment using this data. Several MySQL bugs have been uncovered
+ during database migrations because of slight table differences between
+ a fresh installation and tables that migrated from one version to another.
+ These bugs tend to surface only on large, real datasets, which you do not
+ want to discover in the middle of a production outage.
 Artificial scale testing can go only so far. After your
 cloud is upgraded, you must pay careful attention to the
 performance aspects of your cloud.
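As a sketch of the production-data approach recommended above, you might dump only the OpenStack service databases from production and load the dump into the database server of your test environment before rehearsing the upgrade there. The database names listed are the defaults used in this guide; adjust them to match your deployment:
# mysqldump -u root -p --opt --add-drop-database \
  --databases keystone glance nova cinder neutron > prod-data-snapshot.sql
Copy the file to the test environment's database server and load it:
# mysql -u root -p < prod-data-snapshot.sql
Running each service's database migration command (for example, nova-manage db sync) against this copy exposes migration problems before they can affect production.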
-
- Preparing for a Rollback - - Like all major system upgrades, your upgrade could fail for - one or more difficult-to-determine reasons. You should prepare - for this situation by leaving the ability to roll back your - environment to the previous release, including databases, - configuration files, and packages. We provide an example process - for rolling back your environment in . - upgrading - process overview - - rollbacks - preparing for - - upgrading - preparation for - -
- -
- Upgrades - - The upgrade process generally follows these steps: - - - - Perform some "cleaning" of the environment prior to - starting the upgrade process to ensure a consistent state. - For example, instances not fully purged from the system - after deletion might cause indeterminate behavior. - - - - Read the release notes and documentation. - - - - Find incompatibilities between your versions. - - - - Develop an upgrade procedure and assess it thoroughly by - using a test environment similar to your production - environment. - - - - Make a full database backup of your production data. As of - Kilo, database downgrades are not supported, and the only method - available to get back to a prior database version will be to restore - from backup. - - - - Run the upgrade procedure on the production - environment. - - - - You can perform an upgrade with operational instances, but - this strategy can be dangerous. You might consider using live - migration to temporarily relocate instances to other compute - nodes while performing upgrades. However, you must ensure - database consistency throughout the process; otherwise your - environment might become unstable. Also, don't forget to provide - sufficient notice to your users, including giving them plenty of - time to perform their own backups. - - The following order for service upgrades seems the most - successful: - - - - Upgrade OpenStack Identity. - - - - Upgrade the OpenStack Image service. - - - - Upgrade OpenStack Compute, including networking - components. - - - - Upgrade OpenStack Block Storage. - - - - Upgrade the OpenStack dashboard. - - - - The general upgrade process includes the following - steps: - - - - Create a backup of configuration files and - databases. - - - - Update the configuration files according to the release - notes. - - - - Upgrade the packages by using your distribution's - package manager. - - - - Stop services, update database schemas, and restart - services. - - - - Verify proper operation of your environment. - - -
-
Upgrade Levels
- Upgrade levels are a feature added to OpenStack Compute in the
+ Upgrade levels are a feature available in OpenStack Compute since the
 Grizzly release to provide version locking on the RPC
 (Message Queue) communications between the various Compute
 services.
 This functionality is an important piece of the puzzle when
 it comes to live upgrades and is conceptually similar to the
 existing API versioning that allows OpenStack services of
- different versions to communicate without issue, for example Grizzly
- Compute can still make Grizzly Identity API calls even if Identity
- is running Icehouse.
+ different versions to communicate without issue.
 Without upgrade levels, an X+1 version Compute service can
 receive and understand X version RPC messages, but it can
 only send out X+1 version RPC messages. For example, if a
 nova-conductor
- process has been upgraded to Icehouse, then the conductor service
- will be able to understand messages from Havana
+ process has been upgraded to X+1 version, then the conductor service
+ will be able to understand messages from X version
 nova-compute
 processes, but those compute services will not be able to
 understand messages sent by the conductor service.
 During an upgrade, operators can add configuration options to
 nova.conf which lock the version of RPC
 messages and allow live upgrading of the services without
 interruption caused by version mismatch. The configuration
 options allow the specification of RPC version numbers if
 desired, but release name aliases are also supported. For example:
 [upgrade_levels]
-compute=havana
-conductor=havana
-scheduler=havana
+compute=X+1
+conductor=X+1
+scheduler=X+1
 will keep the RPC version locked across the specified services
- to the RPC version used in Havana. As all instances of a particular
+ to the RPC version used in X+1. As all instances of a particular
 service are upgraded to the newer version, the corresponding line
 can be removed from nova.conf.
 Using this functionality, ideally one would lock the RPC version
 to the OpenStack version being upgraded from on
 nova-compute nodes, to
- ensure that, for example Havana
+ ensure that, for example, X+1 version
 nova-compute
- processes will continue to work with Grizzly
+ processes will continue to work with X version
 nova-conductor
 processes while the upgrade completes. Once the upgrade of
 nova-compute
 processes is complete, the operator can move on to upgrading
 nova-conductor and remove the version locking for
 nova-compute in
 nova.conf.
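As a sketch of pinning and later releasing the lock, and assuming the openstack-config utility used elsewhere in this guide is available, you could manage the compute RPC version on a node as follows, where RELEASE is the release name alias of the version you are upgrading from:
# openstack-config --set /etc/nova/nova.conf upgrade_levels compute RELEASE
After every nova-compute process has been upgraded and verified, remove the lock and restart the affected services so that the newer RPC version takes effect:
# openstack-config --del /etc/nova/nova.conf upgrade_levels compute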
- -
- How to Perform an Upgrade from Grizzly to - Havana—Ubuntu - - - - For this section, we assume that you are starting with the - architecture provided in the OpenStack OpenStack - Installation Guide and upgrading to the - same architecture for Havana. All nodes should run Ubuntu 12.04 - LTS. This section primarily addresses upgrading core OpenStack - services, such as Identity, Image service, Compute including networking, - Block Storage, and the dashboard. - upgrading - Grizzly to Havana (Ubuntu) - - -
- Impact on Users - - The upgrade process interrupts management of your - environment, including the dashboard. If you properly prepare - for this upgrade, tenant instances continue to operate - normally. -
- -
- Upgrade Considerations - - Always review the release notes - before performing an upgrade to learn about newly available - features that you might want to enable and deprecated features - that you should disable. -
- -
- Perform a Backup - - Save the configuration files on all nodes, as shown - here: - # for i in keystone glance nova cinder openstack-dashboard; \ - do mkdir $i-grizzly; \ - done -# for i in keystone glance nova cinder openstack-dashboard; \ - do cp -r /etc/$i/* $i-grizzly/; \ - done - - You can modify this example script on each node to - handle different services. - - - Back up all databases on the controller: - - # mysqldump -u root -p --opt --add-drop-database \ ---all-databases > grizzly-db-backup.sql -
- -
- Manage Repositories - - On all nodes, remove the repository for Grizzly packages - and add the repository for Havana packages: - - # apt-add-repository -r cloud-archive:grizzly -# apt-add-repository cloud-archive:havana - - - Make sure any automatic updates are disabled. - -
- -
- Update Configuration Files - - Update the glance configuration on the controller node for - compatibility with Havana. - - Add or modify the following keys in the - /etc/glance/glance-api.conf and - /etc/glance/glance-registry.conf - files: - - [keystone_authtoken] -auth_uri = http://controller:5000 -auth_host = controller -admin_tenant_name = service -admin_user = glance -admin_password = GLANCE_PASS - -[paste_deploy] -flavor = keystone - - If currently present, remove the following key from the - [filter:authtoken] section in the - /etc/glance/glance-api-paste.ini and - /etc/glance/glance-registry-paste.ini - files: - - [filter:authtoken] -flavor = keystone - - Update the nova configuration on all nodes for - compatibility with Havana. - - Add the [database] section and - associated key to the /etc/nova/nova.conf - file: - - [database] -connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - Remove defunct configuration from the - [DEFAULT] section in the - /etc/nova/nova.conf file: - - [DEFAULT] -sql_connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - Add or modify the following keys in the - /etc/nova/nova.conf file: - - [keystone_authtoken] -auth_uri = http://controller:5000/v2.0 -auth_host = controller -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = nova -admin_password = NOVA_PASS - - On all compute nodes, increase the DHCP lease time - (measured in seconds) in the - /etc/nova/nova.conf file to enable - currently active instances to continue leasing their IP - addresses during the upgrade process: - - [DEFAULT] -dhcp_lease_time = 86400 - - - - Setting this value too high might cause more dynamic - environments to run out of available IP addresses. Use an - appropriate value for your environment. - - - You must restart dnsmasq and the networking component of - Compute to enable the new DHCP lease time: - - # pkill -9 dnsmasq -# service nova-network restart - - Update the Cinder configuration on the controller and - storage nodes for compatibility with Havana. - - Add or modify the following key in the - /etc/cinder/cinder.conf file: - - [keystone_authtoken] -auth_uri = http://controller:5000 - - Update the dashboard configuration on the controller node - for compatibility with Havana. - - The dashboard installation procedure and configuration - file changed substantially between Grizzly and Havana. - Particularly, if you are running Django 1.5 or later, you must - ensure that - /etc/openstack-dashboard/local_settings - contains a correctly configured - key that contains a list of host names recognized by the - dashboard. - - If users access your dashboard by using - http://dashboard.example.com, define - , as follows: - - ALLOWED_HOSTS=['dashboard.example.com'] - - If users access your dashboard on the local system, define - , as follows: - - ALLOWED_HOSTS=['localhost'] - - If users access your dashboard by using an IP address in - addition to a host name, define - , as follows: - - ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200'] -
- -
- Upgrade Packages on the Controller Node - - Upgrade packages on the controller node to Havana, as - follows: - - # apt-get update -# apt-get dist-upgrade - - - Depending on your specific configuration, performing a - dist-upgrade might restart services - supplemental to your OpenStack environment. For example, if - you use Open-iSCSI for Block Storage volumes and the upgrade - includes a new open-scsi package, the package - manager restarts Open-iSCSI services, which might cause the - volumes for your users to be disconnected. - - - The package manager prompts you to update various - configuration files. Reject these changes. The package manager - appends .dpkg-dist to the newer versions - of existing configuration files. You should consider adopting - conventions associated with the newer configuration files and - merging them with your existing configuration files after - completing the upgrade process. -
- -
- Stop Services, Update Database Schemas, and Restart - Services on the Controller Node - - Stop each service, run the database synchronization - command if necessary to update the associated database schema, - and restart each service to apply the new configuration. Some - services require additional commands: - - - - OpenStack Identity - - - # service keystone stop -# keystone-manage token_flush -# keystone-manage db_sync -# service keystone start - - - - - OpenStack Image service - - - # service glance-api stop -# service glance-registry stop -# glance-manage db_sync -# service glance-api start -# service glance-registry start - - - - - OpenStack Compute - - - # service nova-api stop -# service nova-scheduler stop -# service nova-conductor stop -# service nova-cert stop -# service nova-consoleauth stop -# service nova-novncproxy stop -# nova-manage db sync -# service nova-api start -# service nova-scheduler start -# service nova-conductor start -# service nova-cert start -# service nova-consoleauth start -# service nova-novncproxy start - - - - - OpenStack Block Storage - - - # service cinder-api stop -# service cinder-scheduler stop -# cinder-manage db sync -# service cinder-api start -# service cinder-scheduler start - - - - - The controller node update is complete. Now you can - upgrade the compute nodes. -
- -
- Upgrade Packages and Restart Services on the Compute - Nodes - - Upgrade packages on the compute nodes to Havana: - - # apt-get update -# apt-get dist-upgrade - - - Make sure you have removed the repository for Grizzly - packages and added the repository for Havana - packages. - - - - Due to a packaging issue, this command might fail with - the following error: - - Errors were encountered while processing: - /var/cache/apt/archives/ - qemu-utils_1.5.0+dfsg-3ubuntu5~cloud0_amd64.deb - /var/cache/apt/archives/ - qemu-system-common_1.5.0+dfsg-3ubuntu5~cloud0_ - amd64.deb - E: Sub-process /usr/bin/dpkg - returned an error code (1) - - Fix this issue by running this command: - - # apt-get -f install - - - The packaging system prompts you to update the - /etc/nova/api-paste.ini file. As with - the controller upgrade, we recommend that you reject these - changes and review the .dpkg-dist file - after the upgrade process completes. - - To restart compute services: - - # service nova-compute restart -# service nova-network restart -# service nova-api-metadata restart -
- -
- Upgrade Packages and Restart Services on the Block - Storage Nodes - - Upgrade packages on the storage nodes to Havana: - - # apt-get update -# apt-get dist-upgrade - - - Make sure you have removed the repository for Grizzly - packages and added the repository for Havana - packages. - - - The packaging system prompts you to update the - /etc/cinder/api-paste.ini file. Like - the controller upgrade, reject these changes and review the - .dpkg-dist file after the the upgrade - process completes. - - - - To restart Block Storage services: - - # service cinder-volume restart -
-
- -
- How to Perform an Upgrade from Grizzly to Havana—Red Hat - Enterprise Linux and Derivatives - - - - For this section, we assume that you are starting with the - architecture provided in the OpenStack OpenStack - Installation Guide and upgrading to the - same architecture for Havana. All nodes should run Red Hat - Enterprise Linux 6.4 or compatible derivatives. Newer minor - releases should also work. This section primarily addresses - upgrading core OpenStack services, such as the Identity, - Image service, Compute including networking, Block Storage, - and the dashboard. - upgrading - Grizzly to Havana (Red Hat) - - -
- Impact on Users - - The upgrade process interrupts management of your - environment, including the dashboard. If you properly prepare - for this upgrade, tenant instances continue to operate - normally. -
- -
- Upgrade Considerations - - Always review the release notes - before performing an upgrade to learn about newly available - features that you might want to enable and deprecated features - that you should disable. -
- -
- Perform a Backup - - First, save the configuration files on all nodes: - # for i in keystone glance nova cinder openstack-dashboard; \ - do mkdir $i-grizzly; \ - done -# for i in keystone glance nova cinder openstack-dashboard; \ - do cp -r /etc/$i/* $i-grizzly/; \ - done - - You can modify this example script on each node to - handle different services. - - - Next, back up all databases on the controller: - - # mysqldump -u root -p --opt --add-drop-database \ - --all-databases > grizzly-db-backup.sql -
- -
- Manage Repositories - - On all nodes, remove the repository for Grizzly packages - and add the repository for Havana packages: - - # yum erase rdo-release-grizzly - # yum install \ - https://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/ \ - rdo-release-havana-7.noarch.rpm - - - Make sure any automatic updates are disabled. - - - - Consider checking for newer versions of the Havana - repository. - -
- -
- Update Configuration Files - - Update the glance configuration on the controller node for - compatibility with Havana. - - Add or modify the following keys in the - /etc/glance/glance-api.conf and - /etc/glance/glance-registry.conf - files: - - # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ - admin_user glance -# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken \ - admin_password GLANCE_PASS -# openstack-config --set /etc/glance/glance-api.conf paste_deploy \ - flavor keystone - - # openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ - auth_uri http://controller:5000 -# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ - admin_user glance -# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken \ - admin_password GLANCE_PASS -# openstack-config --set /etc/glance/glance-registry.conf paste_deploy \ - flavor keystone - - If currently present, remove the following key from the - [filter:authtoken] section in the - /etc/glance/glance-api-paste.ini and - /etc/glance/glance-registry-paste.ini - files: - - [filter:authtoken] -flavor = keystone - - Update the nova configuration on all nodes for - compatibility with Havana. - - Add the [database] section and - associated key to the /etc/nova/nova.conf - file: - - # openstack-config --set /etc/nova/nova.conf database \ - connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - Remove defunct database configuration from the - /etc/nova/nova.conf file: - - # openstack-config --del /etc/nova/nova.conf DEFAULT sql_connection - - Add or modify the following keys in the - /etc/nova/nova.conf file: - - # openstack-config --set /etc/nova/nova.conf keystone_authtoken \ - auth_uri http://controller:5000/v2.0 -# openstack-config --set /etc/nova/nova.conf keystone_authtoken \ - auth_host controller -# openstack-config --set /etc/nova/nova.conf keystone_authtoken \ - admin_tenant_name service -# openstack-config --set /etc/nova/nova.conf keystone_authtoken \ - admin_user nova -# openstack-config --set /etc/nova/nova.conf keystone_authtoken \ - admin_password NOVA_PASS - - On all compute nodes, increase the DHCP lease time - (measured in seconds) in the - /etc/nova/nova.conf file to enable - currently active instances to continue leasing their IP - addresses during the upgrade process, as follows: - - # openstack-config --set /etc/nova/nova.conf DEFAULT \ - dhcp_lease_time 86400 - - - Setting this value too high might cause more dynamic - environments to run out of available IP addresses. Use an - appropriate value for your environment. - - - - - You must restart dnsmasq and the nova networking service - to enable the new DHCP lease time: - - # pkill -9 dnsmasq -# service openstack-nova-network restart - - Update the cinder configuration on the controller and - storage nodes for compatibility with Havana. 
- - Add the [database] section and - associated key to the - /etc/cinder/cinder.conf file: - - # openstack-config --set /etc/cinder/cinder.conf database \ - connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - Remove defunct database configuration from the - /etc/cinder/cinder.conf file: - - # openstack-config --del /etc/cinder/cinder.conf DEFAULT sql_connection - - Add or modify the following key in the - /etc/cinder/cinder.conf file: - - # openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \ - auth_uri http://controller:5000 - - Update the dashboard configuration on the controller node - for compatibility with Havana. - - The dashboard installation procedure and configuration - file changed substantially between Grizzly and Havana. - Particularly, if you are running Django 1.5 or later, you must - ensure that the - /etc/openstack-dashboard/local_settings - file contains a correctly configured - key that contains a list of - host names recognized by the dashboard. - - If users access your dashboard by using - http://dashboard.example.com, define - , as follows: - - ALLOWED_HOSTS=['dashboard.example.com'] - - If users access your dashboard on the local system, define - , as follows: - - ALLOWED_HOSTS=['localhost'] - - If users access your dashboard by using an IP address in - addition to a host name, define - , as follows: - - ALLOWED_HOSTS=['dashboard.example.com', '192.168.122.200'] -
- -
- Upgrade Packages on the Controller Node - - Upgrade packages on the controller node to Havana: - - # yum upgrade - - - Some services might terminate with an error during the - package upgrade process. If this error might cause a problem - with your environment, consider stopping all services before - upgrading them to Havana. - - - Install the OpenStack SELinux package on the controller - node: - - # yum install openstack-selinux - - - The package manager appends .rpmnew - to the end of newer versions of existing configuration - files. You should consider adopting conventions associated - with the newer configuration files and merging them with - your existing configuration files after completing the - upgrade process. - -
- -
- Stop Services, Update Database Schemas, and Restart - Services on the Controller Node - - Stop each service, run the database synchronization - command if necessary to update the associated database schema, - and restart each service to apply the new configuration. Some - services require additional commands: - - - - OpenStack Identity - - - # service openstack-keystone stop -# keystone-manage token_flush -# keystone-manage db_sync -# service openstack-keystone start - - - - - OpenStack Image service - - - # service openstack-glance-api stop -# service openstack-glance-registry stop -# glance-manage db_sync -# service openstack-glance-api start -# service openstack-glance-registry start - - - - - OpenStack Compute - - - # service openstack-nova-api stop -# service openstack-nova-scheduler stop -# service openstack-nova-conductor stop -# service openstack-nova-cert stop -# service openstack-nova-consoleauth stop -# service openstack-nova-novncproxy stop -# nova-manage db sync -# service openstack-nova-api start -# service openstack-nova-scheduler start -# service openstack-nova-conductor start -# service openstack-nova-cert start -# service openstack-nova-consoleauth start -# service openstack-nova-novncproxy start - - - - - OpenStack Block Storage - - - # service openstack-cinder-api stop -# service openstack-cinder-scheduler stop -# cinder-manage db sync -# service openstack-cinder-api start -# service openstack-cinder-scheduler start - - - - - The controller node update is complete. Now you can - upgrade the compute nodes. -
- -
- Upgrade Packages and Restart Services on the Compute - Nodes - - Upgrade packages on the compute nodes to Havana: - - # yum upgrade - - - Make sure you have removed the repository for Grizzly - packages and added the repository for Havana - packages. - - - Install the OpenStack SELinux package on the compute - nodes: - - # yum install openstack-selinux - - Restart compute services: - - # service openstack-nova-compute restart -# service openstack-nova-network restart -# service openstack-nova-metadata-api restart -
- -
- Upgrade Packages and Restart Services on the Block - Storage Nodes - - Upgrade packages on the storage nodes to Havana: - - # yum upgrade - - - Make sure you have removed the repository for Grizzly - packages and added the repository for Havana - packages. - - - Install the OpenStack SELinux package on the storage - nodes: - - # yum install openstack-selinux - - Restart Block Storage services: - - # service openstack-cinder-volume restart -
-
-
- How to Perform an Upgrade from Havana to - Icehouse—Ubuntu - - For this section, we assume that you are starting with the - architecture provided in the OpenStack Installation Guide - and upgrading to the same architecture for Icehouse. All nodes - should run Ubuntu 12.04 LTS with Linux kernel 3.11 and the - latest Havana packages installed and operational. This section - primarily addresses upgrading core OpenStack services such as - Identity (keystone), Image service (glance), Compute (nova), - Networking (neutron), Block Storage (cinder), and the dashboard. - The Networking upgrade includes conversion from the Open vSwitch - (OVS) plug-in to the Modular Layer 2 (M2) plug-in. This section - does not cover the upgrade process from Ubuntu 12.04 LTS to - Ubuntu 14.04 LTS. -
- Impact on Users - The upgrade process interrupts management of your - environment, including the dashboard. If you properly prepare - for this upgrade, tenant instances should continue to operate - normally. However, instances might experience intermittent - network interruptions while the Networking service rebuilds - virtual networking infrastructure. -
-
- Upgrade Considerations - - - Review the Icehouse Release Notes before you upgrade to learn - about new features that you might want to enable and - deprecated features that you should disable. - - - Consider adopting conventions associated with newer - configuration files and merging them with your existing - configuration files after completing the upgrade process. - - - Icehouse disables file injection by default per the - Icehouse Release Notes. - If you plan to deploy Icehouse in stages, you must - disable file injection on all compute nodes that remain on - Havana. This is done by editing the /etc/nova/nova-compute.conf - file: - [libvirt] -... -libvirt_inject_partition = -2 - - - You must convert the configuration for your - environment contained in the - /etc/neutron/neutron.conf and - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - files from OVS to ML2. For example, the OpenStack Installation - Guide covers ML2 plug-in configuration using GRE - tunnels. - Keep the OVS plug-in packages and configuration files - until you verify the upgrade. - - -
-
- Perform a Backup - - - Save the configuration files on all nodes: - # for i in keystone glance nova cinder neutron openstack-dashboard; \ -do mkdir $i-havana; \ -done -# for i in keystone glance nova cinder neutron openstack-dashboard; \ -do cp -r /etc/$i/* $i-havana/; \ -done - - You can modify this example script on each node to - handle different services. - - - - Back up all databases on the controller: - # mysqldump -u root -p --opt --add-drop-database --all-databases > havana-db-backup.sql - - Although not necessary, you should consider updating - your MySQL server configuration as described in the MySQL controller setup section of the OpenStack Installation - Guide. - - - -
-
- Manage Repositories - - Complete the following actions on all nodes. - - Remove the repository for Havana packages: - # apt-add-repository -r cloud-archive:havana - - - Add the repository for Icehouse packages: - # apt-add-repository cloud-archive:icehouse - - - Disable any automatic package updates. - - -
-
- Upgrade the Controller Node - - - Upgrade packages on the controller node to Icehouse: - # apt-get update -# apt-get dist-upgrade - - Depending on your specific configuration, performing a - dist-upgrade might restart services - supplemental to your OpenStack environment. For example, if - you use Open-iSCSI for Block Storage volumes and the upgrade - includes a new open-scsi package, the package - manager restarts Open-iSCSI services, which might cause the - volumes for your users to be disconnected. - - - - When the package manager prompts you to update various - configuration files, reject the changes. The package manager - appends .dpkg-dist to newer versions - of the configuration files. To find newer versions of - configuration files, enter the following command: - # find /etc -name *.dpkg-dist - - -
-
- Upgrade Each Service - The upgrade procedure for each service generally requires - that you stop the service, run the database synchronization - command to update the associated database, and start the - service to apply the new configuration. You will need administrator - privileges to perform these procedures. Some services will require - additional steps. - - Upgrade OpenStack Identity - - Edit the /etc/keystone/keystone.conf - file for compatibility for Icehouse: - - Add the [database] section. - Move the key from - the[sql] section to the - [database] section. - - - Stop the services: - # service keystone stop - - Upgrade the database: - # keystone-manage token_flush -# keystone-manage db_sync - - Start the services. - # service keystone start - - - - Upgrade OpenStack Image service - Before upgrading the Image service database, you must - convert the character set for each table to UTF-8. - Use the MySQL client to execute the following - commands: - # mysql -u root -p -mysql> SET foreign_key_checks = 0; -mysql> ALTER TABLE glance.image_locations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_members CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_properties CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_tags CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.images CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.migrate_version CONVERT TO CHARACTER SET 'utf8'; -mysql> SET foreign_key_checks = 1; -mysql> exit - - Your environment might contain different or - additional tables that you must also convert to UTF-8 by - using similar commands. - - - - Edit the /etc/glance/glance-api.conf - and /etc/glance/glance-registry.conf - files for compatibility with Icehouse: - - Add the [database] section. - - Rename the key to - and move it to the - [database] section. - - - - In the /etc/glance/glance-api.conf - file, add RabbitMQ message broker keys to the - [DEFAULT] section. - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with - the password you chose for the guest - account in RabbitMQ. - - Stop the services: - # service glance-api stop -# service glance-registry stop - - Upgrade the database: - # glance-manage db_sync - - Start the services: - # service glance-api start -# service glance-registry start - - - - Upgrading OpenStack Compute - Edit the /etc/nova/nova.conf - file and change the key from - nova.rpc.impl_kombu to - rabbit. - Edit the /etc/nova/api-paste.ini - file and comment out or remove any keys in the - [filter:authtoken] section beneath the - paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory - statement. - - Stop the services: - # service nova-api stop -# service nova-scheduler stop -# service nova-conductor stop -# service nova-cert stop -# service nova-consoleauth stop -# service nova-novncproxy stop - - Upgrade the database: - # nova-manage db sync - - Start the services: - # service nova-api start -# service nova-scheduler start -# service nova-conductor start -# service nova-cert start -# service nova-consoleauth start -# service nova-novncproxy start - - - - Upgrade OpenStack Networking - Before upgrading the Networking database, you must - convert the character set for each table to UTF-8. 
- Use the MySQL client to execute the following - commands: - # mysql -u root -p -mysql> USE neutron; -mysql> SET foreign_key_checks = 0; -mysql> ALTER TABLE agents CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE alembic_version CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE allowedaddresspairs CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE dnsnameservers CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE externalnetworks CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE extradhcpopts CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE floatingips CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipallocationpools CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipallocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipavailabilityranges CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE networkdhcpagentbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE networks CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_network_bindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_tunnel_allocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_tunnel_endpoints CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_vlan_allocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE portbindingports CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ports CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE quotas CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routerl3agentbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routerroutes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routers CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygroupportbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygrouprules CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygroups CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE servicedefinitions CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE servicetypes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE subnetroutes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE subnets CONVERT TO CHARACTER SET 'utf8'; -mysql> SET foreign_key_checks = 1; -mysql> exit - - Your environment might use a different database name. - Also, it might contain different or additional tables that - you must also convert to UTF-8 by using similar - commands. - - - Populate the /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your - environment. - Do not edit the /etc/neutron/neutron.conf file - until after the conversion steps. - - - - Because the conversion script cannot roll back, - you must perform a database backup prior to executing - the following commands. - - Stop the service: - # service neutron-server stop - - Upgrade the database: - # neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana -# neutron-db-manage --config-file /etc/neutron/neutron.conf \ ---config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse - - Perform the conversion from OVS to ML2: - # python -m neutron.db.migration.migrate_to_ml2 openvswitch \ - - Replace NEUTRON_DBPASS with - the password you chose for the database. - mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - Edit the - /etc/neutron/neutron.conf file to - use the ML2 plug-in and enable network change - notifications: - [DEFAULT] -... -core_plugin = ml2 -service_plugins = router -... 
-notify_nova_on_port_status_changes = True -notify_nova_on_port_data_changes = True -nova_url = http://controller:8774/v2 -nova_admin_username = nova -nova_admin_tenant_id = SERVICE_TENANT_ID -nova_admin_password = NOVA_PASS -nova_admin_auth_url = http://controller:35357/v2.0 - - Replace SERVICE_TENANT_ID - with the service tenant identifier (id) in the Identity - service and NOVA_PASS with the - password you chose for the nova user in - the Identity service. - - Start Networking services: - # service neutron-server start - - - - Upgrade OpenStack Block Storage - Stop services: - # service cinder-api stop -# service cinder-volume stop -# service cinder-scheduler stop - - Upgrade the database: - # cinder-manage db sync - - Start services: - # service cinder-api start -# service cinder-volume start -# service cinder-scheduler start - - - - Update Dashboard - Edit the /etc/openstack-dashboard/local_settings.py - file, and change the - key from "Member" to - "_member_". - - Restart Dashboard services: - # service apache2 restart - - -
-
- Upgrade the Network Node - - Upgrade packages on the network node to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # apt-get update -# apt-get dist-upgrade - - Edit the /etc/neutron/neutron.conf - file to use the ML2 plug-in: - [DEFAULT] -core_plugin = ml2 -service_plugins = router - - Populate the /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your environment. - - Clean the active OVS configuration: - # service neutron-ovs-cleanup restart - - Restart Networking services: - # service neutron-dhcp-agent restart -# service neutron-l3-agent restart -# service neutron-metadata-agent restart -# service neutron-plugin-openvswitch-agent restart - - -
-
- Upgrade the Compute Nodes - - Upgrade packages on the compute nodes to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # apt-get update -# apt-get dist-upgrade - - Edit the /etc/neutron/neutron.conf - file to use the ML2 plug-in: - [DEFAULT] -core_plugin = ml2 -service_plugins = router - - Populate the /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your - environment. - - Clean the active OVS configuration: - # service neutron-ovs-cleanup restart - - Restart Networking services: - # service neutron-plugin-openvswitch-agent restart - - Restart Compute services: - # service nova-compute restart - - -
-
- Upgrade the Storage Nodes - - Upgrade packages on the storage nodes to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # apt-get update -# apt-get dist-upgrade - - Restart Block Storage services. - # service cinder-volume restart - - -
- -
- How to Perform an Upgrade from Havana to Icehouse—Red Hat - Enterprise Linux and Derivatives +
+ Upgrade process - For this section, we assume that you are starting with the - architecture provided in the OpenStack OpenStack Installation Guide - and upgrading to the same architecture for Icehouse. All nodes - should run Red Hat Enterprise Linux 6.5 or compatible - derivatives such as CentOS and Scientific Linux with the latest - Havana packages installed and operational. This section - primarily addresses upgrading core OpenStack services such as - Identity (keystone), Image service (glance), Compute (nova), - Networking (neutron), Block Storage (cinder), and the dashboard. - The Networking upgrade procedure includes conversion from the - Open vSwitch (OVS) plug-in to the Modular Layer 2 (ML2) - plug-in. -
- Impact on Users - The upgrade process interrupts management of your - environment, including the dashboard. If you properly prepare - for this upgrade, tenant instances continue to operate - normally. However, instances might experience intermittent - network interruptions while the Networking service rebuilds - virtual networking infrastructure. -
-
- Upgrade Considerations - - - Review the Icehouse Release Notes before you upgrade to learn - about new features that you might want to enable and - deprecated features that you should disable. - - - Consider adopting conventions associated with newer - configuration files and merging them with your existing - configuration files after completing the upgrade process. - You can find newer versions of existing configuration files - with the following command: - # find /etc -name *.rpmnew - - - Icehouse disables file injection by default per the - Icehouse Release Notes. - If you plan to deploy Icehouse in stages, you must - disable file injection on all compute nodes that remain on - Havana. This is done by editing the /etc/nova/nova-compute.conf - file: - [libvirt] - ... - libvirt_inject_partition = -2 - - - You must convert the configuration for your - environment contained in the - /etc/neutron/neutron.conf and - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - files from OVS to ML2. For example, the OpenStack Installation - Guide covers ML2 plug-in configuration using GRE - tunnels. - Keep the OVS plug-in packages and configuration files - until you verify the upgrade. - - -
-
- Perform a Backup - - - Save the configuration files on all nodes: - # for i in keystone glance nova cinder neutron openstack-dashboard; \ -do mkdir $i-havana; \ -done -# for i in keystone glance nova cinder neutron openstack-dashboard; \ - do cp -r /etc/$i/* $i-havana/; \ - done - - You can modify this example script on each node to - handle different services. - - - - Back up all databases on the controller: - # mysqldump -u root -p --opt --add-drop-database --all-databases > havana-db-backup.sql - - You must update your MySQL server configuration and - restart the service as described in the MySQL controller setup section of the OpenStack Installation - Guide. - - - -
-
- Manage Repositories - - Complete the following actions on all nodes. - - Remove the repository for Havana packages: - # yum erase rdo-release-havana - - - Add the repository for Icehouse packages: - # yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/ \ - rdo-release-icehouse-3.noarch.rpm - - - Disable any automatic package updates. - - You should check for newer versions of the Icehouse repository. - - - -
-
- Upgrade the Controller Node - - - Upgrade packages on the controller node to Icehouse: - # yum upgrade - - The package manager appends .rpmnew - to the end of newer versions of existing configuration - files. - - - -
-
- Upgrade Each Service - The upgrade procedure for each service typically requires - that you stop the service, run the database synchronization - command to update the associated database, and start the - service to apply the new configuration. You will need administrator - privileges for these procedures. Some services will require - additional steps. - - Upgrade OpenStack Identity - - Edit the /etc/keystone/keystone.conf - file for compatibility for Icehouse: - - Add the [database] section. - Move the key from - the[sql] section to the - [database] section. - - - Stop the services: - # service openstack-keystone stop - # keystone-manage token_flush - - Upgrade the database: - # keystone-manage db_sync - - Start the services: - # service openstack-keystone start - - - - OpenStack Image service: - Before upgrading the Image service database, you must convert - the character set for each table to UTF-8. - Use the MySQL client to run the following - commands: - # mysql -u root -p -mysql> SET foreign_key_checks = 0; -mysql> ALTER TABLE glance.image_locations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_members CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_properties CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.image_tags CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.images CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE glance.migrate_version CONVERT TO CHARACTER SET 'utf8'; -mysql> SET foreign_key_checks = 1; -mysql> exit - - Your environment might contain different or - additional tables that you must convert to UTF-8 by - using similar commands. - - - - Edit the /etc/glance/glance-api.conf - and /etc/glance/glance-registry.conf - files for compatibility with Icehouse: - - Add the [database] section. - - Rename the key to - and move it to the - [database] section. - - - - Edit the - /etc/glance/glance-api.conf - file, and add the Qpid message broker keys to the - [DEFAULT] section: - [DEFAULT] -... -rpc_backend = qpid -qpid_hostname = controller - - Stop the services: - Stop services, upgrade the database, and start - services: - # service openstack-glance-api stop -# service openstack-glance-registry stop - - Upgrade the database: - # glance-manage db_sync - - Start the services: - # service openstack-glance-api start -# service openstack-glance-registry start - - - - Upgrading OpenStack Compute: - Edit the /etc/nova/nova.conf - file and change the key from - nova.openstack.common.rpc.impl_qpid - to qpid. - Edit the /etc/nova/api-paste.ini - file and comment out or remove any keys in the - [filter:authtoken] section beneath - the paste.filter_factory = - keystoneclient.middleware.auth_token:filter_factory - statement. - Stop the services: - # service openstack-nova-api stop -# service openstack-nova-scheduler stop -# service openstack-nova-conductor stop -# service openstack-nova-cert stop -# service openstack-nova-consoleauth stop -# service openstack-nova-novncproxy stop - - Upgrade the database: - # nova-manage db sync - - Start the services: - # service openstack-nova-api start -# service openstack-nova-scheduler start -# service openstack-nova-conductor start -# service openstack-nova-cert start -# service openstack-nova-consoleauth start -# service openstack-nova-novncproxy start - - - - Upgrade OpenStack Networking - Before upgrading the Networking database, you must - convert the character set for each table to UTF-8. 
- Use the MySQL client to execute the following - commands: - # mysql -u root -p -mysql> USE neutron; -mysql> SET foreign_key_checks = 0; -mysql> ALTER TABLE agents CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE alembic_version CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE allowedaddresspairs CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE dnsnameservers CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE externalnetworks CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE extradhcpopts CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE floatingips CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipallocationpools CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipallocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ipavailabilityranges CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE networkdhcpagentbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE networks CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_network_bindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_tunnel_allocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_tunnel_endpoints CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ovs_vlan_allocations CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE portbindingports CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE ports CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE quotas CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routerl3agentbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routerroutes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE routers CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygroupportbindings CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygrouprules CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE securitygroups CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE servicedefinitions CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE servicetypes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE subnetroutes CONVERT TO CHARACTER SET 'utf8'; -mysql> ALTER TABLE subnets CONVERT TO CHARACTER SET 'utf8'; -mysql> SET foreign_key_checks = 1; -mysql> exit - - Your environment might use a different database name. - Also, it might contain different or additional tables that - you must also convert to UTF-8 by using similar - commands. - - - Install the ML2 plug-in package: - # yum install openstack-neutron-ml2 - - Populate the - /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your - environment. - Do not edit the /etc/neutron/neutron.conf - file until after the conversion steps. - - Change the /etc/neutron/plugin.ini - symbolic link to reference - /etc/neutron/plugins/ml2/ml2_conf.ini. - Stop services: - # service neutron-server stop - - Upgrade the database: - # neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana -# neutron-db-manage --config-file /etc/neutron/neutron.conf \ ---config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse - Perform the conversion from OVS to ML2: - # python -m neutron.db.migration.migrate_to_ml2 openvswitch \ - - Replace NEUTRON_DBPASS with - the password you chose for the database. - mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - - - Because the conversion script cannot roll back, - you must perform a database backup prior to executing - the following commands. 
- - Stop the service: - # service neutron-server stop - - Upgrade the database: - # neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini stamp havana -# neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse - - Perform the conversion from OVS to ML2: - # python -m neutron.db.migration.migrate_to_ml2 openvswitch \ - mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - Edit the - /etc/neutron/neutron.conf file to - use the ML2 plug-in and enable network change - notifications: - [DEFAULT] -... -core_plugin = ml2 -service_plugins = router -... -notify_nova_on_port_status_changes = True -notify_nova_on_port_data_changes = True -nova_url = http://controller:8774/v2 -nova_admin_username = nova -nova_admin_tenant_id = SERVICE_TENANT_ID -nova_admin_password = NOVA_PASS -nova_admin_auth_url = http://controller:35357/v2.0 - - Replace SERVICE_TENANT_ID - with the service tenant identifier (id) in the Identity - service and NOVA_PASS with the - password you chose for the nova user in - the Identity service. - - Start Networking services. - # service neutron-server start - - - - Upgrade OpenStack Block Storage - Stop services: - # service openstack-cinder-api stop -# service openstack-cinder-volume stop -# service openstack-cinder-scheduler stop - - Upgrade the database: -# cinder-manage db sync - - Start services: - # service openstack-cinder-api start -# service openstack-cinder-volume start -# service openstack-cinder-scheduler start - - - - Update Dashboard - Edit the /etc/openstack-dashboard/local_settings - file and change the - key from "Member" to "_member_" - . - Restart Dashboard services: - # service httpd restart - - - The controller node update is complete. Now you can - upgrade the remaining nodes. -
-
- Upgrade the Network Node - - Upgrade packages on the network node to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # yum upgrade - - Install the ML2 plug-in package: - # yum install openstack-neutron-ml2 - - Edit the /etc/neutron/neutron.conf - file to use the ML2 plug-in: - [DEFAULT] -core_plugin = ml2 -service_plugins = router - - Populate the - /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your - environment. - Change the /etc/neutron/plugin.ini - symbolic link to reference - /etc/neutron/plugins/ml2/ml2_conf.ini - . - Clean the active OVS configuration: - # service neutron-ovs-cleanup restart - - Restart Networking services: - # service neutron-dhcp-agent restart -# service neutron-l3-agent restart -# service neutron-metadata-agent restart -# service neutron-openvswitch-agent restart - - -
-
- Upgrade the Compute Nodes - - Upgrade packages on the compute nodes to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # yum upgrade - - Install the ML2 plug-in package: - # yum install openstack-neutron-ml2 - - Edit the /etc/neutron/neutron.conf - file to use the ML2 plug-in: - [DEFAULT] -core_plugin = ml2 -service_plugins = router - - Populate the - /etc/neutron/plugins/ml2/ml2_conf.ini - file with the equivalent configuration for your - environment. - - Change the /etc/neutron/plugin.ini - symbolic link to reference - /etc/neutron/plugins/ml2/ml2_conf.ini. - - Clean the active OVS configuration: - # service neutron-ovs-cleanup restart - - Restart Networking services: - # service neutron-openvswitch-agent restart - - Restart Compute services: - # service openstack-nova-compute restart - - -
-
- Upgrade the Storage Nodes - - Upgrade packages on the storage nodes to Icehouse: - - Make sure you have removed the repository for Havana - packages and added the repository for Icehouse - packages. - - # yum upgrade - - Restart Block Storage service: - # service openstack-cinder-volume restart - - -
-
- -
- How to Perform an Upgrade from Icehouse to Juno - - Use this procedure to upgrade a basic operational deployment of - the following services: Identity (keystone), Image service (glance), - Compute (nova), Networking (neutron), dashboard (horizon), Block - Storage (cinder), Orchestration (heat), and Telemetry (ceilometer). - This procedure references the basic three-node architecture in the - This section describes the process to upgrade a basic + OpenStack deployment based on the basic three-node architecture in the + OpenStack Installation Guide. All nodes - must run a supported distribution of Linux with a recent kernel and - latest Icehouse packages. -
- Before you begin + must run a supported distribution of Linux with a recent kernel and the + current release packages. +
+ Prerequisites - The upgrade process interrupts management of your environment - including the dashboard. If you properly prepare for the upgrade, - existing instances, networking, and storage should continue to - operate. However, instances might experience intermittent network - interruptions. + Perform some cleaning of the environment prior to + starting the upgrade process to ensure a consistent state. + For example, instances not fully purged from the system + after deletion might cause indeterminate behavior. - Review the - release notes before upgrading to learn about new, updated, - and deprecated features. - - - Consider adopting structure and options from Juno service - configuration files and merging them with existing configuration - files. The - OpenStack Configuration Reference - contains new, updated, and deprecated options for most - services. - - - For environments using the OpenStack Networking (neutron) - service, verify the Icehouse version of the database: + For environments using the OpenStack Networking + service (neutron), verify the release version of the database. For example: # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron -INFO [alembic.migration] Context impl MySQLImpl. -INFO [alembic.migration] Will assume non-transactional DDL. -Current revision for mysql+pymysql://neutron:XXXXX@controller/neutron: 5ac1c354a051 -> icehouse (head), icehouse + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron
-
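+ As a purely illustrative example of this kind of cleanup, you can
+ archive rows that Compute has already soft-deleted, assuming your
+ release provides the nova-manage db archive_deleted_rows subcommand:
+ # nova list --all-tenants
+ # su -s /bin/sh -c "nova-manage db archive_deleted_rows --max_rows 1000" nova
+ Review the instance list first, then repeat the archive command until
+ no further soft-deleted rows remain.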
+
Perform a backup - Save the configuration files on all nodes: + Save the configuration files on all nodes. For example: # for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \ - do mkdir $i-icehouse; \ + do mkdir $i-kilo; \ done # for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \ - do cp -r /etc/$i/* $i-icehouse/; \ + do cp -r /etc/$i/* $i-kilo/; \ done You can modify this example script on each node to @@ -2131,65 +285,37 @@ Current revision for mysql+pymysql://neutron:XXXXX@controller/neutron: 5ac1c354a - Back up all databases on the controller: + Make a full database backup of your production data. As of + Kilo, database downgrades are not supported, and the only method + available to get back to a prior database version will be to restore + from backup. # mysqldump -u root -p --opt --add-drop-database --all-databases > icehouse-db-backup.sql Consider updating your SQL server configuration as described in the - OpenStack Installation Guide.
-
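+ Before relying on the dump, it is worth a quick sanity check that it
+ completed and actually contains data. For example, using the
+ placeholder file name from the rollback section of this chapter:
+ # ls -lh RELEASE_NAME-db-backup.sql
+ # tail -n 1 RELEASE_NAME-db-backup.sql
+ A successful mysqldump run normally ends with a "-- Dump completed"
+ comment, and the file size should be roughly consistent with the
+ amount of data in your databases.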
+
Manage repositories - Complete the following steps on all nodes. + On all nodes: - Remove the repository for Icehouse packages. + Remove the repository for the previous release packages. - On Ubuntu, follow these steps: - - - Add the repository for Juno packages: - # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \ - "trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchive-juno.list - - Remove any Ubuntu Cloud archive repositories for Icehouse - packages. You might also need to install or update the - ubuntu-cloud-keyring package. - - - - Update the repository database. - - + Add the repository for the new release packages. - On Red Hat Enterprise Linux (RHEL), CentOS, and Fedora, - follow these steps: - - - Remove the repository for Icehouse packages: - # yum erase rdo-release-icehouse - - - Add the repository for Juno packages: - # yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm - - - Update the repository database. - - + Update the repository database.
-
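+ As an illustration only, on Ubuntu with the Ubuntu Cloud Archive the
+ three steps might look like the following; replace trusty-updates/kilo
+ and the file name with the pocket and name for your distribution and
+ target release, and install or update the ubuntu-cloud-keyring package
+ if needed:
+ # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
+   "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
+ # apt-get update
+ On RDO-based distributions, the equivalent is typically to erase the
+ old rdo-release package (for example, yum erase rdo-release-icehouse)
+ and install the rdo-release package published for the new release.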
- Controller nodes -
- Upgrade packages to Juno +
+ Upgrade packages on each node Depending on your specific configuration, upgrading all packages might restart or break services supplemental to your OpenStack environment. For example, if you use the TGT iSCSI @@ -2200,620 +326,81 @@ Current revision for mysql+pymysql://neutron:XXXXX@controller/neutron: 5ac1c354a files, reject the changes. The package manager appends a suffix to newer versions of configuration files. Consider reviewing and adopting content from these files. + + You may need to explicitly install the ipset + package if your distribution does not install it as a + dependency.
-
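+ For example, a full package upgrade is typically performed with the
+ distribution package manager, installing ipset explicitly where
+ required:
+ # apt-get update && apt-get dist-upgrade
+ # apt-get install ipset
+ or, on RHEL, CentOS, and Fedora:
+ # yum upgrade
+ # yum install ipset
+ Watch the package manager output for configuration file prompts and
+ for supplemental services being restarted, as described above.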
+
Update services - To update a service, you generally modify one or more + To update a service on each node, you generally modify one or more configuration files, stop the service, synchronize the database schema, and start the service. Some services require different steps. We recommend verifying operation of each service before proceeding to the next service. - - All services - These configuration changes apply to all services. - - In any file containing the - [keystone_authtoken] section, modify Identity - service access to use the - option: - [keystone_authtoken] -... -identity_uri = http://controller:35357 - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - In any file containing the option, - modify it to explicitly use version 2.0: - auth_uri = http://controller:5000/v2.0 - - - - Identity service - - Edit the /etc/keystone/keystone.conf - file: - - - In the [token] section, configure the - UUID token provider and SQL driver: - [token] -... -provider = keystone.token.providers.uuid.Provider -driver = keystone.token.persistence.backends.sql.Token - - - - - Stop the service. - - - Clear expired tokens: - # su -s /bin/sh -c "keystone-manage token_flush" keystone - - - Synchronize the database schema: - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - - Start the service. - - - - Image service - - Edit the /etc/glance/glance-api.conf - file: - - - Move the following options from the - [DEFAULT] section to the - [glance_store] section: - - - - - - - - - - These options must contain values. - - - - - - Stop the services. - - - Synchronize the database schema: - # su -s /bin/sh -c "glance-manage db_sync" glance - - - Start the services. - - - - Compute service - - Edit the /etc/nova/nova.conf - file: - - - In the [DEFAULT] section, rename - the option to - and move it to the - [glance] section. - - - In the [DEFAULT] section, rename - the following options and move them to the - [neutron] section: - - - - Old options - New options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Stop the services. - - - Synchronize the database schema: - # su -s /bin/sh -c "nova-manage db sync" nova - - - Start the services. - - - - Networking service - - Edit the /etc/neutron/neutron.conf - file: - - - In the [DEFAULT] section, change - the value of the option: - - neutron.openstack.common.rpc.impl_kombu - becomes rabbit - - - In the [DEFAULT] section, change - the value of the option: - - neutron.plugins.ml2.plugin.Ml2Plugin - becomes ml2 - - - In the [DEFAULT] section, change - the value or values of the - option to use short names. For example: - - neutron.services.l3_router.l3_router_plugin.L3RouterPlugin - becomes router - - - In the [DEFAULT] section, explicitly - define a value for the - option. For example: - [DEFAULT] -... -nova_region_name = regionOne - - - - - Stop the services. - - - Synchronize the database schema: - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron - - - Start the services. - - - - Dashboard - In typical environments, updating the dashboard only requires - restarting the services. - - Restart the services. - - - - Block Storage service - - Edit the /etc/cinder/cinder.conf - file: - - - In the [DEFAULT] section, add - the following option: - my_ip = controller - - - - - Stop the services. 
- - - Synchronize the database schema: - # su -s /bin/sh -c "cinder-manage db sync" cinder - - - Start the services. - - - - Orchestration service - - Create the heat_stack_owner role if it - does not exist: - # keystone role-create --name heat_stack_owner - - - Edit the /etc/heat/heat.conf - file: - - - In the [DEFAULT] section, change - the value of the option: - - heat.openstack.common.rpc.impl_kombu - becomes rabbit - - - - - Stop the services. - - - Synchronize the database schema: - # su -s /bin/sh -c "heat-manage db_sync" heat - - - Start the services. - - - - Telemetry service - In typical environments, updating the Telemetry service - only requires restarting the services. - - Restart the services. - - -
-
-
+ The order in which you should upgrade the services, and any changes
+ from the general upgrade process, are described below:
+ 
+ Controller node
+ 
+ OpenStack Identity - Clear any expired tokens before
+ synchronizing the database.
+ 
+ OpenStack Image service
+ 
+ OpenStack Compute, including networking
+ components.
+ 
+ OpenStack Networking
+ 
+ OpenStack Block Storage
+ 
+ OpenStack dashboard - In typical environments, updating the
+ dashboard only requires restarting the Apache HTTP (httpd)
+ service.
+ 
+ OpenStack Orchestration
+ 
+ OpenStack Telemetry - In typical environments, updating the
+ Telemetry service only requires restarting the service.
+ 
+ Network nodes
-
- Upgrade packages to Juno - Explicitly install the ipset package - if your distribution does not install it as a - dependency. - Depending on your specific configuration, upgrading all - packages might restart or break services supplemental to your - OpenStack environment. For example, if you use the TGT iSCSI - framework for Block Storage volumes and the upgrade includes - new packages for it, the package manager might restart the - TGT iSCSI services and impact access to volumes. - If the package manager prompts you to update configuration - files, reject the changes. The package manager appends a - suffix to newer versions of configuration files. Consider - reviewing and adopting content from these files. -
-
- Update services - To update a service, you generally modify one or more - configuration files, stop the service, synchronize the - database schema, and start the service. Some services require - different steps. We recommend verifying operation of each - service before proceeding to the next service. - - All services - These configuration changes apply to all services. - - In any file containing the - [keystone_authtoken] section, modify Identity - service access to use the - option: - [keystone_authtoken] -... -identity_uri = http://controller:35357 - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - In any file containing the option, - modify it to explicitly use version 2.0: - auth_uri = http://controller:5000/v2.0 - - - - Networking service - - Edit the /etc/neutron/neutron.conf - file: - - - In the [DEFAULT] section, change - the value of the option: - - neutron.openstack.common.rpc.impl_kombu - becomes rabbit - - - In the [DEFAULT] section, change - the value of the option: - - neutron.plugins.ml2.plugin.Ml2Plugin - becomes ml2 - - - In the [DEFAULT] section, change - the value or values of the - option to use short names. For example: - - neutron.services.l3_router.l3_router_plugin.L3RouterPlugin - becomes router - - - In the [DEFAULT] section, explicitly - define a value for the - option. For example: - [DEFAULT] -... -nova_region_name = regionOne - - - In the [database] section, remove any - options because the Networking - service uses the message queue instead of direct access to - the database. - - - - - Restart the services. - - -
-
-
+ + OpenStack Compute - Edit the configuration file and restart the + service. + + + OpenStack Networking - Edit the configuration file and restart + the service. + + + Compute nodes -
- Upgrade packages to Juno - Explicitly install the ipset package - if your distribution does not install it as a - dependency. - Depending on your specific configuration, upgrading all - packages might restart or break services supplemental to your - OpenStack environment. For example, if you use the TGT iSCSI - framework for Block Storage volumes and the upgrade includes - new packages for it, the package manager might restart the - TGT iSCSI services and impact access to volumes. - If the package manager prompts you to update configuration - files, reject the changes. The package manager appends a - suffix to newer versions of configuration files. Consider - reviewing and adopting content from these files. -
-
- Update services - To update a service, you generally modify one or more - configuration files, stop the service, synchronize the - database schema, and start the service. Some services require - different steps. We recommend verifying operation of each - service before proceeding to the next service. - - All services - These configuration changes apply to all services. - - In any file containing the - [keystone_authtoken] section, modify Identity - service access to use the - option: - [keystone_authtoken] -... -identity_uri = http://controller:35357 - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - In any file containing the option, - modify it to explicitly use version 2.0: - auth_uri = http://controller:5000/v2.0 - - - - Compute service - - Edit the /etc/nova/nova.conf - file: - - - In the [DEFAULT] section, rename - the option to - and move it to the - [glance] section. - - - In the [DEFAULT] section, rename - the following options and move them to the - [neutron] section: - - - - Old options - New options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - In the [database] section, remove any - options because the Compute - service uses the message queue instead of direct access to - the database. - - - - - Restart the services. - - - - Networking service - - Edit the /etc/neutron/neutron.conf - file: - - - In the [DEFAULT] section, change - the value of the option: - - neutron.openstack.common.rpc.impl_kombu - becomes rabbit - - - In the [DEFAULT] section, change - the value of the option: - - neutron.plugins.ml2.plugin.Ml2Plugin - becomes ml2 - - - In the [DEFAULT] section, change - the value or values of the - option to use short names. For example: - - neutron.services.l3_router.l3_router_plugin.L3RouterPlugin - becomes router - - - In the [DEFAULT] section, explicitly - define a value for the - option. For example: - [DEFAULT] -... -nova_region_name = regionOne - - - In the [database] section, remove any - options because the Networking - service uses the message queue instead of direct access to - the database. - - - - - Restart the services. - - -
-
-
+ + OpenStack Block Storage - Updating the Block Storage service + only requires restarting the service. + + + Storage nodes -
- Upgrade packages to Juno - Depending on your specific configuration, upgrading all - packages might restart or break services supplemental to your - OpenStack environment. For example, if you use the TGT iSCSI - framework for Block Storage volumes and the upgrade includes - new packages for it, the package manager might restart the - TGT iSCSI services and impact access to volumes. - If the package manager prompts you to update configuration - files, reject the changes. The package manager appends a - suffix to newer versions of configuration files. Consider - reviewing and adopting content from these files. + + OpenStack Networking - Edit the configuration file and restart + the service. + +
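+ To make the general pattern concrete, a minimal sketch for a single
+ controller service such as Identity is shown below; the service name
+ follows the Red Hat style packaging used elsewhere in this chapter,
+ so adjust it to your distribution:
+ # service openstack-keystone stop
+ # su -s /bin/sh -c "keystone-manage token_flush" keystone
+ # su -s /bin/sh -c "keystone-manage db_sync" keystone
+ # service openstack-keystone start
+ Verify the service, for example by requesting a token with the
+ keystone client, before moving on to the next one.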
-
- Update services - To update a service, you generally modify one or more - configuration files, stop the service, synchronize the - database schema, and start the service. Some services require - different steps. We recommend verifying operation of each - service before proceeding to the next service. - - All services - These configuration changes apply to all services. - - In any file containing the - [keystone_authtoken] section, modify Identity - service access to use the - option: - [keystone_authtoken] -... -identity_uri = http://controller:35357 - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - In any file containing the option, - modify it to explicitly use version 2.0: - auth_uri = http://controller:5000/v2.0 - - - - Block Storage service - In typical environments, updating the Block Storage service - only requires restarting the services. - - Restart the services. - - -
-
-
- -
- Cleaning Up and Final Configuration File Updates +
+ Final steps On all distributions, you must perform some final tasks to complete the upgrade process. upgrading @@ -2824,7 +411,7 @@ identity_uri = http://controller:35357/etc/nova/nova.conf on the compute nodes back to the original value for your environment. Update all .ini files to match - passwords and pipelines as required for Havana in your + passwords and pipelines as required for the OpenStack release in your environment. After migration, users see different results from nova image-list and glance @@ -2833,13 +420,14 @@ identity_uri = http://controller:35357/etc/nova/policy.json files to contain "context_is_admin": "role:admin", which limits access to private images for projects. - Thoroughly test the environment. Then, let your users - know that their cloud is running normally again. + Verify proper operation of your environment. Then, notify your users + that their cloud is operating normally again.
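+ A quick, illustrative spot check, assuming the standard command-line
+ clients are installed and admin credentials are loaded from the
+ admin-openrc.sh file used in the installation guide, might be:
+ # source admin-openrc.sh
+ # nova service-list
+ # neutron agent-list
+ # cinder service-list
+ # glance image-list
+ All services and agents should report as up, and existing images,
+ instances, and volumes should still be listed.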
+
- Rolling Back a Failed Upgrade + Rolling back a failed upgrade Upgrades involve complex operations and can fail. Before attempting any upgrade, you should make a full database backup @@ -2848,8 +436,7 @@ identity_uri = http://controller:35357 This section provides guidance for rolling back to a previous - release of OpenStack. Although only tested on Ubuntu, other - distributions follow a similar procedure. rollbacks process for @@ -2858,20 +445,17 @@ identity_uri = http://controller:35357rolling back failures - In this section, we consider only the most immediate case: - you have taken down production management services in - preparation for an upgrade, completed part of the upgrade - process, discovered one or more problems not encountered during - testing, and you must roll back your environment to the original - "known good" state. Make sure that you did not make any state - changes after attempting the upgrade process: no new instances, - networks, storage volumes, and so on. Any of these new resources - will be in a zombie state after the databases are restored from - backup. + A common scenario is to take down production management services + in preparation for an upgrade, completed part of the upgrade process, + and discovered one or more problems not encountered during testing. + As a consequence, you must roll back your environment to the original + "known good" state. You also made sure that you did not make any state + changes after attempting the upgrade process; no new instances, networks, + storage volumes, and so on. Any of these new resources will be in a frozen + state after the databases are restored from backup. Within this scope, you must complete these steps to successfully roll back your environment: - Roll back configuration files. @@ -2886,53 +470,37 @@ identity_uri = http://controller:35357 - The upgrade instructions provided in earlier sections ensure - that you have proper backups of your databases and configuration - files. Read through this section carefully and verify that you + You should verify that you have the requisite backups to restore. Rolling back upgrades is a tricky process because distributions tend to put much more effort into testing upgrades than downgrades. Broken downgrades - often take significantly more effort to troubleshoot and, - hopefully, resolve than broken upgrades. Only you can weigh the - risks of trying to push a failed upgrade forward versus rolling - it back. Generally, consider rolling back as the very last - option. + take significantly more effort to troubleshoot and, resolve than + broken upgrades. Only you can weigh the risks of trying to push + a failed upgrade forward versus rolling it back. Generally, + consider rolling back as the very last option. The following steps described for Ubuntu have worked on at least one production environment, but they might not work for all environments. - To perform the rollback from Havana to Grizzly + To perform the rollback Stop all OpenStack services. - - Copy contents of configuration backup directories - /etc/<service>.grizzly that you + Copy contents of configuration backup directories that you created during the upgrade process back to - /etc/<service>: + /etc/<service> directory. 
- Restore databases from the - grizzly-db-backup.sql backup file + RELEASE_NAME-db-backup.sql backup file that you created with the mysqldump command during the upgrade process: - # mysql -u root -p < grizzly-db-backup.sql - - If you created this backup by using the - flag as instructed, - you can proceed to the next step. If you omitted this flag, - MySQL reverts all tables that existed in Grizzly, but does - not drop any tables created during the database migration - for Havana. In this case, you must manually determine which - tables to drop, and drop them to prevent issues with your - next upgrade attempt. + # mysql -u root -p < RELEASE_NAME-db-backup.sql @@ -3101,7 +669,7 @@ python-novaclient=1:2.13.0-0ubuntu1~cloud0 # apt-get install `cat openstack-grizzly-versions` This step completes the rollback procedure. You - should remove the Havana repository and run + should remove the upgrade release repository and run apt-get update to prevent accidental upgrades until you solve whatever issue caused you to roll back your environment.
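+ As an additional, purely illustrative safeguard, you can also place
+ holds on the downgraded packages so that a routine package upgrade
+ cannot pull the newer release back in before you are ready:
+ # apt-mark hold $(cut -d= -f1 openstack-grizzly-versions)
+ Release the holds with apt-mark unhold when you are ready to attempt
+ the upgrade again.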