Remove use_neutron from config

Nova network has been fully removed, so remove the use_neutron
config option and related code.

Change-Id: Ib9d87dd339d637b69fb27315d92228cbc523c8eb
Closes-Bug: #1693891
Implements: bp sahara-remove-nova-network
zhangxuanyuan 2017-11-22 15:17:45 +08:00
parent de3c7a9efb
commit eaaa239240
20 changed files with 62 additions and 230 deletions


@@ -119,14 +119,11 @@ function configure_sahara {
database connection `database_connection_url sahara`
if is_service_enabled neutron; then
iniset $SAHARA_CONF_FILE DEFAULT use_neutron true
iniset $SAHARA_CONF_FILE neutron endpoint_type $SAHARA_ENDPOINT_TYPE
if is_ssl_enabled_service "neutron" \
|| is_service_enabled tls-proxy; then
iniset $SAHARA_CONF_FILE neutron ca_file $SSL_BUNDLE_FILE
fi
else
iniset $SAHARA_CONF_FILE DEFAULT use_neutron false
fi
if is_ssl_enabled_service "heat" || is_service_enabled tls-proxy; then


@@ -11,11 +11,10 @@ Custom network topologies
-------------------------
Sahara accesses instances at several stages of cluster spawning through
SSH and HTTP. Floating IPs and network namespaces
(see :ref:`neutron-nova-network`) will be automatically used for
access when present. When floating IPs are not assigned to instances and
namespaces are not being used, sahara will need an alternative method to
reach them.
SSH and HTTP. Floating IPs and network namespaces will be automatically
used for access when present. When floating IPs are not assigned to
instances and namespaces are not being used, sahara will need an
alternative method to reach them.
The ``proxy_command`` parameter of the configuration file can be used to
give sahara a command to access instances. This command is run on the
@@ -363,13 +362,13 @@ Indirect instance access through proxy nodes
Sahara needs to access instances through SSH during cluster setup. This
access can be obtained a number of different ways (see
:ref:`neutron-nova-network`, :ref:`floating_ip_management`,
:ref:`custom_network_topologies`). Sometimes it is impossible to provide
access to all nodes (because of limited numbers of floating IPs or security
policies). In these cases access can be gained using other nodes of the
cluster as proxy gateways. To enable this set ``is_proxy_gateway=true``
for the node group you want to use as proxy. Sahara will communicate with
all other cluster instances through the instances of this node group.
:ref:`floating_ip_management`, :ref:`custom_network_topologies`). Sometimes
it is impossible to provide access to all nodes (because of limited
numbers of floating IPs or security policies). In these cases access can
be gained using other nodes of the cluster as proxy gateways. To enable
this set ``is_proxy_gateway=true`` for the node group you want to use as
proxy. Sahara will communicate with all other cluster instances through
the instances of this node group.
Note, if ``use_floating_ips=true`` and the cluster contains a node group with
``is_proxy_gateway=true``, the requirement to have ``floating_ip_pool``


@@ -52,21 +52,12 @@ Next you will configure the default Networking service. If using
neutron for networking the following parameter should be set
in the ``[DEFAULT]`` section:
.. sourcecode:: cfg
use_neutron=true
If you are using nova-network for networking then this parameter should
be set to ``false``.
With these parameters set, sahara is ready to run.
By default the sahara's log level is set to INFO. If you wish to increase
the logging levels for troubleshooting, set ``debug`` to ``true`` in the
``[DEFAULT]`` section of the configuration file.
.. _neutron-nova-network:
Networking configuration
------------------------
@@ -77,7 +68,6 @@ be used to enable their usage.
.. sourcecode:: cfg
[DEFAULT]
use_neutron=True
use_namespaces=True
.. note::
@@ -85,10 +75,6 @@ be used to enable their usage.
instance and namespaces are used, some additional configuration is
required, please see :ref:`non-root-users` for more information.
If an OpenStack cluster uses the deprecated nova-network,
then the ``use_neutron`` parameter should be set to ``False`` in the
sahara configuration file.
.. _floating_ip_management:
Floating IP management
@@ -102,11 +88,6 @@ to use floating IP addresses for access. This is controlled by the
has two options for ensuring that the instances in the node groups
templates that requires floating IPs gain a floating IP address:
* If using the nova-network, it may be configured to assign floating
IP addresses automatically by setting the ``auto_assign_floating_ip``
parameter to ``True`` in the nova configuration file
(usually ``nova.conf``).
* The user may specify a floating IP address pool for each node
group that requires floating IPs directly.
@@ -122,9 +103,7 @@ use both.
If not using floating IP addresses (``use_floating_ips=False``) sahara
will use fixed IP addresses for instance management. When using neutron
for the Networking service the user will be able to choose the
fixed IP network for all instances in a cluster. Whether using nova-network
or neutron it is important to ensure that all instances running sahara
have access to the fixed IP networks.
fixed IP network for all instances in a cluster.
.. _notification-configuration:


@@ -93,8 +93,8 @@ Set the proper values for host and url variables:
OPENSTACK_HOST = "ip of your controller"
..
If you are using Nova-Network with ``auto_assign_floating_ip=True`` add the
following parameter:
If you wish to disable floating IP options during node group template
creation, add the following parameter:
.. sourcecode:: python


@@ -10,14 +10,12 @@ The sample configuration file is available `from the Horizon repository. <https:
1. Networking
-------------
Depending on the Networking backend (Nova Network or Neutron) used in the
Depending on the Networking backend (Neutron) used in the
cloud, Sahara panels will determine automatically which input fields should be
displayed.
While using Nova Network backend the cloud may be configured to automatically
assign floating IPs to instances. If Sahara service is configured to use those
automatically assigned floating IPs the same configuration should be done to
the dashboard through the ``SAHARA_AUTO_IP_ALLOCATION_ENABLED`` parameter.
If you wish to disable floating IP options during node group template
creation, add the following parameter:
Example:
@@ -26,7 +24,6 @@ Example:
SAHARA_AUTO_IP_ALLOCATION_ENABLED = True
..
2. Different endpoint
---------------------


@@ -368,8 +368,7 @@ below further specifies which fields are filled at which moment.
| | | to make them accessible for user. |
+----------------------------+--------+---------------------------------------+
| neutron_management_network | string | Neutron network ID. Instances will |
| | | get fixed IPs in this network if |
| | | 'use_neutron' config is set to True. |
| | | get fixed IPs in this network. |
+----------------------------+--------+---------------------------------------+
| anti_affinity | list | List of processes that will be run on |
| | | different hosts. |


@@ -137,10 +137,7 @@ Standby NameNode.
Networking support
------------------
Sahara supports both the nova-network and neutron implementations of
OpenStack Networking. By default sahara is configured to behave as if
the nova-network implementation is available. For OpenStack installations
that are using the neutron project please see :ref:`neutron-nova-network`.
Sahara supports neutron implementations of OpenStack Networking.
Object Storage support
----------------------
@@ -190,7 +187,7 @@ The following table provides a plugin capability matrix:
+--------------------------+---------+----------+----------+-------+
| Feature/Plugin | Vanilla | HDP | Cloudera | Spark |
+==========================+=========+==========+==========+=======+
| Nova and Neutron network | x | x | x | x |
| Neutron network | x | x | x | x |
+--------------------------+---------+----------+----------+-------+
| Cluster Scaling | x | x | x | x |
+--------------------------+---------+----------+----------+-------+


@@ -8,18 +8,12 @@
#port=8386
# If set to True, Sahara will use floating IPs to communicate
# with instances. To make sure that all instances have
# floating IPs assigned in Nova Network set
# "auto_assign_floating_ip=True" in nova.conf. If Neutron is
# used for networking, make sure that all Node Groups have
# "floating_ip_pool" parameter defined. (boolean value)
# with instances. If Neutron is used for networking, make
# sure that all Node Groups have "floating_ip_pool" parameter
# defined. (boolean value)
#use_floating_ips=true
# Use Neutron or Nova Network (boolean value)
#use_neutron=true
# Use network namespaces for communication (only valid to use in conjunction
# with use_neutron=True)
# Use network namespaces for communication
#use_namespaces=false
# Use Designate for internal and external hostnames resolution (boolean value)
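With this change the sample configuration carries no ``use_neutron`` knob at all. As a hedged illustration (not part of the commit), parsing a made-up fragment that echoes the trimmed ``[DEFAULT]`` section shows only the surviving options:

```python
import configparser

# made-up fragment echoing the trimmed sample above; not the real file
SAMPLE = """\
[DEFAULT]
use_floating_ips = true
use_namespaces = false
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# the option removed by this commit is simply gone
print(cfg.has_option("DEFAULT", "use_neutron"))       # False
print(cfg.getboolean("DEFAULT", "use_floating_ips"))  # True
```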


@@ -0,0 +1,6 @@
---
upgrade:
- |
Nova network has been fully removed from the OpenStack codebase;
all switches on use_neutron and the configuration option itself
have been removed from sahara.


@@ -71,23 +71,15 @@ networking_opts = [
default=True,
help='If set to True, Sahara will use floating IPs to '
'communicate with instances. To make sure that all '
'instances have floating IPs assigned in Nova Network '
'set "auto_assign_floating_ip=True" in nova.conf. '
'If Neutron is used for networking, make sure that '
'all Node Groups have "floating_ip_pool" parameter '
'defined.'),
'instances have floating IPs assigned, make sure '
'that all Node Groups have "floating_ip_pool" '
'parameter defined.'),
cfg.StrOpt('node_domain',
default='novalocal',
help="The suffix of the node's FQDN. In nova-network that is "
"the dhcp_domain config parameter."),
cfg.BoolOpt('use_neutron',
default=True,
help="Use Neutron Networking (False indicates the use of Nova "
"networking)."),
help="The suffix of the node's FQDN."),
cfg.BoolOpt('use_namespaces',
default=False,
help="Use network namespaces for communication (only valid to "
"use in conjunction with use_neutron=True)."),
help="Use network namespaces for communication."),
cfg.BoolOpt('use_rootwrap',
default=False,
help="Use rootwrap facility to allow non-root users to run "
@@ -227,15 +219,3 @@ def parse_configs(conf_files=None):
raise ex.ConfigurationError(
_("Option '%(option)s' is required for config group '%(group)s'") %
{'option': roe.opt_name, 'group': roe.group.name})
validate_configs()
def validate_network_configs():
if CONF.use_namespaces and not CONF.use_neutron:
raise ex.ConfigurationError(
_('use_namespaces can not be set to "True" when use_neutron '
'is set to "False"'))
def validate_configs():
validate_network_configs()


@@ -320,25 +320,15 @@ class ClusterStack(object):
security_group_name = g.generate_auto_security_group_name(ng)
security_group_description = self._asg_for_node_group_description(ng)
if CONF.use_neutron:
res_type = "OS::Neutron::SecurityGroup"
desc_key = "description"
rules_key = "rules"
create_rule = lambda ip_version, cidr, proto, from_port, to_port: {
"ethertype": "IPv{}".format(ip_version),
"remote_ip_prefix": cidr,
"protocol": proto,
"port_range_min": six.text_type(from_port),
"port_range_max": six.text_type(to_port)}
else:
res_type = "AWS::EC2::SecurityGroup"
desc_key = "GroupDescription"
rules_key = "SecurityGroupIngress"
create_rule = lambda _, cidr, proto, from_port, to_port: {
"CidrIp": cidr,
"IpProtocol": proto,
"FromPort": six.text_type(from_port),
"ToPort": six.text_type(to_port)}
res_type = "OS::Neutron::SecurityGroup"
desc_key = "description"
rules_key = "rules"
create_rule = lambda ip_version, cidr, proto, from_port, to_port: {
"ethertype": "IPv{}".format(ip_version),
"remote_ip_prefix": cidr,
"protocol": proto,
"port_range_min": six.text_type(from_port),
"port_range_max": six.text_type(to_port)}
rules = self._serialize_auto_security_group_rules(ng, create_rule)
@@ -362,12 +352,11 @@ class ClusterStack(object):
rules.append(create_rule(6, '::/0', 'tcp', SSH_PORT, SSH_PORT))
# open all traffic for private networks
if CONF.use_neutron:
for cidr in neutron.get_private_network_cidrs(ng.cluster):
ip_ver = 6 if ':' in cidr else 4
for protocol in ['tcp', 'udp']:
rules.append(create_rule(ip_ver, cidr, protocol, 1, 65535))
rules.append(create_rule(ip_ver, cidr, 'icmp', 0, 255))
for cidr in neutron.get_private_network_cidrs(ng.cluster):
ip_ver = 6 if ':' in cidr else 4
for protocol in ['tcp', 'udp']:
rules.append(create_rule(ip_ver, cidr, protocol, 1, 65535))
rules.append(create_rule(ip_ver, cidr, 'icmp', 0, 255))
return rules
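With the AWS::EC2 (nova-network) branch gone, only the Neutron rule shape survives. A standalone sketch of that builder follows; the function name, the use of plain `str()` in place of `six.text_type`, and the example CIDR are illustrative assumptions, since the real code keeps this as a lambda inside `ClusterStack`:

```python
# assumed standalone name; the original is a lambda in ClusterStack
def create_neutron_rule(ip_version, cidr, proto, from_port, to_port):
    """Build one rule dict for a Heat OS::Neutron::SecurityGroup resource."""
    return {
        "ethertype": "IPv{}".format(ip_version),
        "remote_ip_prefix": cidr,
        "protocol": proto,
        # the py2/py3-compatible original uses six.text_type; str() here
        "port_range_min": str(from_port),
        "port_range_max": str(to_port),
    }

# e.g. the IPv4 SSH rule every auto security group receives
ssh_rule = create_neutron_rule(4, "0.0.0.0/0", "tcp", 22, 22)
print(ssh_rule["ethertype"])  # IPv4
```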


@@ -112,14 +112,9 @@ def _update_limits_for_ng(limits, ng, count):
if ng.auto_security_group:
limits['security_groups'] += sign(count)
# NOTE: +3 - all traffic for private network
if CONF.use_neutron:
limits['security_group_rules'] += (
(len(ng.open_ports) + 3) * sign(count))
else:
limits['security_group_rules'] = max(
limits['security_group_rules'], len(ng.open_ports) + 3)
if CONF.use_neutron:
limits['ports'] += count
limits['security_group_rules'] += (
(len(ng.open_ports) + 3) * sign(count))
limits['ports'] += count
def _get_avail_limits():
@@ -146,23 +141,11 @@ def _get_nova_limits():
limits['cpu'] = _sub_limit(lim['maxTotalCores'], lim['totalCoresUsed'])
limits['instances'] = _sub_limit(lim['maxTotalInstances'],
lim['totalInstancesUsed'])
if CONF.use_neutron:
return limits
# tmckay-fp here we would just get the limits all the time
limits['floatingips'] = _sub_limit(lim['maxTotalFloatingIps'],
lim['totalFloatingIpsUsed'])
limits['security_groups'] = _sub_limit(lim['maxSecurityGroups'],
lim['totalSecurityGroupsUsed'])
limits['security_group_rules'] = _sub_limit(lim['maxSecurityGroupRules'],
0)
return limits
def _get_neutron_limits():
limits = {}
if not CONF.use_neutron:
return limits
neutron = neutron_client.client()
tenant_id = context.ctx().tenant_id
total_lim = b.execute_with_retries(neutron.show_quota, tenant_id)['quota']
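The simplified `_update_limits_for_ng` arithmetic can be sketched in isolation. The helper names and the dict layout here are assumptions; the `+3` mirrors the all-traffic rules for the private network noted in the hunk:

```python
def sign(x):
    # mirrors the sign() helper used by the quota code (assumed behavior)
    return (x > 0) - (x < 0)

def update_limits_for_ng(limits, open_ports, count, auto_security_group=True):
    """Neutron-only bookkeeping: rule and port counts scale with node count."""
    limits['instances'] += count
    if auto_security_group:
        limits['security_groups'] += sign(count)
        # +3 - all traffic (tcp/udp/icmp) for the private network
        limits['security_group_rules'] += (len(open_ports) + 3) * sign(count)
    limits['ports'] += count
    return limits

limits = {'instances': 0, 'security_groups': 0,
          'security_group_rules': 0, 'ports': 0}
update_limits_for_ng(limits, open_ports=[1111, 2222], count=3)
print(limits['security_group_rules'])  # 5
```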


@@ -196,15 +196,7 @@ def check_security_groups_exist(security_groups):
def check_floatingip_pool_exists(pool_id):
network = None
if CONF.use_neutron:
network = neutron.get_network(pool_id)
else:
# tmckay-fp, whoa, this suggests that we allow floating_ip_pools with
# nova? Can that be true? Scour for this
for net in nova.client().floating_ip_pools.list():
if net.name == pool_id:
network = net.name
break
network = neutron.get_network(pool_id)
if not network:
raise ex.NotFoundException(pool_id, _("Floating IP pool %s not found"))


@@ -83,15 +83,10 @@ def _check_cluster_create(data):
neutron_net_id = _get_cluster_field(data, 'neutron_management_network')
if neutron_net_id:
if not CONF.use_neutron:
raise ex.InvalidReferenceException(
_("'neutron_management_network' field can't be used "
"with 'use_neutron=False'"))
b.check_network_exists(neutron_net_id)
else:
if CONF.use_neutron:
raise ex.NotFoundException('neutron_management_network',
_("'%s' field is not found"))
raise ex.NotFoundException('neutron_management_network',
_("'%s' field is not found"))
def _get_cluster_field(cluster, field):


@@ -24,9 +24,8 @@ from sahara.tests.unit import testutils as tu
class BaseTestClusterTemplate(base.SaharaWithDbTestCase):
"""Checks valid structure of Resources section in generated Heat templates.
1. It checks templates generation with different OpenStack
network installations: Neutron, NovaNetwork with floating Ip auto
assignment set to True or False.
1. It checks templates generation with OpenStack network
installation: Neutron.
2. Cinder volume attachments.
3. Basic instances creations with multi line user data provided.
4. Anti-affinity feature with proper nova scheduler hints included
@@ -120,8 +119,7 @@ class TestClusterTemplate(BaseTestClusterTemplate):
actual = heat_template._get_security_groups(ng1)
self.assertEqual([], actual)
def _generate_auto_security_group_template(self, use_neutron):
self.override_config('use_neutron', use_neutron)
def _generate_auto_security_group_template(self):
ng1, ng2 = self._make_node_groups('floating')
cluster = self._make_cluster('private_net', ng1, ng2)
ng1['cluster'] = cluster
@@ -163,32 +161,7 @@ class TestClusterTemplate(BaseTestClusterTemplate):
} for rule in expected_rules]
}
}}
actual = self._generate_auto_security_group_template(True)
self.assertEqual(expected, actual)
def test_serialize_auto_security_group_nova_network(self):
expected = {'cluster-master-1': {
'type': 'AWS::EC2::SecurityGroup',
'properties': {
'GroupDescription': 'Data Processing Cluster by Sahara\n'
'Sahara cluster name: cluster\n'
'Sahara engine: heat.3.0\n'
'Auto security group for Sahara '
'Node Group: master',
'SecurityGroupIngress': [{
'ToPort': '22',
'CidrIp': '0.0.0.0/0',
'FromPort': '22',
'IpProtocol': 'tcp'
}, {
'ToPort': '22',
'CidrIp': '::/0',
'FromPort': '22',
'IpProtocol': 'tcp'
}]
}
}}
actual = self._generate_auto_security_group_template(False)
actual = self._generate_auto_security_group_template()
self.assertEqual(expected, actual)
@mock.patch("sahara.conductor.objects.Cluster.use_designate_feature")


@@ -87,7 +87,6 @@ class TestNetworks(base.SaharaTestCase):
def test_init_instances_ips_neutron_with_floating(
self, nova, upd):
self.override_config('use_neutron', True)
server = mock.Mock(id='serv_id')
server.addresses = {
'network': [
@@ -111,7 +110,6 @@ class TestNetworks(base.SaharaTestCase):
def test_init_instances_ips_neutron_without_floating(
self, nova, upd):
self.override_config('use_neutron', True)
self.override_config('use_floating_ips', False)
server = mock.Mock(id='serv_id')
server.addresses = {


@@ -198,7 +198,6 @@ class TestQuotas(base.SaharaTestCase):
type(ng).open_ports = mock.PropertyMock(return_value=[1111, 2222])
limits = quotas._get_zero_limits()
self.override_config('use_neutron', True)
quotas._update_limits_for_ng(limits, ng, 3)
self.assertEqual(3, limits['instances'])
@@ -211,22 +210,9 @@ class TestQuotas(base.SaharaTestCase):
self.assertEqual(5, limits['security_group_rules'])
self.assertEqual(3, limits['ports'])
type(ng).open_ports = mock.PropertyMock(return_value=[1, 2, 3])
self.override_config('use_neutron', False)
quotas._update_limits_for_ng(limits, ng, 3)
self.assertEqual(6, limits['security_group_rules'])
self.assertEqual(3, limits['ports'])
@mock.patch('sahara.utils.openstack.nova.client',
return_value=FakeNovaClient(nova_limits))
def test_get_nova_limits(self, nova):
self.override_config('use_neutron', False)
self.assertEqual(
{'cpu': 10, 'floatingips': 200,
'instances': 3, 'ram': 9, 'security_group_rules': 'unlimited',
'security_groups': 28}, quotas._get_nova_limits())
self.override_config('use_neutron', True)
self.assertEqual(
{'cpu': 10, 'instances': 3, 'ram': 9}, quotas._get_nova_limits())
@@ -239,9 +225,6 @@ class TestQuotas(base.SaharaTestCase):
@mock.patch('sahara.utils.openstack.neutron.client',
return_value=FakeNeutronClient(neutron_limits))
def test_neutron_limits(self, neutron):
self.override_config('use_neutron', False)
self.assertEqual({}, quotas._get_neutron_limits())
self.override_config('use_neutron', True)
self.assertEqual({'floatingips': 2340,
'ports': 'unlimited',
'security_group_rules': 332,


@@ -143,7 +143,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
)
def test_cluster_create_v_wrong_network(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "test-name",
@@ -157,24 +156,7 @@ class TestClusterCreateValidation(u.ValidationTestCase):
"94ce-b6df85a68332 not found")
)
def test_cluster_create_mixed_nova_neutron(self):
self.override_config("use_neutron", False)
self._assert_create_object_validation(
data={
'name': "test-name",
'plugin_name': "fake",
'hadoop_version': "0.1",
'default_image_id': '550e8400-e29b-41d4-a716-446655440000',
'neutron_management_network': '53a36917-ab9f-4589-'
'94ce-b6df85a68332'
},
bad_req_i=(1, 'INVALID_REFERENCE',
"'neutron_management_network' field can't "
"be used with 'use_neutron=False'")
)
def test_cluster_create_v_missing_network(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "test-name",
@@ -215,7 +197,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
self._assert_cluster_configs_validation(True)
def test_cluster_create_v_right_data(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",
@@ -237,7 +218,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
self._assert_cluster_default_image_tags_validation()
def test_cluster_create_security_groups(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",
@@ -262,7 +242,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
)
def test_cluster_create_missing_floating_pool(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",
@@ -294,7 +273,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
)
def test_cluster_create_with_proxy_gateway(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",
@@ -327,7 +305,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
)
def test_cluster_create_security_groups_by_ids(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",
@@ -352,7 +329,6 @@ class TestClusterCreateValidation(u.ValidationTestCase):
)
def test_cluster_missing_security_groups(self):
self.override_config("use_neutron", True)
self._assert_create_object_validation(
data={
'name': "testname",


@@ -27,7 +27,7 @@ def has_floating_ip(instance):
# ip, but a simple comparison with the internal_ip
# corresponds with the logic in
# sahara.service.networks.init_instances_ips
if CONF.use_neutron and not instance.node_group.floating_ip_pool:
if not instance.node_group.floating_ip_pool:
return False
# in the neutron case comparing ips is an extra simple check ...


@@ -719,8 +719,6 @@ class InstanceInteropHelper(remote.Remote):
# fp -- just compare to internal?
# in the neutron case, we check the node group for the
# access_instance and look for fp
# in the nova case, we compare management_ip to internal_ip or even
# use the nova interface
elif CONF.use_namespaces and not net_utils.has_floating_ip(
access_instance):
# Build a session through a netcat socket in the Neutron namespace
@@ -803,13 +801,10 @@ class InstanceInteropHelper(remote.Remote):
proxy_command = CONF.proxy_command
# tmckay-fp again we can check the node group for the instance
# what are the implications for nova here? None, because use_namespaces
# is synonymous with use_neutron
# this is a test on whether access_instance has a floating_ip
# what are the implications for nova here? None.
# This is a test on whether access_instance has a floating_ip
# in the neutron case, we check the node group for the
# access_instance and look for fp
# in the nova case, we compare management_ip to internal_ip or even
# use the nova interface
elif (CONF.use_namespaces and not net_utils.has_floating_ip(
access_instance)):
# need neutron info