KuryrNetworkPolicy CRD

This commit is a huge refactoring of how we handle network policies. In
general:

* KuryrNetPolicy is replaced by KuryrNetworkPolicy. The upgrade path
  is handled in the constructor of KuryrNetworkPolicyHandler.
* New CRD has spec and status properties. spec is always populated by
  NetworkPolicyHandler. status is handled by KuryrNetworkPolicyHandler.
  This means that in order to trigger SG rules recalculation on Pod and
  Service events, the NetworkPolicy is "bumped" with a dummy annotation.
* NetworkPolicyHandler injects finalizers onto NetworkPolicy and
  KuryrNetworkPolicy objects, so that they cannot be removed before
  KuryrNetworkPolicyHandler has processed their deletion.
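The "bump" mentioned above can be sketched as follows; the annotation key and helper name are illustrative, not the actual kuryr-kubernetes API:

```python
from datetime import datetime, timezone

# Hypothetical sketch of the "bump": writing a throwaway annotation onto a
# NetworkPolicy so the watcher emits a MODIFIED event and the handler
# recalculates security group rules. The annotation key is made up.
def bump_annotation_patch():
    return {
        'metadata': {
            'annotations': {
                'dummy-bump': datetime.now(timezone.utc).isoformat(),
            }
        }
    }
```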

Depends-On: https://review.opendev.org/742209
Change-Id: Iafc982e590ada0cd9d82e922c103583e4304e9ce
Michał Dulko 2020-03-11 11:27:09 +01:00
parent 3eeb67d84e
commit a1708e1c76
26 changed files with 1286 additions and 1574 deletions

View File

@ -99,7 +99,7 @@
vars:
devstack_localrc:
DOCKER_CGROUP_DRIVER: "systemd"
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
devstack_services:
@ -120,7 +120,7 @@
vars:
devstack_localrc:
KURYR_SUBNET_DRIVER: namespace
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_USE_PORT_POOLS: true
KURYR_POD_VIF_DRIVER: neutron-vif
@ -134,7 +134,7 @@
parent: kuryr-kubernetes-tempest-containerized
vars:
devstack_localrc:
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace

View File

@ -98,7 +98,7 @@
KURYR_LB_ALGORITHM: SOURCE_IP_PORT
KURYR_SUBNET_DRIVER: namespace
KURYR_SG_DRIVER: policy
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
voting: false
- job:
@ -144,7 +144,7 @@
KURYR_ENFORCE_SG_RULES: false
KURYR_LB_ALGORITHM: SOURCE_IP_PORT
KURYR_HYPERKUBE_VERSION: v1.16.0
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true

View File

@ -452,6 +452,7 @@ rules:
- kuryrnets
- kuryrnetworks
- kuryrnetpolicies
- kuryrnetworkpolicies
- kuryrloadbalancers
- kuryrports
- apiGroups: ["networking.k8s.io"]

View File

@ -975,6 +975,7 @@ function update_tempest_conf_file {
iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrnetworks True
iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrports True
iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrloadbalancers True
iniset $TEMPEST_CONFIG kuryr_kubernetes new_kuryrnetworkpolicy_crd True
}
source $DEST/kuryr-kubernetes/devstack/lib/kuryr_kubernetes

View File

@ -47,22 +47,22 @@ The network policy CRD has the following format:
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
kind: KuryrNetworkPolicy
metadata:
...
spec:
egressSgRules:
- security_group_rule:
- sgRule:
...
ingressSgRules:
- security_group_rule:
...
networkpolicy_spec:
- sgRule:
...
podSelector:
...
status:
securityGroupId: ...
securityGroupName: ...
podSelector: ...
securityGroupRules: ...
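For illustration, a minimal object following the format above could be built like this in Python; the values are placeholders, not taken from a real cluster:

```python
def make_minimal_knp(name, namespace):
    """Build a bare-bones KuryrNetworkPolicy dict: spec holds the desired
    SG rule definitions, status holds what the controller has realized."""
    return {
        'apiVersion': 'openstack.org/v1',
        'kind': 'KuryrNetworkPolicy',
        'metadata': {'name': name, 'namespace': namespace},
        'spec': {
            'egressSgRules': [{'sgRule': {'direction': 'egress',
                                          'ethertype': 'IPv4'}}],
            'ingressSgRules': [],
            'podSelector': {},
            'policyTypes': ['Ingress', 'Egress'],
        },
        'status': {
            # Populated by the controller once the security group exists.
            'securityGroupId': None,
            'securityGroupRules': [],
            'podSelector': {},
        },
    }
```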
A new handler has been added to react to Network Policy events, and the existing
ones, for instance service/pod handlers, have been modified to account for the
@ -201,26 +201,25 @@ are assumed to affect Ingress.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
kind: KuryrNetworkPolicy
metadata:
name: np-default-deny
name: default-deny
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 60a0d59c-2102-43e0-b025-75c98b7d9315
security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
ingressSgRules: []
networkpolicy_spec:
...
podSelector:
...
status:
securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
securityGroupName: sg-default-deny
securityGroupRules: ...
podSelector: ...
Allow traffic from pod
@ -263,37 +262,33 @@ restriction was enforced.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
kind: KuryrNetworkPolicy
metadata:
name: np-allow-monitoring-via-pod-selector
name: allow-monitoring-via-pod-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 203a14fe-1059-4eff-93ed-a42bd957145d
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
ingressSgRules:
- namespace: default
security_group_rule:
sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 7987c382-f2a9-47f7-b6e8-1a3a1bcb7d95
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.143
security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
networkpolicy_spec:
...
podSelector:
...
status:
securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
securityGroupName: sg-allow-monitoring-via-pod-selector
securityGroupRules: ...
podSelector: ...
Allow traffic from namespace
@ -337,36 +332,32 @@ egress rule allowing traffic to everywhere.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
name: np-allow-test-via-ns-selector
kind: KuryrNetworkPolicy
name: allow-test-via-ns-selector
namespace: default
...
spec:
egressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 8c21bf42-c8b9-4628-b0a1-bd0dbb192e6b
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
ingressSgRules:
- namespace: dev
security_group_rule:
sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 2a33b802-56ad-430a-801d-690f653198ef
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.192/26
security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
networkpolicy_spec:
...
podSelector:
...
status:
securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
securityGroupName: sg-allow-test-via-ns-selector
securityGroupRules: ...
podSelector: ...
.. note::

View File

@ -95,6 +95,7 @@ Edit ``kuryr.conf``:
- kuryrnets
- kuryrnetworks
- kuryrnetpolicies
- kuryrnetworkpolicies
- kuryrloadbalancers
- apiGroups: ["networking.k8s.io"]
resources:

View File

@ -10,7 +10,7 @@ be found at :doc:`./devstack/containerized`):
.. code-block:: ini
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetwork,kuryrnetpolicy
enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetwork,kuryrnetworkpolicy
Note that if you also want to enable prepopulation of ports pools upon new
namespace creation, you also need to add the kuryrnetwork_population handler
@ -19,7 +19,7 @@ namespace creation, you need to also dd the kuryrnetwork_population handler
.. code-block:: ini
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy,kuryrnetwork,kuryrnetwork_population
enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy,kuryrnetwork,kuryrnetwork_population
After that, also enable the security group drivers for policies:
@ -82,7 +82,7 @@ to add the policy, pod_label and namespace handler and drivers with:
.. code-block:: bash
KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy
KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy
KURYR_SG_DRIVER=policy
KURYR_SUBNET_DRIVER=namespace
@ -143,9 +143,9 @@ Testing the network policy support functionality
.. code-block:: console
$ kubectl get kuryrnetpolicies
$ kubectl get kuryrnetworkpolicies
NAME AGE
np-test-network-policy 2s
test-network-policy 2s
$ kubectl get networkpolicies
NAME POD-SELECTOR AGE
@ -158,69 +158,42 @@ Testing the network policy support functionality
.. code-block:: console
$ kubectl get kuryrnetpolicy np-test-network-policy -o yaml
$ kubectl get kuryrnetworkpolicy test-network-policy -o yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
kind: KuryrNetworkPolicy
metadata:
annotations:
networkpolicy_name: test-network-policy
networkpolicy_namespace: default
networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
networkPolicyLink: <link>
clusterName: ""
creationTimestamp: 2018-10-02T11:17:02Z
generation: 0
name: np-test-network-policy
name: test-network-policy
namespace: default
resourceVersion: "2117"
selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetworkpolicies/test-network-policy
uid: afb99326-c634-11e8-b63d-002564fdd760
spec:
egressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 6297c198-b385-44f3-8b43-29951f933a8f
port_range_max: 5978
port_range_min: 5978
protocol: tcp
security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
ingressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: f4e11e73-81c6-4c1b-9760-714eedff417b
port_range_max: 6379
port_range_min: 6379
protocol: tcp
security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
status:
securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
securityGroupName: sg-test-network-policy
networkpolicy_spec:
egress:
- to:
- namespaceSelector:
matchLabels:
project: default
ports:
- port: 5978
protocol: TCP
ingress:
- from:
- namespaceSelector:
matchLabels:
project: default
ports:
- port: 6379
protocol: TCP
podSelector:
matchLabels:
project: default
policyTypes:
- Ingress
- Egress
securityGroupRules:
$ openstack security group rule list sg-test-network-policy --protocol tcp -c "IP Protocol" -c "Port Range" -c "Direction" --long
+-------------+------------+-----------+
@ -273,67 +246,41 @@ Testing the network policy support functionality
$ kubectl patch networkpolicy test-network-policy -p '{"spec":{"ingress":[{"ports":[{"port": 8080,"protocol": "TCP"}]}]}}'
networkpolicy "test-network-policy" patched
$ kubectl get knp np-test-network-policy -o yaml
$ kubectl get knp test-network-policy -o yaml
apiVersion: openstack.org/v1
kind: KuryrNetPolicy
kind: KuryrNetworkPolicy
metadata:
annotations:
networkpolicy_name: test-network-policy
networkpolicy_namespace: default
networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
networkPolicyLink: <link>
clusterName: ""
creationTimestamp: 2018-10-02T11:17:02Z
generation: 0
name: np-test-network-policy
name: test-network-policy
namespace: default
resourceVersion: "1546"
selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetworkpolicies/test-network-policy
uid: afb99326-c634-11e8-b63d-002564fdd760
spec:
egressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
id: 1969a0b3-55e1-43d7-ba16-005b4ed4cbb7
port_range_max: 5978
port_range_min: 5978
protocol: tcp
security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
ingressSgRules:
- security_group_rule:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
id: 6598aa1f-4f94-4fb2-81ce-d3649ba28f33
port_range_max: 8080
port_range_min: 8080
protocol: tcp
security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
status:
securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
networkpolicy_spec:
egress:
- ports:
- port: 5978
protocol: TCP
to:
- namespaceSelector:
matchLabels:
project: default
ingress:
- ports:
- port: 8080
protocol: TCP
from:
- namespaceSelector:
matchLabels:
project: default
podSelector:
matchLabels:
project: default
policyTypes:
- Ingress
- Egress
securityGroupRules:
$ openstack security group rule list sg-test-network-policy -c "IP Protocol" -c "Port Range" -c "Direction" --long
+-------------+------------+-----------+
@ -388,6 +335,6 @@ Testing the network policy support functionality
.. code-block:: console
$ kubectl delete -f network_policy.yml
$ kubectl get kuryrnetpolicies
$ kubectl get kuryrnetworkpolicies
$ kubectl get networkpolicies
$ openstack security group list | grep sg-test-network-policy

View File

@ -9,8 +9,6 @@ spec:
plural: kuryrnetpolicies
singular: kuryrnetpolicy
kind: KuryrNetPolicy
shortNames:
- knp
versions:
- name: v1
served: true

View File

@ -0,0 +1,158 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: kuryrnetworkpolicies.openstack.org
spec:
group: openstack.org
scope: Namespaced
names:
plural: kuryrnetworkpolicies
singular: kuryrnetworkpolicy
kind: KuryrNetworkPolicy
shortNames:
- knp
versions:
- name: v1
served: true
storage: true
additionalPrinterColumns:
- name: SG-ID
type: string
description: The ID of the SG associated to the policy
jsonPath: .status.securityGroupId
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
schema:
openAPIV3Schema:
type: object
required:
- status
- spec
properties:
spec:
type: object
required:
- egressSgRules
- ingressSgRules
- podSelector
- policyTypes
properties:
egressSgRules:
type: array
items:
type: object
required:
- sgRule
properties:
affectedPods:
type: array
items:
type: object
properties:
podIP:
type: string
podNamespace:
type: string
required:
- podIP
- podNamespace
namespace:
type: string
sgRule:
type: object
properties:
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
ingressSgRules:
type: array
items:
type: object
required:
- sgRule
properties:
affectedPods:
type: array
items:
type: object
properties:
podIP:
type: string
podNamespace:
type: string
required:
- podIP
- podNamespace
namespace:
type: string
sgRule:
type: object
properties:
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
podSelector:
x-kubernetes-preserve-unknown-fields: true
type: object
policyTypes:
type: array
items:
type: string
status:
type: object
required:
- securityGroupRules
properties:
securityGroupId:
type: string
securityGroupRules:
type: array
items:
type: object
required:
- id
properties:
id:
type: string
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
security_group_id:
type: string
podSelector:
x-kubernetes-preserve-unknown-fields: true
type: object
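As a rough illustration, the required fields declared in this schema can be checked with a small Python helper; this is a sketch, not the apiserver's structural-schema validation:

```python
def missing_required_fields(obj):
    """Return a list of required fields (per the schema above) that are
    absent from a KuryrNetworkPolicy-like dict."""
    missing = []
    spec = obj.get('spec')
    status = obj.get('status')
    if spec is None:
        missing.append('spec')
    if status is None:
        missing.append('status')
    # spec requires the rule lists, the pod selector and the policy types.
    for field in ('egressSgRules', 'ingressSgRules', 'podSelector',
                  'policyTypes'):
        if spec is not None and field not in spec:
            missing.append('spec.' + field)
    # status only requires the realized security group rules.
    if status is not None and 'securityGroupRules' not in status:
        missing.append('status.securityGroupRules')
    return missing
```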

View File

@ -23,9 +23,11 @@ K8S_API_CRD_NAMESPACES = K8S_API_CRD + '/namespaces'
K8S_API_CRD_KURYRNETS = K8S_API_CRD + '/kuryrnets'
K8S_API_CRD_KURYRNETWORKS = K8S_API_CRD + '/kuryrnetworks'
K8S_API_CRD_KURYRNETPOLICIES = K8S_API_CRD + '/kuryrnetpolicies'
K8S_API_CRD_KURYRNETWORKPOLICIES = K8S_API_CRD + '/kuryrnetworkpolicies'
K8S_API_CRD_KURYRLOADBALANCERS = K8S_API_CRD + '/kuryrloadbalancers'
K8S_API_CRD_KURYRPORTS = K8S_API_CRD + '/kuryrports'
K8S_API_POLICIES = '/apis/networking.k8s.io/v1/networkpolicies'
K8S_API_NETWORKING = '/apis/networking.k8s.io/v1'
K8S_API_NPWG_CRD = '/apis/k8s.cni.cncf.io/v1'
@ -37,6 +39,7 @@ K8S_OBJ_POLICY = 'NetworkPolicy'
K8S_OBJ_KURYRNET = 'KuryrNet'
K8S_OBJ_KURYRNETWORK = 'KuryrNetwork'
K8S_OBJ_KURYRNETPOLICY = 'KuryrNetPolicy'
K8S_OBJ_KURYRNETWORKPOLICY = 'KuryrNetworkPolicy'
K8S_OBJ_KURYRLOADBALANCER = 'KuryrLoadBalancer'
K8S_OBJ_KURYRPORT = 'KuryrPort'
@ -47,11 +50,13 @@ K8S_POD_STATUS_FAILED = 'Failed'
K8S_ANNOTATION_PREFIX = 'openstack.org/kuryr'
K8S_ANNOTATION_VIF = K8S_ANNOTATION_PREFIX + '-vif'
K8S_ANNOTATION_LABEL = K8S_ANNOTATION_PREFIX + '-pod-label'
K8S_ANNOTATION_IP = K8S_ANNOTATION_PREFIX + '-pod-ip'
K8S_ANNOTATION_NAMESPACE_LABEL = K8S_ANNOTATION_PREFIX + '-namespace-label'
K8S_ANNOTATION_LBAAS_SPEC = K8S_ANNOTATION_PREFIX + '-lbaas-spec'
K8S_ANNOTATION_LBAAS_STATE = K8S_ANNOTATION_PREFIX + '-lbaas-state'
K8S_ANNOTATION_NET_CRD = K8S_ANNOTATION_PREFIX + '-net-crd'
K8S_ANNOTATION_NETPOLICY_CRD = K8S_ANNOTATION_PREFIX + '-netpolicy-crd'
K8S_ANNOTATION_POLICY = K8S_ANNOTATION_PREFIX + '-counter'
K8S_ANNOTATION_NPWG_PREFIX = 'k8s.v1.cni.cncf.io'
K8S_ANNOTATION_NPWG_NETWORK = K8S_ANNOTATION_NPWG_PREFIX + '/networks'
@ -68,6 +73,7 @@ POD_FINALIZER = KURYR_FQDN + '/pod-finalizer'
KURYRNETWORK_FINALIZER = 'kuryrnetwork.finalizers.kuryr.openstack.org'
KURYRLB_FINALIZER = 'kuryr.openstack.org/kuryrloadbalancer-finalizers'
SERVICE_FINALIZER = 'kuryr.openstack.org/service-finalizer'
NETWORKPOLICY_FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'
KURYRPORT_FINALIZER = KURYR_FQDN + '/kuryrport-finalizer'
KURYRPORT_LABEL = KURYR_FQDN + '/nodeName'
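The NETWORKPOLICY_FINALIZER constant above supports the finalizer injection described in the commit message. A simplified, in-memory sketch of that injection (the real handler patches the object through the Kubernetes API):

```python
NETWORKPOLICY_FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'

def add_finalizer(obj):
    """Idempotently add the finalizer so Kubernetes defers actual deletion
    until the handler removes it again after processing."""
    finalizers = obj['metadata'].setdefault('finalizers', [])
    if NETWORKPOLICY_FINALIZER not in finalizers:
        finalizers.append(NETWORKPOLICY_FINALIZER)
    return obj
```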

View File

@ -697,13 +697,10 @@ class NetworkPolicyDriver(DriverBase, metaclass=abc.ABCMeta):
ALIAS = 'network_policy'
@abc.abstractmethod
def ensure_network_policy(self, policy, project_id):
def ensure_network_policy(self, policy):
"""Policy created or updated
:param policy: dict containing Kubernetes NP object
:param project_id: openstack project_id
:returns: list of Pod objects affected by the network policy
creation or its podSelector modification
"""
raise NotImplementedError()
@ -711,7 +708,7 @@ class NetworkPolicyDriver(DriverBase, metaclass=abc.ABCMeta):
def release_network_policy(self, kuryrnetpolicy):
"""Delete a network policy
:param kuryrnetpolicy: dict containing Kuryrnetpolicy CRD object
:param kuryrnetpolicy: dict containing NetworkPolicy object
"""
raise NotImplementedError()
@ -729,18 +726,6 @@ class NetworkPolicyDriver(DriverBase, metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
def knps_on_namespace(self, namespace):
"""Check if there are kuryr network policy CRDs on the namespace
This method returns true if there are knps on the specified namespace
or false otherwise
:param namespace: namespace name where the knps CRDs should be
:returns: true if knps CRDs on the namespace, false otherwise
"""
raise NotImplementedError()
@abc.abstractmethod
def namespaced_pods(self, policy):
"""Return pods on the policy namespace
@ -752,15 +737,6 @@ class NetworkPolicyDriver(DriverBase, metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
def get_kuryrnetpolicy_crd(self, policy):
"""Return kuryrnetpolicy CRD object associated to the policy
:param policy: dict containing Kubernetes NP object
:returns: kuryrnetpolicy CRD object associated to the policy
"""
raise NotImplementedError()
class NetworkPolicyProjectDriver(DriverBase, metaclass=abc.ABCMeta):
"""Get an OpenStack project id for K8s network policies"""

View File

@ -749,24 +749,41 @@ class LBaaSv2Driver(base.LBaaSDriver):
endpoints_link = utils.get_endpoints_link(service)
k8s = clients.get_kubernetes_client()
try:
endpoint = k8s.get(endpoints_link)
k8s.get(endpoints_link)
except k_exc.K8sResourceNotFound:
LOG.debug("Endpoint not Found. Skipping LB SG update for "
"%s as the LB resources are not present", lbaas_name)
return
lbaas = utils.get_lbaas_state(endpoint)
if not lbaas:
LOG.debug('Endpoint not yet annotated with lbaas state.')
try:
klb = k8s.get(f'{k_const.K8S_API_CRD_NAMESPACES}/{svc_namespace}/'
f'kuryrloadbalancers/{svc_name}')
except k_exc.K8sResourceNotFound:
LOG.debug('No KuryrLoadBalancer for service %s created yet.',
lbaas_name)
raise k_exc.ResourceNotReady(svc_name)
lbaas_obj = lbaas.loadbalancer
lbaas_obj.security_groups = sgs
if (not klb.get('status', {}).get('loadbalancer') or
klb.get('status', {}).get('listeners') is None):
LOG.debug('KuryrLoadBalancer for service %s not populated yet.',
lbaas_name)
raise k_exc.ResourceNotReady(svc_name)
utils.set_lbaas_state(endpoint, lbaas)
klb['status']['loadbalancer']['security_groups'] = sgs
lb = klb['status']['loadbalancer']
try:
k8s.patch_crd('status/loadbalancer', klb['metadata']['selfLink'],
{'security_groups': sgs})
except k_exc.K8sResourceNotFound:
LOG.debug('KuryrLoadBalancer CRD not found %s', lbaas_name)
return
except k_exc.K8sClientException:
LOG.exception('Error updating KuryrLoadBalancer CRD %s', lbaas_name)
raise
lsnr_ids = {(listener['protocol'], listener['port']): listener['id']
for listener in lbaas.listeners}
for listener in klb['status']['listeners']}
for port in svc_ports:
port_protocol = port['protocol']
@ -779,6 +796,6 @@ class LBaaSv2Driver(base.LBaaSDriver):
"%s and port %s. Skipping", port_protocol,
lbaas_port)
continue
self._apply_members_security_groups(lbaas_obj, lbaas_port,
self._apply_members_security_groups(lb, lbaas_port,
target_port, port_protocol,
sg_rule_name, listener_id, sgs)

View File

@ -38,91 +38,94 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
self.os_net = clients.get_network_client()
self.kubernetes = clients.get_kubernetes_client()
def ensure_network_policy(self, policy, project_id):
def ensure_network_policy(self, policy):
"""Create security group rules out of network policies
Triggered by events from network policies, this method ensures that
security groups and security group rules are created or updated in
reaction to kubernetes network policies events.
In addition it returns the pods affected by the policy:
- Creation: pods on the namespace of the created policy
- Update: pods that needs to be updated in case of PodSelector
modification, i.e., the pods that were affected by the previous
PodSelector
KuryrNetworkPolicy object is created with the security group rules
definitions required to represent the NetworkPolicy.
"""
LOG.debug("Creating network policy %s", policy['metadata']['name'])
if self.get_kuryrnetpolicy_crd(policy):
previous_selector = (
self.update_security_group_rules_from_network_policy(policy))
if previous_selector or previous_selector == {}:
return self.affected_pods(policy, previous_selector)
if previous_selector is None:
return self.namespaced_pods(policy)
i_rules, e_rules = self._get_security_group_rules_from_network_policy(
policy)
knp = self._get_knp_crd(policy)
if not knp:
self._create_knp_crd(policy, i_rules, e_rules)
else:
self.create_security_group_rules_from_network_policy(policy,
project_id)
self._patch_knp_crd(policy, i_rules, e_rules, knp)
def update_security_group_rules_from_network_policy(self, policy):
"""Update security group rules
def _convert_old_sg_rule(self, rule):
del rule['security_group_rule']['id']
del rule['security_group_rule']['security_group_id']
result = {
'sgRule': rule['security_group_rule'],
}
This method updates security group rules based on CRUD events gotten
from a configuration or patch to an existing network policy
if 'namespace' in rule:
result['namespace'] = rule['namespace']
if 'remote_ip_prefixes' in rule:
result['affectedPods'] = []
for ip, namespace in rule['remote_ip_prefixes']:
result['affectedPods'].append({
'podIP': ip,
'podNamespace': namespace,
})
return result
def get_from_old_crd(self, netpolicy):
name = netpolicy['metadata']['name'][3:] # Remove 'np-'
namespace = netpolicy['metadata']['namespace']
link = (f'{constants.K8S_API_NETWORKING}/namespaces/{namespace}/'
f'networkpolicies/{name}')
knp = {
'apiVersion': constants.K8S_API_CRD_VERSION,
'kind': constants.K8S_OBJ_KURYRNETWORKPOLICY,
'metadata': {
'namespace': namespace,
'name': name,
'annotations': {
'networkPolicyLink': link,
},
'finalizers': [constants.NETWORKPOLICY_FINALIZER],
},
'spec': {
'podSelector':
netpolicy['spec']['networkpolicy_spec']['podSelector'],
'egressSgRules': [self._convert_old_sg_rule(r) for r in
netpolicy['spec']['egressSgRules']],
'ingressSgRules': [self._convert_old_sg_rule(r) for r in
netpolicy['spec']['ingressSgRules']],
'policyTypes':
netpolicy['spec']['networkpolicy_spec']['policyTypes'],
},
'status': {
'podSelector': netpolicy['spec']['podSelector'],
'securityGroupId': netpolicy['spec']['securityGroupId'],
# We'll just let KuryrNetworkPolicyHandler figure out if rules
# are created on its own.
'securityGroupRules': [],
},
}
return knp
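The upgrade conversion above can be restated as a standalone sketch, assuming remote_ip_prefixes is an iterable of (ip, namespace) pairs as the loop suggests:

```python
import copy

def convert_old_sg_rule(rule):
    """Map an old KuryrNetPolicy rule entry to the new sgRule format,
    dropping the Neutron-assigned id and security_group_id fields."""
    rule = copy.deepcopy(rule)
    sg_rule = rule['security_group_rule']
    sg_rule.pop('id', None)
    sg_rule.pop('security_group_id', None)
    result = {'sgRule': sg_rule}
    if 'namespace' in rule:
        result['namespace'] = rule['namespace']
    if 'remote_ip_prefixes' in rule:
        result['affectedPods'] = [
            {'podIP': ip, 'podNamespace': ns}
            for ip, ns in rule['remote_ip_prefixes']
        ]
    return result
```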
def _get_security_group_rules_from_network_policy(self, policy):
"""Get security group rules required to represent an NP
This method creates the security group rules bodies coming out of a
network policies' parsing.
"""
crd = self.get_kuryrnetpolicy_crd(policy)
crd_name = crd['metadata']['name']
LOG.debug("Already existing CRD %s", crd_name)
sg_id = crd['spec']['securityGroupId']
# Fetch existing SG rules from kuryrnetpolicy CRD
existing_sg_rules = []
existing_i_rules = crd['spec'].get('ingressSgRules')
existing_e_rules = crd['spec'].get('egressSgRules')
if existing_i_rules or existing_e_rules:
existing_sg_rules = existing_i_rules + existing_e_rules
existing_pod_selector = crd['spec'].get('podSelector')
# Parse network policy update and get new ruleset
i_rules, e_rules = self.parse_network_policy_rules(policy, sg_id)
current_sg_rules = i_rules + e_rules
# Get existing security group rules ids
sgr_ids = [x['security_group_rule'].pop('id') for x in
existing_sg_rules]
# SG rules that are meant to be kept get their id back
sg_rules_to_keep = [existing_sg_rules.index(rule) for rule in
existing_sg_rules if rule in current_sg_rules]
for sg_rule in sg_rules_to_keep:
sgr_id = sgr_ids[sg_rule]
existing_sg_rules[sg_rule]['security_group_rule']['id'] = sgr_id
# Delete SG rules that are no longer in the updated policy
sg_rules_to_delete = [existing_sg_rules.index(rule) for rule in
existing_sg_rules if rule not in
current_sg_rules]
for sg_rule in sg_rules_to_delete:
driver_utils.delete_security_group_rule(sgr_ids[sg_rule])
# Create new rules that weren't already on the security group
sg_rules_to_add = [rule for rule in current_sg_rules if rule not in
existing_sg_rules]
for sg_rule in sg_rules_to_add:
sgr_id = driver_utils.create_security_group_rule(sg_rule)
if sg_rule['security_group_rule'].get('direction') == 'ingress':
for i_rule in i_rules:
if sg_rule == i_rule:
i_rule["security_group_rule"]["id"] = sgr_id
else:
for e_rule in e_rules:
if sg_rule == e_rule:
e_rule["security_group_rule"]["id"] = sgr_id
# Annotate kuryrnetpolicy CRD with current policy and ruleset
pod_selector = policy['spec'].get('podSelector')
driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules,
pod_selector,
np_spec=policy['spec'])
i_rules, e_rules = self.parse_network_policy_rules(policy)
# Add default rules to allow traffic from host and svc subnet
i_rules += self._get_default_np_rules()
if existing_pod_selector != pod_selector:
return existing_pod_selector
return False
return i_rules, e_rules
def _add_default_np_rules(self, sg_id):
def _get_default_np_rules(self):
"""Return extra SG rules to allow traffic from svcs and host.
This method builds the base security group rules for the NP security
@ -130,6 +133,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
- Ensure traffic is allowed from the services subnet
- Ensure traffic is allowed from the host
"""
rules = []
default_cidrs = []
if CONF.octavia_defaults.enforce_sg_rules:
default_cidrs.append(utils.get_subnet_cidr(
@ -141,27 +145,21 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
ethertype = constants.IPv4
if ipaddress.ip_network(cidr).version == constants.IP_VERSION_6:
ethertype = constants.IPv6
default_rule = {
'security_group_rule': {
rules.append({
'sgRule': {
'ethertype': ethertype,
'security_group_id': sg_id,
'direction': 'ingress',
'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'remote_ip_prefix': cidr
}}
driver_utils.create_security_group_rule(default_rule)
'remote_ip_prefix': cidr,
}})
def create_security_group_rules_from_network_policy(self, policy,
project_id):
"""Create initial security group and rules
return rules
This method creates the initial security group for hosting security
group rules coming out of network policies' parsing.
"""
sg_name = ("sg-" + policy['metadata']['namespace'] + "-" +
policy['metadata']['name'])
desc = "Kuryr-Kubernetes NetPolicy SG"
sg = None
def create_security_group(self, knp, project_id):
sg_name = ("sg-" + knp['metadata']['namespace'] + "-" +
knp['metadata']['name'])
desc = ("Kuryr-Kubernetes Network Policy %s SG" %
utils.get_res_unique_name(knp))
try:
# Create initial security group
sg = self.os_net.create_security_group(name=sg_name,
@ -176,46 +174,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
# rules just after creation.
for sgr in sg.security_group_rules:
self.os_net.delete_security_group_rule(sgr['id'])
i_rules, e_rules = self.parse_network_policy_rules(policy, sg.id)
for i_rule in i_rules:
sgr_id = driver_utils.create_security_group_rule(i_rule)
i_rule['security_group_rule']['id'] = sgr_id
for e_rule in e_rules:
sgr_id = driver_utils.create_security_group_rule(e_rule)
e_rule['security_group_rule']['id'] = sgr_id
# Add default rules to allow traffic from host and svc subnet
self._add_default_np_rules(sg.id)
except (os_exc.SDKException, exceptions.ResourceNotReady):
LOG.exception("Error creating security group for network policy "
" %s", policy['metadata']['name'])
# If there's any issue creating sg rules, remove them
if sg:
self.os_net.delete_security_group(sg.id)
" %s", knp['metadata']['name'])
raise
try:
self._add_kuryrnetpolicy_crd(policy, project_id, sg.id, i_rules,
e_rules)
except exceptions.K8sClientException:
LOG.exception("Rolling back security groups")
# Same with CRD creation
self.os_net.delete_security_group(sg.id)
raise
return sg.id
try:
crd = self.get_kuryrnetpolicy_crd(policy)
self.kubernetes.annotate(policy['metadata']['selfLink'],
{"kuryrnetpolicy_selfLink":
crd['metadata']['selfLink']})
except exceptions.K8sClientException:
LOG.exception('Error annotating network policy')
raise
def _get_pods(self, pod_selector, namespace=None,
namespace_selector=None):
def _get_pods(self, pod_selector, namespace=None, namespace_selector=None):
matching_pods = {"items": []}
if namespace_selector:
matching_namespaces = driver_utils.get_namespaces(
@ -232,7 +198,6 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not namespace_selector and namespace:
matching_namespaces.append(self.kubernetes.get(
'{}/namespaces/{}'.format(constants.K8S_API_BASE, namespace)))
else:
matching_namespaces.extend(driver_utils.get_namespaces(
namespace_selector).get('items'))
@ -285,7 +250,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
def _create_sg_rules_with_container_ports(
self, container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port, pod_selector=None,
crd_rules, direction, port, pod_selector=None,
policy_namespace=None):
cidr, ns = self._get_resource_details(resource)
for pod, container_port in container_ports:
@ -308,18 +273,18 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not allow_all and matched_pods and cidr:
for container_port, pods in matched_pods.items():
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
direction, container_port,
protocol=port.get('protocol'),
cidr=cidr, pods=pods)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
if direction == 'egress':
self._create_svc_egress_sg_rule(
sg_id, policy_namespace, crd_rules,
policy_namespace, crd_rules,
resource=resource, port=container_port,
protocol=port.get('protocol'))
def _create_sg_rule_body_on_text_port(self, sg_id, direction, port,
def _create_sg_rule_body_on_text_port(self, direction, port,
resources, crd_rules, pod_selector,
policy_namespace, allow_all=False):
"""Create SG rules when named port is used in the NP rule
@@ -352,7 +317,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
for resource in resources:
self._create_sg_rules_with_container_ports(
container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port)
crd_rules, direction, port)
elif direction == "egress":
for resource in resources:
# NOTE(maysams) Skipping objects that refers to ipblocks
@@ -364,24 +329,24 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
container_ports = driver_utils.get_ports(resource, port)
self._create_sg_rules_with_container_ports(
container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port, pod_selector,
crd_rules, direction, port, pod_selector,
policy_namespace)
if allow_all:
container_port = None
for container_port, pods in matched_pods.items():
for ethertype in (constants.IPv4, constants.IPv6):
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
direction, container_port,
protocol=port.get('protocol'),
ethertype=ethertype,
pods=pods)
crd_rules.append(sg_rule)
if direction == 'egress':
self._create_svc_egress_sg_rule(
sg_id, policy_namespace, crd_rules,
policy_namespace, crd_rules,
port=container_port, protocol=port.get('protocol'))
def _create_sg_rule_on_number_port(self, allowed_resources, sg_id,
def _create_sg_rule_on_number_port(self, allowed_resources,
direction, port, sg_rule_body_list,
policy_namespace):
for resource in allowed_resources:
@@ -393,52 +358,51 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
continue
sg_rule = (
driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'),
direction, port.get('port'),
protocol=port.get('protocol'),
cidr=cidr,
namespace=ns))
sg_rule_body_list.append(sg_rule)
if direction == 'egress':
self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list,
policy_namespace, sg_rule_body_list,
resource=resource, port=port.get('port'),
protocol=port.get('protocol'))
def _create_all_pods_sg_rules(self, port, sg_id, direction,
def _create_all_pods_sg_rules(self, port, direction,
sg_rule_body_list, pod_selector,
policy_namespace):
if type(port.get('port')) is not int:
all_pods = driver_utils.get_namespaced_pods().get('items')
self._create_sg_rule_body_on_text_port(
sg_id, direction, port, all_pods,
direction, port, all_pods,
sg_rule_body_list, pod_selector, policy_namespace,
allow_all=True)
else:
for ethertype in (constants.IPv4, constants.IPv6):
sg_rule = (
driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'),
direction, port.get('port'),
ethertype=ethertype,
protocol=port.get('protocol')))
sg_rule_body_list.append(sg_rule)
if direction == 'egress':
self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list,
policy_namespace, sg_rule_body_list,
port=port.get('port'),
protocol=port.get('protocol'))
def _create_default_sg_rule(self, sg_id, direction, sg_rule_body_list):
def _create_default_sg_rule(self, direction, sg_rule_body_list):
for ethertype in (constants.IPv4, constants.IPv6):
default_rule = {
'security_group_rule': {
'sgRule': {
'ethertype': ethertype,
'security_group_id': sg_id,
'direction': direction,
'description': 'Kuryr-Kubernetes NetPolicy SG rule',
}}
sg_rule_body_list.append(default_rule)
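The default-rule builder above no longer embeds a `security_group_id`: after this refactor, rule bodies are stored on the KuryrNetworkPolicy CRD and the security group is resolved later from its status. A minimal standalone sketch of that shape (constant values copied from the diff, function name hypothetical):

```python
# Sketch of the refactored default-rule builder: one allow-all body per
# ethertype, with no security_group_id baked in.
IPv4 = 'IPv4'
IPv6 = 'IPv6'


def create_default_sg_rules(direction):
    """Return allow-all rule bodies for both ethertypes."""
    rules = []
    for ethertype in (IPv4, IPv6):
        rules.append({
            'sgRule': {
                'ethertype': ethertype,
                'direction': direction,
                'description': 'Kuryr-Kubernetes NetPolicy SG rule',
            }})
    return rules
```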
def _parse_sg_rules(self, sg_rule_body_list, direction, policy, sg_id):
def _parse_sg_rules(self, sg_rule_body_list, direction, policy):
"""Parse policy into security group rules.
This method inspects the policy object and creates the equivalent
@@ -460,16 +424,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
# traffic as NP policy is not affecting ingress
LOG.debug('Applying default all open for ingress for '
'policy %s', policy['metadata']['selfLink'])
self._create_default_sg_rule(
sg_id, direction, sg_rule_body_list)
self._create_default_sg_rule(direction, sg_rule_body_list)
elif direction == 'egress':
if policy_types and 'Egress' not in policy_types:
# NOTE(ltomasbo): add default rule to enable all egress
# traffic as NP policy is not affecting egress
LOG.debug('Applying default all open for egress for '
'policy %s', policy['metadata']['selfLink'])
self._create_default_sg_rule(
sg_id, direction, sg_rule_body_list)
self._create_default_sg_rule(direction, sg_rule_body_list)
else:
LOG.warning('Not supported policyType at network policy %s',
policy['metadata']['selfLink'])
@@ -487,7 +449,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
policy['metadata']['selfLink'])
for ethertype in (constants.IPv4, constants.IPv6):
rule = driver_utils.create_security_group_rule_body(
sg_id, direction, ethertype=ethertype)
direction, ethertype=ethertype)
sg_rule_body_list.append(rule)
for rule_block in rule_list:
@@ -519,20 +481,20 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if allowed_resources or allow_all or selectors:
if type(port.get('port')) is not int:
self._create_sg_rule_body_on_text_port(
sg_id, direction, port, allowed_resources,
direction, port, allowed_resources,
sg_rule_body_list, pod_selector,
policy_namespace)
else:
self._create_sg_rule_on_number_port(
allowed_resources, sg_id, direction, port,
allowed_resources, direction, port,
sg_rule_body_list, policy_namespace)
if allow_all:
self._create_all_pods_sg_rules(
port, sg_id, direction, sg_rule_body_list,
port, direction, sg_rule_body_list,
pod_selector, policy_namespace)
else:
self._create_all_pods_sg_rules(
port, sg_id, direction, sg_rule_body_list,
port, direction, sg_rule_body_list,
pod_selector, policy_namespace)
elif allowed_resources or allow_all or selectors:
for resource in allowed_resources:
@@ -543,27 +505,27 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not cidr:
continue
rule = driver_utils.create_security_group_rule_body(
sg_id, direction,
direction,
port_range_min=1,
port_range_max=65535,
cidr=cidr,
namespace=namespace)
sg_rule_body_list.append(rule)
if direction == 'egress':
rule = self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list,
self._create_svc_egress_sg_rule(
policy_namespace, sg_rule_body_list,
resource=resource)
if allow_all:
for ethertype in (constants.IPv4, constants.IPv6):
rule = driver_utils.create_security_group_rule_body(
sg_id, direction,
direction,
port_range_min=1,
port_range_max=65535,
ethertype=ethertype)
sg_rule_body_list.append(rule)
if direction == 'egress':
self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list)
self._create_svc_egress_sg_rule(policy_namespace,
sg_rule_body_list)
else:
LOG.debug('This network policy specifies no %(direction)s '
'%(rule_direction)s and no ports: %(policy)s',
@@ -571,15 +533,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
'rule_direction': rule_direction,
'policy': policy['metadata']['selfLink']})
def _create_svc_egress_sg_rule(self, sg_id, policy_namespace,
sg_rule_body_list, resource=None,
port=None, protocol=None):
def _create_svc_egress_sg_rule(self, policy_namespace, sg_rule_body_list,
resource=None, port=None, protocol=None):
services = driver_utils.get_services()
if not resource:
svc_subnet = utils.get_subnet_cidr(
CONF.neutron_defaults.service_subnet)
rule = driver_utils.create_security_group_rule_body(
sg_id, 'egress', port, protocol=protocol, cidr=svc_subnet)
'egress', port, protocol=protocol, cidr=svc_subnet)
if rule not in sg_rule_body_list:
sg_rule_body_list.append(rule)
return
@@ -613,7 +574,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not cluster_ip:
continue
rule = driver_utils.create_security_group_rule_body(
sg_id, 'egress', port, protocol=protocol,
'egress', port, protocol=protocol,
cidr=cluster_ip)
if rule not in sg_rule_body_list:
sg_rule_body_list.append(rule)
@@ -626,7 +587,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
return True
return False
def parse_network_policy_rules(self, policy, sg_id):
def parse_network_policy_rules(self, policy):
"""Create security group rule bodies out of network policies.
Whenever a notification from the handler 'on-present' method is
@@ -637,10 +598,8 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
ingress_sg_rule_body_list = []
egress_sg_rule_body_list = []
self._parse_sg_rules(ingress_sg_rule_body_list, 'ingress', policy,
sg_id)
self._parse_sg_rules(egress_sg_rule_body_list, 'egress', policy,
sg_id)
self._parse_sg_rules(ingress_sg_rule_body_list, 'ingress', policy)
self._parse_sg_rules(egress_sg_rule_body_list, 'egress', policy)
return ingress_sg_rule_body_list, egress_sg_rule_body_list
@@ -657,19 +616,15 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
LOG.exception("Error deleting security group %s.", sg_id)
raise
def release_network_policy(self, netpolicy_crd):
if netpolicy_crd is not None:
self.delete_np_sg(netpolicy_crd['spec']['securityGroupId'])
self._del_kuryrnetpolicy_crd(
netpolicy_crd['metadata']['name'],
netpolicy_crd['metadata']['namespace'])
def release_network_policy(self, policy):
return self._del_knp_crd(policy)
def get_kuryrnetpolicy_crd(self, policy):
netpolicy_crd_name = "np-" + policy['metadata']['name']
def _get_knp_crd(self, policy):
netpolicy_crd_name = policy['metadata']['name']
netpolicy_crd_namespace = policy['metadata']['namespace']
try:
netpolicy_crd = self.kubernetes.get(
'{}/{}/kuryrnetpolicies/{}'.format(
'{}/{}/kuryrnetworkpolicies/{}'.format(
constants.K8S_API_CRD_NAMESPACES, netpolicy_crd_namespace,
netpolicy_crd_name))
except exceptions.K8sResourceNotFound:
@@ -679,77 +634,81 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
raise
return netpolicy_crd
def knps_on_namespace(self, namespace):
try:
netpolicy_crds = self.kubernetes.get(
'{}/{}/kuryrnetpolicies'.format(
constants.K8S_API_CRD_NAMESPACES,
namespace))
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception.")
raise
if netpolicy_crds.get('items'):
return True
return False
def _add_kuryrnetpolicy_crd(self, policy, project_id, sg_id, i_rules,
e_rules):
def _create_knp_crd(self, policy, i_rules, e_rules):
networkpolicy_name = policy['metadata']['name']
netpolicy_crd_name = "np-" + networkpolicy_name
namespace = policy['metadata']['namespace']
pod_selector = policy['spec'].get('podSelector')
policy_types = policy['spec'].get('policyTypes', [])
netpolicy_crd = {
'apiVersion': 'openstack.org/v1',
'kind': constants.K8S_OBJ_KURYRNETPOLICY,
'kind': constants.K8S_OBJ_KURYRNETWORKPOLICY,
'metadata': {
'name': netpolicy_crd_name,
'name': networkpolicy_name,
'namespace': namespace,
'annotations': {
'networkpolicy_name': networkpolicy_name,
'networkpolicy_namespace': namespace,
'networkpolicy_uid': policy['metadata']['uid'],
'networkPolicyLink': policy['metadata']['selfLink'],
},
'finalizers': [constants.NETWORKPOLICY_FINALIZER],
},
'spec': {
'securityGroupName': "sg-" + networkpolicy_name,
'securityGroupId': sg_id,
'ingressSgRules': i_rules,
'egressSgRules': e_rules,
'podSelector': pod_selector,
'networkpolicy_spec': policy['spec']
'policyTypes': policy_types,
},
'status': {
'securityGroupRules': [],
},
}
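The CRD body above separates desired state (`spec`, written by NetworkPolicyHandler) from realized state (`status`, written by KuryrNetworkPolicyHandler). A self-contained sketch of building such an object from a NetworkPolicy dict — the helper name and the finalizer string are assumptions for illustration, the field layout follows the diff:

```python
# Assumed finalizer value; the real one lives in kuryr_kubernetes.constants.
NETWORKPOLICY_FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'


def build_knp_crd(policy, i_rules, e_rules):
    """Build a KuryrNetworkPolicy body: spec holds the parsed rules,
    status starts empty and is filled in by the KNP handler."""
    return {
        'apiVersion': 'openstack.org/v1',
        'kind': 'KuryrNetworkPolicy',
        'metadata': {
            'name': policy['metadata']['name'],
            'namespace': policy['metadata']['namespace'],
            'annotations': {
                'networkPolicyLink': policy['metadata']['selfLink'],
            },
            'finalizers': [NETWORKPOLICY_FINALIZER],
        },
        'spec': {
            'ingressSgRules': i_rules,
            'egressSgRules': e_rules,
            'podSelector': policy['spec'].get('podSelector'),
            'policyTypes': policy['spec'].get('policyTypes', []),
        },
        'status': {
            'securityGroupRules': [],
        },
    }
```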
try:
LOG.debug("Creating KuryrNetPolicy CRD %s" % netpolicy_crd)
kubernetes_post = '{}/{}/kuryrnetpolicies'.format(
LOG.debug("Creating KuryrNetworkPolicy CRD %s" % netpolicy_crd)
url = '{}/{}/kuryrnetworkpolicies'.format(
constants.K8S_API_CRD_NAMESPACES,
namespace)
self.kubernetes.post(kubernetes_post, netpolicy_crd)
netpolicy_crd = self.kubernetes.post(url, netpolicy_crd)
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception creating kuryrnetpolicy"
" CRD. %s" % exceptions.K8sClientException)
LOG.exception("Kubernetes Client Exception creating "
"KuryrNetworkPolicy CRD.")
raise
return netpolicy_crd
def _del_kuryrnetpolicy_crd(self, netpolicy_crd_name,
netpolicy_crd_namespace):
def _patch_knp_crd(self, policy, i_rules, e_rules, knp):
networkpolicy_name = policy['metadata']['name']
namespace = policy['metadata']['namespace']
pod_selector = policy['spec'].get('podSelector')
url = (f'{constants.K8S_API_CRD_NAMESPACES}/{namespace}'
f'/kuryrnetworkpolicies/{networkpolicy_name}')
# FIXME(dulek): Rules should be hashable objects, not dict so that
# we could compare them easily here.
data = {
'ingressSgRules': i_rules,
'egressSgRules': e_rules,
}
if knp['spec'].get('podSelector') != pod_selector:
data['podSelector'] = pod_selector
self.kubernetes.patch_crd('spec', url, data)
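Since only `spec` is owned by this driver, the patch body carries just the recalculated rules, and `podSelector` only when it actually changed. A minimal sketch of that patch construction (helper name is hypothetical):

```python
def build_spec_patch(knp, pod_selector, i_rules, e_rules):
    """Return the partial 'spec' document to PATCH onto the CRD.

    Rules are always refreshed; podSelector is included only when it
    differs from what the CRD already holds, keeping the patch minimal.
    """
    data = {
        'ingressSgRules': i_rules,
        'egressSgRules': e_rules,
    }
    if knp['spec'].get('podSelector') != pod_selector:
        data['podSelector'] = pod_selector
    return data
```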
def _del_knp_crd(self, policy):
try:
LOG.debug("Deleting KuryrNetPolicy CRD %s" % netpolicy_crd_name)
self.kubernetes.delete('{}/{}/kuryrnetpolicies/{}'.format(
constants.K8S_API_CRD_NAMESPACES,
netpolicy_crd_namespace,
netpolicy_crd_name))
ns = policy['metadata']['namespace']
name = policy['metadata']['name']
LOG.debug("Deleting KuryrNetworkPolicy CRD %s" % name)
self.kubernetes.delete('{}/{}/kuryrnetworkpolicies/{}'.format(
constants.K8S_API_CRD_NAMESPACES, ns, name))
return True
except exceptions.K8sResourceNotFound:
LOG.debug("KuryrNetPolicy CRD Object not found: %s",
netpolicy_crd_name)
LOG.debug("KuryrNetworkPolicy CRD Object not found: %s", name)
return False
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception deleting kuryrnetpolicy"
" CRD.")
LOG.exception("Kubernetes Client Exception deleting "
"KuryrNetworkPolicy CRD %s." % name)
raise
def affected_pods(self, policy, selector=None):
if selector or selector == {}:
if selector is not None:
pod_selector = selector
else:
pod_selector = policy['spec'].get('podSelector')

View File

@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
from oslo_config import cfg
from oslo_log import log as logging
@@ -21,6 +23,7 @@ from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base
from kuryr_kubernetes.controller.drivers import utils as driver_utils
from kuryr_kubernetes import exceptions
from kuryr_kubernetes import utils
LOG = logging.getLogger(__name__)
@@ -29,9 +32,7 @@ def _get_namespace_labels(namespace):
kubernetes = clients.get_kubernetes_client()
try:
path = '{}/{}'.format(
constants.K8S_API_NAMESPACES, namespace)
LOG.debug("K8s API Query %s", path)
path = '{}/{}'.format(constants.K8S_API_NAMESPACES, namespace)
namespaces = kubernetes.get(path)
LOG.debug("Return Namespace: %s", namespaces)
except exceptions.K8sResourceNotFound:
@@ -43,107 +44,41 @@ def _get_namespace_labels(namespace):
return namespaces['metadata'].get('labels')
def _create_sg_rule(sg_id, direction, cidr, port=None, namespace=None):
if port:
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'),
protocol=port.get('protocol'), cidr=cidr, namespace=namespace)
else:
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, port_range_min=1,
port_range_max=65535, cidr=cidr, namespace=namespace)
def _bump_networkpolicy(knp):
kubernetes = clients.get_kubernetes_client()
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['security_group_rule']['id'] = sgr_id
return sg_rule
try:
kubernetes.annotate(
knp['metadata']['annotations']['networkPolicyLink'],
{constants.K8S_ANNOTATION_POLICY: str(uuid.uuid4())})
except exceptions.K8sResourceNotFound:
LOG.exception("NetworkPolicy not found")
raise
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception")
raise
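The "bump" above is the new trigger mechanism: writing a throwaway UUID annotation onto the parent NetworkPolicy makes the API server emit a MODIFIED event, which re-runs the policy handler and recalculates SG rules. A standalone sketch with a fake client (the annotation key value is an assumption; the real constant is in `kuryr_kubernetes.constants`):

```python
import uuid

# Assumed annotation key used purely to force a watch event.
K8S_ANNOTATION_POLICY = 'openstack.org/kuryr-counter'


def bump_networkpolicy(k8s_client, knp):
    """Annotate the parent NetworkPolicy with a fresh UUID so its
    handler gets re-triggered and recalculates SG rules."""
    link = knp['metadata']['annotations']['networkPolicyLink']
    k8s_client.annotate(link, {K8S_ANNOTATION_POLICY: str(uuid.uuid4())})


class FakeK8s:
    """Tiny stand-in for the Kubernetes client, recording calls."""
    def __init__(self):
        self.calls = []

    def annotate(self, path, annotations):
        self.calls.append((path, annotations))
```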
def _get_crd_rule(crd_rules, container_port):
"""Returns a CRD rule that matches a container port
def _create_sg_rules_with_container_ports(container_ports, matched):
"""Checks if security group rules based on container ports will be updated
Retrieves the CRD rule that contains a given port in
the range of the rule ports.
"""
for crd_rule in crd_rules:
remote_ip_prefixes = crd_rule.get('remote_ip_prefixes')
min_port = crd_rule['security_group_rule'].get('port_range_min')
max_port = crd_rule['security_group_rule'].get('port_range_max')
if (remote_ip_prefixes and
min_port <= container_port <= max_port):
return crd_rule
def _create_sg_rules_with_container_ports(matched_pods, container_ports,
allow_all, namespace, matched,
crd_rules, sg_id, direction,
port, rule_selected_pod):
"""Create security group rules based on container ports
If it's an allow from/to everywhere rule or a rule with a
NamespaceSelector, updates a sg rule that might already exist
and match the named port or creates a new one with the
remote_ip_prefixes field containing the matched pod info.
Otherwise, creates rules for each container port without
a remote_ip_prefixes field.
param matched_pods: List of dicts where the key is a container
port and value is the pods that have the port
param container_ports: List of tuples with pods and port values
param allow_all: True if it's an allow from/to everywhere rule,
False otherwise.
param namespace: Namespace name
param matched: If a sg rule was created for the NP rule
param crd_rules: List of sg rules to update when patching the CRD
param sg_id: ID of the security group
param direction: String representing rule direction, ingress or egress
param port: Dict containing port and protocol
param rule_selected_pod: K8s Pod object selected by the rules selectors
return: True if a sg rule was created, False otherwise.
return: True if a sg rule needs to be created, False otherwise.
"""
for pod, container_port in container_ports:
pod_namespace = pod['metadata']['namespace']
pod_ip = driver_utils.get_pod_ip(pod)
if not pod_ip:
LOG.debug("Skipping SG rule creation for pod %s due to "
"no IP assigned", pod['metadata']['name'])
continue
pod_info = {pod_ip: pod_namespace}
matched = True
if allow_all or namespace:
crd_rule = _get_crd_rule(crd_rules, container_port)
if crd_rule:
crd_rule['remote_ip_prefixes'].update(pod_info)
else:
if container_port in matched_pods:
matched_pods[container_port].update(pod_info)
else:
matched_pods[container_port] = pod_info
else:
pod_ip = driver_utils.get_pod_ip(rule_selected_pod)
if not pod_ip:
LOG.debug("Skipping SG rule creation for pod %s due to no IP "
"assigned", rule_selected_pod['metadata']['name'])
continue
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
protocol=port.get('protocol'),
cidr=pod_ip, pods=pod_info)
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['security_group_rule']['id'] = sgr_id
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
return matched
return matched
return False
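After the refactor this helper no longer builds Neutron rules itself; it only reports whether any selected pod is actionable (has an IP), which decides whether the NetworkPolicy gets bumped. A sketch of that reduced check (`get_pod_ip` here is a simplified stand-in for the driver_utils helper):

```python
def get_pod_ip(pod):
    # Simplified stand-in: read the pod IP from status, if assigned.
    return pod.get('status', {}).get('podIP')


def rules_would_match(container_ports):
    """True if any (pod, container_port) pair has a pod with an IP,
    i.e. an SG rule update is warranted."""
    for pod, _container_port in container_ports:
        if get_pod_ip(pod):
            return True
    return False
```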
def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods,
crd_rules, matched, crd,
allow_all=False, namespace=None):
matched_pods = {}
def _create_sg_rule_on_text_port(direction, port, rule_selected_pods, matched,
crd):
spec_pod_selector = crd['spec'].get('podSelector')
policy_namespace = crd['metadata']['namespace']
spec_pods = driver_utils.get_pods(
@@ -151,11 +86,8 @@ def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods,
if direction == 'ingress':
for spec_pod in spec_pods:
container_ports = driver_utils.get_ports(spec_pod, port)
for rule_selected_pod in rule_selected_pods:
matched = _create_sg_rules_with_container_ports(
matched_pods, container_ports, allow_all, namespace,
matched, crd_rules, sg_id, direction, port,
rule_selected_pod)
matched = _create_sg_rules_with_container_ports(
container_ports, matched)
elif direction == 'egress':
for rule_selected_pod in rule_selected_pods:
pod_label = rule_selected_pod['metadata'].get('labels')
@@ -168,51 +100,11 @@ def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods,
container_ports = driver_utils.get_ports(
rule_selected_pod, port)
matched = _create_sg_rules_with_container_ports(
matched_pods, container_ports, allow_all,
namespace, matched, crd_rules, sg_id, direction,
port, rule_selected_pod)
_apply_sg_rules_on_matched_pods(matched_pods, sg_id, direction, namespace,
port, crd_rules, allow_all)
container_ports, matched)
return matched
def _apply_sg_rules_on_matched_pods(matched_pods, sg_id, direction, namespace,
port, crd_rules, allow_all=False):
for container_port, pods in matched_pods.items():
if allow_all:
for ethertype in (constants.IPv4, constants.IPv6):
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
protocol=port.get('protocol'),
ethertype=ethertype,
pods=pods)
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['security_group_rule']['id'] = sgr_id
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
else:
namespace_obj = driver_utils.get_namespace(namespace)
if not namespace_obj:
LOG.debug("Skipping SG rule creation. Inexistent"
" namespace.")
continue
namespace_cidr = driver_utils.get_namespace_subnet_cidr(
namespace_obj)
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
protocol=port.get('protocol'), cidr=namespace_cidr,
pods=pods)
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['security_group_rule']['id'] = sgr_id
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
def _create_sg_rules(crd, pod, pod_selector, rule_block,
crd_rules, direction, matched, namespace=None,
allow_all=False):
def _create_sg_rules(crd, pod, pod_selector, rule_block, direction, matched):
pod_labels = pod['metadata'].get('labels')
pod_ip = driver_utils.get_pod_ip(pod)
if not pod_ip:
@@ -224,73 +116,52 @@ def _create_sg_rules(crd, pod, pod_selector, rule_block,
# with empty value or with '{}', as they have same result in here.
if pod_selector:
if driver_utils.match_selector(pod_selector, pod_labels):
sg_id = crd['spec']['securityGroupId']
if 'ports' in rule_block:
for port in rule_block['ports']:
if type(port.get('port')) is not int:
matched = _create_sg_rule_on_text_port(
sg_id, direction, port, [pod],
crd_rules, matched, crd)
direction, port, [pod], matched, crd)
else:
matched = True
sg_rule = _create_sg_rule(
sg_id, direction, cidr=pod_ip, port=port,
namespace=namespace)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
else:
matched = True
sg_rule = _create_sg_rule(
sg_id, direction, cidr=pod_ip, namespace=namespace)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
else:
# NOTE (maysams) When a policy with namespaceSelector and text port
# is applied the port on the pods needs to be retrieved.
sg_id = crd['spec']['securityGroupId']
if 'ports' in rule_block:
for port in rule_block['ports']:
if type(port.get('port')) is not int:
matched = (
_create_sg_rule_on_text_port(
sg_id, direction, port, [pod],
crd_rules, matched, crd,
allow_all=allow_all, namespace=namespace))
matched = _create_sg_rule_on_text_port(
direction, port, [pod], matched, crd)
return matched
def _parse_selectors_on_pod(crd, pod, pod_selector, namespace_selector,
rule_block, crd_rules, direction, matched):
rule_block, direction, matched):
pod_namespace = pod['metadata']['namespace']
pod_namespace_labels = _get_namespace_labels(pod_namespace)
policy_namespace = crd['metadata']['namespace']
if namespace_selector == {}:
matched = _create_sg_rules(crd, pod, pod_selector, rule_block,
crd_rules, direction, matched,
allow_all=True)
direction, matched)
elif namespace_selector:
if (pod_namespace_labels and
driver_utils.match_selector(namespace_selector,
pod_namespace_labels)):
matched = _create_sg_rules(crd, pod, pod_selector,
rule_block, crd_rules,
direction, matched,
namespace=pod_namespace)
rule_block, direction, matched)
else:
if pod_namespace == policy_namespace:
matched = _create_sg_rules(crd, pod, pod_selector, rule_block,
crd_rules, direction, matched,
namespace=pod_namespace)
return matched, crd_rules
direction, matched)
return matched
def _parse_selectors_on_namespace(crd, direction, pod_selector,
ns_selector, rule_block, crd_rules,
namespace, matched):
ns_selector, rule_block, namespace, matched):
ns_name = namespace['metadata'].get('name')
ns_labels = namespace['metadata'].get('labels')
sg_id = crd['spec']['securityGroupId']
if (ns_selector and ns_labels and
driver_utils.match_selector(ns_selector, ns_labels)):
@@ -301,10 +172,8 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
if type(port.get('port')) is not int:
matched = (
_create_sg_rule_on_text_port(
sg_id, direction, port, pods,
crd_rules, matched, crd))
direction, port, pods, matched, crd))
else:
matched = True
for pod in pods:
pod_ip = driver_utils.get_pod_ip(pod)
if not pod_ip:
@@ -312,11 +181,7 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
LOG.debug("Skipping SG rule creation for pod "
"%s due to no IP assigned", pod_name)
continue
sg_rule = _create_sg_rule(
sg_id, direction, pod_ip, port=port,
namespace=ns_name)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
matched = True
else:
for pod in pods:
pod_ip = driver_utils.get_pod_ip(pod)
@@ -326,45 +191,25 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
" to no IP assigned", pod_name)
continue
matched = True
sg_rule = _create_sg_rule(
sg_id, direction, pod_ip,
namespace=ns_name)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
else:
ns_pods = driver_utils.get_pods(ns_selector)['items']
ns_cidr = driver_utils.get_namespace_subnet_cidr(namespace)
if 'ports' in rule_block:
for port in rule_block['ports']:
if type(port.get('port')) is not int:
matched = (
_create_sg_rule_on_text_port(
sg_id, direction, port, ns_pods,
crd_rules, matched, crd))
direction, port, ns_pods, matched, crd))
else:
matched = True
sg_rule = _create_sg_rule(
sg_id, direction, ns_cidr,
port=port, namespace=ns_name)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
else:
matched = True
sg_rule = _create_sg_rule(
sg_id, direction, ns_cidr,
namespace=ns_name)
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
return matched, crd_rules
return matched
def _parse_rules(direction, crd, pod=None, namespace=None):
policy = crd['spec']['networkpolicy_spec']
def _parse_rules(direction, crd, policy, pod=None, namespace=None):
rule_direction = 'from'
crd_rules = crd['spec'].get('ingressSgRules')
if direction == 'egress':
rule_direction = 'to'
crd_rules = crd['spec'].get('egressSgRules')
matched = False
rule_list = policy.get(direction, [])
@@ -373,13 +218,13 @@ def _parse_rules(direction, crd, pod=None, namespace=None):
namespace_selector = rule.get('namespaceSelector')
pod_selector = rule.get('podSelector')
if pod:
matched, crd_rules = _parse_selectors_on_pod(
matched = _parse_selectors_on_pod(
crd, pod, pod_selector, namespace_selector,
rule_block, crd_rules, direction, matched)
rule_block, direction, matched)
elif namespace:
matched, crd_rules = _parse_selectors_on_namespace(
matched = _parse_selectors_on_namespace(
crd, direction, pod_selector, namespace_selector,
rule_block, crd_rules, namespace, matched)
rule_block, namespace, matched)
# NOTE(maysams): Cover the case of a network policy that allows
# from everywhere on a named port, e.g., when there is no 'from'
@@ -387,84 +232,62 @@ def _parse_rules(direction, crd, pod=None, namespace=None):
if pod and not matched:
for port in rule_block.get('ports', []):
if type(port.get('port')) is not int:
sg_id = crd['spec']['securityGroupId']
if (not rule_block.get(rule_direction, [])
or direction == "ingress"):
matched = (_create_sg_rule_on_text_port(
sg_id, direction, port, [pod],
crd_rules, matched, crd,
allow_all=True))
return matched, crd_rules
matched = _create_sg_rule_on_text_port(
direction, port, [pod], matched, crd)
return matched
def _parse_rules_on_delete_namespace(rule_list, direction, ns_name):
matched = False
rules = []
for rule in rule_list:
LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction,
'r': rule})
LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction, 'r': rule})
rule_namespace = rule.get('namespace', None)
remote_ip_prefixes = rule.get('remote_ip_prefixes', {})
affectedPods = rule.get('affectedPods', [])
if rule_namespace and rule_namespace == ns_name:
matched = True
driver_utils.delete_security_group_rule(
rule['security_group_rule']['id'])
elif remote_ip_prefixes:
for remote_ip, namespace in list(remote_ip_prefixes.items()):
if namespace == ns_name:
matched = True
remote_ip_prefixes.pop(remote_ip)
if remote_ip_prefixes:
rule['remote_ip_prefixes'] = remote_ip_prefixes
rules.append(rule)
else:
rules.append(rule)
return matched, rules
return True
elif affectedPods:
for pod_info in affectedPods:
if pod_info['podNamespace'] == ns_name:
return True
return False
def _parse_rules_on_delete_pod(rule_list, direction, pod_ip):
matched = False
rules = []
for rule in rule_list:
LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction,
'r': rule})
remote_ip_prefix = rule['security_group_rule'].get(
'remote_ip_prefix')
remote_ip_prefixes = rule.get('remote_ip_prefixes', {})
LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction, 'r': rule})
remote_ip_prefix = rule['sgRule'].get('remote_ip_prefix')
affectedPods = rule.get('affectedPods', [])
if remote_ip_prefix and remote_ip_prefix == pod_ip:
matched = True
driver_utils.delete_security_group_rule(
rule['security_group_rule']['id'])
elif remote_ip_prefixes:
if pod_ip in remote_ip_prefixes:
matched = True
remote_ip_prefixes.pop(pod_ip)
if remote_ip_prefixes:
rule['remote_ip_prefixes'] = remote_ip_prefixes
rules.append(rule)
else:
rules.append(rule)
return matched, rules
return True
elif affectedPods:
for pod_info in affectedPods:
if pod_info['podIP'] == pod_ip:
return True
return False
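The deletion parsers now only detect whether the stored CRD rules reference the removed pod, either directly via `sgRule.remote_ip_prefix` or through `affectedPods`; the actual rule recalculation happens in KuryrNetworkPolicyHandler after the bump. A condensed sketch of that match-only logic:

```python
def rules_reference_pod(rule_list, pod_ip):
    """True if any CRD rule targets pod_ip, either as the rule's
    remote_ip_prefix or via its affectedPods bookkeeping."""
    for rule in rule_list:
        if rule['sgRule'].get('remote_ip_prefix') == pod_ip:
            return True
        if any(p.get('podIP') == pod_ip
               for p in rule.get('affectedPods', [])):
            return True
    return False
```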
def _get_pod_sgs(pod, project_id):
def _get_pod_sgs(pod):
sg_list = []
pod_labels = pod['metadata'].get('labels')
pod_namespace = pod['metadata']['namespace']
knp_crds = driver_utils.get_kuryrnetpolicy_crds(
knp_crds = driver_utils.get_kuryrnetworkpolicy_crds(
namespace=pod_namespace)
for crd in knp_crds.get('items'):
for crd in knp_crds:
pod_selector = crd['spec'].get('podSelector')
if pod_selector:
if driver_utils.match_selector(pod_selector, pod_labels):
LOG.debug("Appending %s",
str(crd['spec']['securityGroupId']))
sg_list.append(str(crd['spec']['securityGroupId']))
else:
LOG.debug("Appending %s", str(crd['spec']['securityGroupId']))
sg_list.append(str(crd['spec']['securityGroupId']))
if driver_utils.match_selector(pod_selector, pod_labels):
sg_id = crd['status'].get('securityGroupId')
if not sg_id:
# NOTE(dulek): We could just assume KNP handler will apply it,
# but it's possible that when it gets this pod it
# will have no IP yet and will be skipped.
LOG.warning('SG for NP %s not created yet, will retry.',
utils.get_res_unique_name(crd))
raise exceptions.ResourceNotReady(pod)
LOG.debug("Appending %s", crd['status']['securityGroupId'])
sg_list.append(crd['status']['securityGroupId'])
# NOTE(maysams) Pods that are not selected by any Networkpolicy
# are fully accessible. Thus, the default security group is associated.
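The security group ID now lives in the CRD's `status`; if the KNP handler has not populated it yet, the pod event is failed with `ResourceNotReady` so it gets retried. A self-contained sketch of that flow, with simplified stand-ins for the exception and selector matching:

```python
class ResourceNotReady(Exception):
    """Stand-in for kuryr_kubernetes.exceptions.ResourceNotReady."""


def match_selector(selector, labels):
    # Simplified: only matchLabels equality, enough for this sketch.
    wanted = selector.get('matchLabels', {})
    return all((labels or {}).get(k) == v for k, v in wanted.items())


def get_pod_sgs(pod, knp_crds, default_sgs):
    """Collect SG IDs from the status of every matching KNP CRD,
    retrying later if a matching CRD's SG is not created yet."""
    sg_list = []
    for crd in knp_crds:
        selector = crd['spec'].get('podSelector')
        if selector and not match_selector(selector,
                                           pod['metadata'].get('labels')):
            continue
        sg_id = crd['status'].get('securityGroupId')
        if not sg_id:
            # Status not filled in yet -- fail so the event is retried.
            raise ResourceNotReady(pod)
        sg_list.append(sg_id)
    # Pods matched by no policy fall back to the default security groups.
    return sg_list or list(default_sgs)
```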
@@ -481,55 +304,56 @@ class NetworkPolicySecurityGroupsDriver(base.PodSecurityGroupsDriver):
"""Provides security groups for pods based on network policies"""
def get_security_groups(self, pod, project_id):
return _get_pod_sgs(pod, project_id)
return _get_pod_sgs(pod)
def create_sg_rules(self, pod):
LOG.debug("Creating sg rule for pod: %s", pod['metadata']['name'])
LOG.debug("Creating SG rules for pod: %s", pod['metadata']['name'])
crd_pod_selectors = []
knp_crds = driver_utils.get_kuryrnetpolicy_crds()
for crd in knp_crds.get('items'):
crd_selector = crd['spec'].get('podSelector')
knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
nps = driver_utils.get_networkpolicies()
pairs = driver_utils.zip_knp_np(knp_crds, nps)
i_matched, i_rules = _parse_rules('ingress', crd, pod=pod)
e_matched, e_rules = _parse_rules('egress', crd, pod=pod)
for crd, policy in pairs:
crd_selector = crd['spec'].get('podSelector')
spec = policy.get('spec')
i_matched = _parse_rules('ingress', crd, spec, pod=pod)
e_matched = _parse_rules('egress', crd, spec, pod=pod)
if i_matched or e_matched:
driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
e_rules,
crd_selector)
_bump_networkpolicy(crd)
if i_matched:
crd_pod_selectors.append(crd_selector)
return crd_pod_selectors
def delete_sg_rules(self, pod):
LOG.debug("Deleting sg rule for pod: %s", pod['metadata']['name'])
LOG.debug("Deleting SG rules for pod: %s", pod['metadata']['name'])
pod_ip = driver_utils.get_pod_ip(pod)
crd_pod_selectors = []
if not pod_ip:
LOG.debug("Skipping SG rule deletion as pod %s has no IP assigned",
pod['metadata']['name'])
return None
crd_pod_selectors = []
knp_crds = driver_utils.get_kuryrnetpolicy_crds()
for crd in knp_crds.get('items'):
return crd_pod_selectors
knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
for crd in knp_crds:
crd_selector = crd['spec'].get('podSelector')
ingress_rule_list = crd['spec'].get('ingressSgRules')
egress_rule_list = crd['spec'].get('egressSgRules')
i_matched, i_rules = _parse_rules_on_delete_pod(
i_matched = _parse_rules_on_delete_pod(
ingress_rule_list, "ingress", pod_ip)
e_matched, e_rules = _parse_rules_on_delete_pod(
e_matched = _parse_rules_on_delete_pod(
egress_rule_list, "egress", pod_ip)
if i_matched or e_matched:
driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
e_rules,
crd_selector)
_bump_networkpolicy(crd)
if i_matched:
crd_pod_selectors.append(crd_selector)
return crd_pod_selectors
def update_sg_rules(self, pod):
LOG.debug("Updating sg rule for pod: %s", pod['metadata']['name'])
LOG.debug("Updating SG rules for pod: %s", pod['metadata']['name'])
# FIXME(dulek): No need to bump twice.
crd_pod_selectors = []
crd_pod_selectors.extend(self.delete_sg_rules(pod))
crd_pod_selectors.extend(self.create_sg_rules(pod))
@ -537,51 +361,47 @@ class NetworkPolicySecurityGroupsDriver(base.PodSecurityGroupsDriver):
def delete_namespace_sg_rules(self, namespace):
ns_name = namespace['metadata']['name']
LOG.debug("Deleting sg rule for namespace: %s",
ns_name)
LOG.debug("Deleting SG rules for namespace: %s", ns_name)
crd_selectors = []
knp_crds = driver_utils.get_kuryrnetpolicy_crds()
for crd in knp_crds.get('items'):
knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
for crd in knp_crds:
crd_selector = crd['spec'].get('podSelector')
ingress_rule_list = crd['spec'].get('ingressSgRules')
egress_rule_list = crd['spec'].get('egressSgRules')
i_matched, i_rules = _parse_rules_on_delete_namespace(
i_matched = _parse_rules_on_delete_namespace(
ingress_rule_list, "ingress", ns_name)
e_matched, e_rules = _parse_rules_on_delete_namespace(
e_matched = _parse_rules_on_delete_namespace(
egress_rule_list, "egress", ns_name)
if i_matched or e_matched:
driver_utils.patch_kuryrnetworkpolicy_crd(
crd, i_rules, e_rules, crd_selector)
_bump_networkpolicy(crd)
if i_matched:
crd_selectors.append(crd_selector)
return crd_selectors
def create_namespace_sg_rules(self, namespace):
ns_name = namespace['metadata']['name']
LOG.debug("Creating sg rule for namespace: %s", ns_name)
LOG.debug("Creating SG rules for namespace: %s", ns_name)
crd_selectors = []
knp_crds = driver_utils.get_kuryrnetpolicy_crds()
for crd in knp_crds.get('items'):
knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
nps = driver_utils.get_networkpolicies()
pairs = driver_utils.zip_knp_np(knp_crds, nps)
for crd, policy in pairs:
crd_selector = crd['spec'].get('podSelector')
i_matched, i_rules = _parse_rules(
'ingress', crd, namespace=namespace)
e_matched, e_rules = _parse_rules(
'egress', crd, namespace=namespace)
spec = policy.get('spec')
i_matched = _parse_rules('ingress', crd, spec, namespace=namespace)
e_matched = _parse_rules('egress', crd, spec, namespace=namespace)
if i_matched or e_matched:
driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
e_rules,
crd_selector)
_bump_networkpolicy(crd)
if i_matched:
crd_selectors.append(crd_selector)
return crd_selectors
def update_namespace_sg_rules(self, namespace):
LOG.debug("Updating sg rule for namespace: %s",
LOG.debug("Updating SG rules for namespace: %s",
namespace['metadata']['name'])
crd_selectors = []
crd_selectors.extend(self.delete_namespace_sg_rules(namespace))
@ -608,5 +428,5 @@ class NetworkPolicyServiceSecurityGroupsDriver(
# all of them. Hence only considering the security groups applied
# to the first one.
if pods:
return _get_pod_sgs(pods[0], project_id)
return _get_pod_sgs(pods[0])
return sg_list[:]

View File

@ -25,6 +25,7 @@ from oslo_serialization import jsonutils
from kuryr_kubernetes import clients
from kuryr_kubernetes import constants
from kuryr_kubernetes import exceptions as k_exc
from kuryr_kubernetes import utils
OPERATORS_WITH_VALUES = [constants.K8S_OPERATOR_IN,
@ -182,10 +183,8 @@ def replace_encoded_characters(labels):
def create_security_group_rule(body):
os_net = clients.get_network_client()
sgr = ''
try:
params = dict(body['security_group_rule'])
params = dict(body)
if 'ethertype' in params:
# NOTE(gryf): in openstacksdk, there is ether_type attribute in
# the security_group_rule object, in CRD we have 'ethertype'
@ -220,29 +219,27 @@ def delete_security_group_rule(security_group_rule_id):
raise
def patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules, pod_selector,
np_spec=None):
def patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules):
kubernetes = clients.get_kubernetes_client()
crd_name = crd['metadata']['name']
if not np_spec:
np_spec = crd['spec']['networkpolicy_spec']
LOG.debug('Patching KuryrNetPolicy CRD %s' % crd_name)
LOG.debug('Patching KuryrNetworkPolicy CRD %s' % crd_name)
try:
kubernetes.patch_crd('spec', crd['metadata']['selfLink'],
{'ingressSgRules': i_rules,
'egressSgRules': e_rules,
'podSelector': pod_selector,
'networkpolicy_spec': np_spec})
spec = {
'ingressSgRules': i_rules,
'egressSgRules': e_rules,
}
kubernetes.patch_crd('spec', crd['metadata']['selfLink'], spec)
except k_exc.K8sResourceNotFound:
LOG.debug('KuryrNetPolicy CRD not found %s', crd_name)
LOG.debug('KuryrNetworkPolicy CRD not found %s', crd_name)
except k_exc.K8sClientException:
LOG.exception('Error updating kuryrnetpolicy CRD %s', crd_name)
LOG.exception('Error updating KuryrNetworkPolicy CRD %s', crd_name)
raise
def create_security_group_rule_body(
security_group_id, direction, port_range_min=None,
port_range_max=None, protocol=None, ethertype=None, cidr=None,
direction, port_range_min=None, port_range_max=None, protocol=None,
ethertype='IPv4', cidr=None,
description="Kuryr-Kubernetes NetPolicy SG rule", namespace=None,
pods=None):
if not port_range_min:
@ -253,15 +250,12 @@ def create_security_group_rule_body(
if not protocol:
protocol = 'TCP'
if not ethertype:
ethertype = 'IPv4'
if cidr and netaddr.IPNetwork(cidr).version == 6:
ethertype = 'IPv6'
if cidr and netaddr.IPNetwork(cidr).version == 6:
ethertype = 'IPv6'
security_group_rule_body = {
'security_group_rule': {
'sgRule': {
'ethertype': ethertype,
'security_group_id': security_group_id,
'description': description,
'direction': direction,
'protocol': protocol.lower(),
@ -270,12 +264,12 @@ def create_security_group_rule_body(
}
}
if cidr:
security_group_rule_body['security_group_rule'][
'remote_ip_prefix'] = cidr
security_group_rule_body['sgRule']['remote_ip_prefix'] = cidr
if namespace:
security_group_rule_body['namespace'] = namespace
if pods:
security_group_rule_body['remote_ip_prefixes'] = pods
security_group_rule_body['affectedPods'] = [
{'podIP': ip, 'podNamespace': ns} for ip, ns in pods.items()]
LOG.debug("Creating sg rule body %s", security_group_rule_body)
return security_group_rule_body
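The ethertype defaulting above can be shown in isolation: the rule stays IPv4 unless the remote CIDR is an IPv6 network. This sketch uses the stdlib `ipaddress` module as a stand-in for the `netaddr` library the driver actually uses; `pick_ethertype` is an illustrative helper, not part of the codebase.

```python
import ipaddress

def pick_ethertype(cidr, default='IPv4'):
    # An IPv6 remote CIDR forces the rule's ethertype to 'IPv6';
    # otherwise the caller-supplied default ('IPv4') is kept. The
    # driver performs the same check with netaddr.IPNetwork(cidr).
    if cidr and ipaddress.ip_network(cidr, strict=False).version == 6:
        return 'IPv6'
    return default
```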
@ -310,25 +304,60 @@ def get_annotated_labels(resource, annotation_labels):
return None
def get_kuryrnetpolicy_crds(namespace=None):
def get_kuryrnetworkpolicy_crds(namespace=None):
kubernetes = clients.get_kubernetes_client()
try:
if namespace:
knp_path = '{}/{}/kuryrnetpolicies'.format(
knp_path = '{}/{}/kuryrnetworkpolicies'.format(
constants.K8S_API_CRD_NAMESPACES, namespace)
else:
knp_path = constants.K8S_API_CRD_KURYRNETPOLICIES
LOG.debug("K8s API Query %s", knp_path)
knp_path = constants.K8S_API_CRD_KURYRNETWORKPOLICIES
knps = kubernetes.get(knp_path)
LOG.debug("Return Kuryr Network Policies with label %s", knps)
LOG.debug("Returning KuryrNetworkPolicies %s", knps)
except k_exc.K8sResourceNotFound:
LOG.exception("KuryrNetPolicy CRD not found")
LOG.exception("KuryrNetworkPolicy CRD not found")
raise
except k_exc.K8sClientException:
LOG.exception("Kubernetes Client Exception")
raise
return knps
return knps.get('items', [])
def get_networkpolicies(namespace=None):
# FIXME(dulek): This is awful, shouldn't we have list method on k8s_client?
kubernetes = clients.get_kubernetes_client()
try:
if namespace:
np_path = '{}/{}/networkpolicies'.format(
constants.K8S_API_CRD_NAMESPACES, namespace)
else:
np_path = constants.K8S_API_POLICIES
nps = kubernetes.get(np_path)
except k_exc.K8sResourceNotFound:
LOG.exception("NetworkPolicy or namespace %s not found", namespace)
raise
except k_exc.K8sClientException:
LOG.exception("Exception when listing NetworkPolicies.")
raise
return nps.get('items', [])
def zip_knp_np(knps, nps):
"""Returns tuples of matching KuryrNetworkPolicy and NetworkPolicy objs.
:param knps: List of KuryrNetworkPolicy objects
:param nps: List of NetworkPolicy objects
:return: List of tuples of matching (knp, np)
"""
pairs = []
for knp in knps:
for np in nps:
if utils.get_res_unique_name(knp) == utils.get_res_unique_name(np):
pairs.append((knp, np))
break
return pairs
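The pairing above matches each KuryrNetworkPolicy with the NetworkPolicy sharing its unique name. A minimal standalone sketch, with a stand-in for `utils.get_res_unique_name` (assumed here to produce `<namespace>/<name>`):

```python
def get_res_unique_name(obj):
    # Stand-in for kuryr_kubernetes.utils.get_res_unique_name,
    # assumed to be "<namespace>/<name>".
    meta = obj['metadata']
    return f"{meta['namespace']}/{meta['name']}"

def zip_knp_np(knps, nps):
    # Pair each KuryrNetworkPolicy with the NetworkPolicy of the same
    # namespace/name; objects without a counterpart are skipped.
    pairs = []
    for knp in knps:
        for np in nps:
            if get_res_unique_name(knp) == get_res_unique_name(np):
                pairs.append((knp, np))
                break
    return pairs

knps = [{'metadata': {'namespace': 'default', 'name': 'allow-db'}}]
nps = [{'metadata': {'namespace': 'default', 'name': 'allow-db'}},
       {'metadata': {'namespace': 'default', 'name': 'orphan'}}]
pairs = zip_knp_np(knps, nps)
```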
def match_expressions(expressions, labels):
@ -369,6 +398,8 @@ def match_labels(crd_labels, labels):
def match_selector(selector, labels):
if selector is None:
return True
crd_labels = selector.get('matchLabels', None)
crd_expressions = selector.get('matchExpressions', None)

View File

@ -1,37 +0,0 @@
# Copyright 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.handlers import k8s_base
class KuryrNetPolicyHandler(k8s_base.ResourceEventHandler):
"""Controller side of KuryrNetPolicy process for Kubernetes pods.
`KuryrNetPolicyHandler` runs on the Kuryr-Kubernetes controller and is
responsible for deleting associated security groups upon namespace
deletion.
"""
OBJECT_KIND = constants.K8S_OBJ_KURYRNETPOLICY
OBJECT_WATCH_PATH = constants.K8S_API_CRD_KURYRNETPOLICIES
def __init__(self):
super(KuryrNetPolicyHandler, self).__init__()
self._drv_policy = drivers.NetworkPolicyDriver.get_instance()
def on_deleted(self, netpolicy_crd):
crd_sg = netpolicy_crd['spec'].get('securityGroupId')
if crd_sg:
self._drv_policy.delete_np_sg(crd_sg)

View File

@ -0,0 +1,307 @@
# Copyright 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from openstack import exceptions as os_exc
from oslo_config import cfg
from oslo_log import log as logging
from kuryr_kubernetes import clients
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.controller.drivers import utils as driver_utils
from kuryr_kubernetes import exceptions
from kuryr_kubernetes.handlers import k8s_base
from kuryr_kubernetes import utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class KuryrNetworkPolicyHandler(k8s_base.ResourceEventHandler):
"""Controller side of KuryrNetworkPolicy process for Kubernetes pods.
`KuryrNetworkPolicyHandler` runs on the kuryr-controller and is
responsible for creating and deleting SG and SG rules for `NetworkPolicy`.
The `KuryrNetworkPolicy` objects are created by `NetworkPolicyHandler`.
"""
OBJECT_KIND = constants.K8S_OBJ_KURYRNETWORKPOLICY
OBJECT_WATCH_PATH = constants.K8S_API_CRD_KURYRNETWORKPOLICIES
def __init__(self):
super(KuryrNetworkPolicyHandler, self).__init__()
self.os_net = clients.get_network_client()
self.k8s = clients.get_kubernetes_client()
self._drv_project = drivers.NetworkPolicyProjectDriver.get_instance()
self._drv_policy = drivers.NetworkPolicyDriver.get_instance()
self._drv_vif_pool = drivers.VIFPoolDriver.get_instance(
specific_driver='multi_pool')
self._drv_vif_pool.set_vif_driver()
self._drv_pod_sg = drivers.PodSecurityGroupsDriver.get_instance()
self._drv_svc_sg = drivers.ServiceSecurityGroupsDriver.get_instance()
self._drv_lbaas = drivers.LBaaSDriver.get_instance()
self._convert_old_crds()
def _convert_old_crds(self):
try:
netpolicies = self.k8s.get(constants.K8S_API_CRD_KURYRNETPOLICIES)
except exceptions.K8sClientException:
LOG.exception("Error when fetching old KuryrNetPolicy CRDs for "
"conversion.")
return
for netpolicy in netpolicies.get('items', []):
new_networkpolicy = self._drv_policy.get_from_old_crd(netpolicy)
url = (f"{constants.K8S_API_CRD_NAMESPACES}/"
f"{netpolicy['metadata']['namespace']}/"
f"kuryrnetworkpolicies")
try:
self.k8s.post(url, new_networkpolicy)
except exceptions.K8sConflict:
LOG.warning('KuryrNetworkPolicy %s already existed when '
'converting KuryrNetPolicy %s. Ignoring.',
utils.get_res_unique_name(new_networkpolicy),
utils.get_res_unique_name(netpolicy))
self.k8s.delete(netpolicy['metadata']['selfLink'])
def _patch_kuryrnetworkpolicy_crd(self, knp, field, data,
action='replace'):
name = knp['metadata']['name']
LOG.debug('Patching KuryrNetworkPolicy CRD %s', name)
try:
status = self.k8s.patch_crd(field, knp['metadata']['selfLink'],
data, action=action)
except exceptions.K8sResourceNotFound:
LOG.debug('KuryrNetworkPolicy CRD not found %s', name)
return None
except exceptions.K8sClientException:
LOG.exception('Error updating KuryrNetworkPolicy CRD %s', name)
raise
knp['status'] = status
return knp
def _get_networkpolicy(self, link):
return self.k8s.get(link)
def _compare_sgs(self, a, b):
checked_props = ('direction', 'ethertype', 'port_range_max',
'port_range_min', 'protocol', 'remote_ip_prefix')
for k in checked_props:
if a.get(k) != b.get(k):
return False
return True
def _find_sgs(self, a, rules):
for r in rules:
if self._compare_sgs(r, a):
return True
return False
def on_present(self, knp):
uniq_name = utils.get_res_unique_name(knp)
LOG.debug('on_present() for NP %s', uniq_name)
project_id = self._drv_project.get_project(knp)
if not knp['status'].get('securityGroupId'):
LOG.debug('Creating SG for NP %s', uniq_name)
# TODO(dulek): Do this right, why do we have a project driver per
# resource?! This one expects policy, not knp, but it
# ignores it anyway!
sg_id = self._drv_policy.create_security_group(knp, project_id)
knp = self._patch_kuryrnetworkpolicy_crd(
knp, 'status', {'securityGroupId': sg_id})
LOG.debug('Created SG %s for NP %s', sg_id, uniq_name)
else:
# TODO(dulek): Check if it really exists, recreate if not.
sg_id = knp['status'].get('securityGroupId')
# First update SG rules as we want to apply updated ones
current = knp['status']['securityGroupRules']
required = knp['spec']['ingressSgRules'] + knp['spec']['egressSgRules']
required = [r['sgRule'] for r in required]
# FIXME(dulek): This *might* be prone to race conditions if failure
# happens between SG rule is created/deleted and status
# is annotated. We don't however need to revert on failed
# K8s operations - creation, deletion of SG rules and
# attaching or detaching SG from ports are idempotent
# so we can repeat them. What worries me is losing track
# of an update due to restart. The only way to do it
# would be to periodically check if what's in `status`
# is the reality in OpenStack API. That should be just
# two Neutron API calls + possible resync.
to_add = []
to_remove = []
for r in required:
if not self._find_sgs(r, current):
to_add.append(r)
for i, c in enumerate(current):
if not self._find_sgs(c, required):
to_remove.append((i, c['id']))
LOG.debug('SGs to add for NP %s: %s', uniq_name, to_add)
for sg_rule in to_add:
LOG.debug('Adding SG rule %s for NP %s', sg_rule, uniq_name)
sg_rule['security_group_id'] = sg_id
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['id'] = sgr_id
knp = self._patch_kuryrnetworkpolicy_crd(
knp, 'status', {'securityGroupRules/-': sg_rule}, 'add')
# We need to remove starting from the last one in order to maintain
# indexes. Please note this will start to fail miserably if we start
# to change status from multiple places.
to_remove.reverse()
LOG.debug('SGs to remove for NP %s: %s', uniq_name,
[x[1] for x in to_remove])
for i, sg_rule_id in to_remove:
LOG.debug('Removing SG rule %s as it is no longer part of NP %s',
sg_rule_id, uniq_name)
driver_utils.delete_security_group_rule(sg_rule_id)
knp = self._patch_kuryrnetworkpolicy_crd(
knp, 'status/securityGroupRules', i, 'remove')
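The reconciliation above can be sketched on its own: rules are compared on a fixed set of properties, the add/remove sets are computed, and removals are done from the last index backwards so that the indexes recorded earlier stay valid. This is a simplified sketch with plain dicts, not the driver code (no Neutron or K8s calls):

```python
CHECKED_PROPS = ('direction', 'ethertype', 'port_range_max',
                 'port_range_min', 'protocol', 'remote_ip_prefix')

def same_rule(a, b):
    # Mirrors _compare_sgs: only the listed properties matter,
    # generated fields like 'id' are ignored.
    return all(a.get(k) == b.get(k) for k in CHECKED_PROPS)

def diff_rules(required, current):
    # Rules required by the spec but absent in status must be created.
    to_add = [r for r in required
              if not any(same_rule(r, c) for c in current)]
    # Rules in status no longer required must be deleted; keep the
    # index so the status list can be patched with 'remove' ops.
    to_remove = [(i, c) for i, c in enumerate(current)
                 if not any(same_rule(c, r) for r in required)]
    # Remove starting from the end so earlier indexes stay valid.
    to_remove.reverse()
    return to_add, to_remove

current = [{'direction': 'ingress', 'protocol': 'tcp'},
           {'direction': 'egress', 'protocol': 'udp'}]
required = [{'direction': 'ingress', 'protocol': 'tcp'},
            {'direction': 'ingress', 'protocol': 'udp'}]
to_add, to_remove = diff_rules(required, current)
```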
pods_to_update = []
previous_sel = knp['status'].get('podSelector', None)
current_sel = knp['spec']['podSelector']
if previous_sel is None:
# Fresh NetworkPolicy that was never applied.
pods_to_update.extend(self._drv_policy.namespaced_pods(knp))
elif previous_sel != current_sel or previous_sel == {}:
pods_to_update.extend(
self._drv_policy.affected_pods(knp, previous_sel))
matched_pods = self._drv_policy.affected_pods(knp)
pods_to_update.extend(matched_pods)
for pod in pods_to_update:
if driver_utils.is_host_network(pod):
continue
pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
# FIXME(dulek): We should not need this one day.
policy = self._get_networkpolicy(knp['metadata']['annotations']
['networkPolicyLink'])
if (pods_to_update and CONF.octavia_defaults.enforce_sg_rules and
not self._is_egress_only_policy(policy)):
# NOTE(ltomasbo): only need to change services if the pods that
# they point to are updated
services = driver_utils.get_services(knp['metadata']['namespace'])
for service in services.get('items', []):
# TODO(ltomasbo): Skip other services that are not affected
# by the policy
# FIXME(dulek): Make sure to include svcs without selector when
# we start supporting them.
if (not service['spec'].get('selector') or not
self._is_service_affected(service, pods_to_update)):
continue
sgs = self._drv_svc_sg.get_security_groups(service, project_id)
self._drv_lbaas.update_lbaas_sg(service, sgs)
self._patch_kuryrnetworkpolicy_crd(knp, 'status',
{'podSelector': current_sel})
def _is_service_affected(self, service, affected_pods):
svc_namespace = service['metadata']['namespace']
svc_selector = service['spec'].get('selector')
svc_pods = driver_utils.get_pods({'selector': svc_selector},
svc_namespace).get('items')
return any(pod in svc_pods for pod in affected_pods)
def _is_egress_only_policy(self, policy):
policy_types = policy['spec'].get('policyTypes', [])
return (policy_types == ['Egress'] or
(policy['spec'].get('egress') and
not policy['spec'].get('ingress')))
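The predicate above treats a policy as egress-only either when `policyTypes` says so explicitly or when only egress rules are present in the spec. A sketch of the same check over plain dicts (illustrative, not the handler method):

```python
def is_egress_only(policy):
    # Explicit policyTypes == ['Egress'] wins; otherwise infer from
    # which rule sections the spec actually contains.
    spec = policy['spec']
    types = spec.get('policyTypes', [])
    return (types == ['Egress'] or
            (bool(spec.get('egress')) and not spec.get('ingress')))
```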
def _get_policy_net_id(self, knp):
policy_ns = knp['metadata']['namespace']
kubernetes = clients.get_kubernetes_client()
try:
path = (f'{constants.K8S_API_CRD_NAMESPACES}/{policy_ns}/'
f'kuryrnetworks/{policy_ns}')
net_crd = kubernetes.get(path)
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception.")
raise
return net_crd['status']['netId']
def on_finalize(self, knp):
LOG.debug("Finalizing KuryrNetworkPolicy %s",
utils.get_res_unique_name(knp))
project_id = self._drv_project.get_project(knp)
pods_to_update = self._drv_policy.affected_pods(knp)
crd_sg = knp['status'].get('securityGroupId')
try:
policy = self._get_networkpolicy(knp['metadata']['annotations']
['networkPolicyLink'])
except exceptions.K8sResourceNotFound:
# NP is already gone, let's just try to clean up.
policy = None
if crd_sg:
for pod in pods_to_update:
if driver_utils.is_host_network(pod):
continue
pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
if crd_sg in pod_sgs:
pod_sgs.remove(crd_sg)
if not pod_sgs:
pod_sgs = CONF.neutron_defaults.pod_security_groups
if not pod_sgs:
raise cfg.RequiredOptError(
'pod_security_groups',
cfg.OptGroup('neutron_defaults'))
try:
self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
except os_exc.NotFoundException:
LOG.debug("Failed to update pod SGs,"
" retrying policy deletion.")
raise exceptions.ResourceNotReady(knp)
# ensure ports at the pool don't have the NP sg associated
try:
net_id = self._get_policy_net_id(knp)
self._drv_vif_pool.remove_sg_from_pools(crd_sg, net_id)
except exceptions.K8sResourceNotFound:
# Probably the network got removed already, we can ignore it.
pass
if (CONF.octavia_defaults.enforce_sg_rules and policy and
not self._is_egress_only_policy(policy)):
services = driver_utils.get_services(
knp['metadata']['namespace'])
for svc in services.get('items'):
if (not svc['spec'].get('selector') or not
self._is_service_affected(svc, pods_to_update)):
continue
sgs = self._drv_svc_sg.get_security_groups(svc, project_id)
self._drv_lbaas.update_lbaas_sg(svc, sgs)
self._drv_policy.delete_np_sg(crd_sg)
LOG.debug("Removing finalizers from KuryrNetworkPolicy and "
"NetworkPolicy.")
if policy:
self.k8s.remove_finalizer(policy,
constants.NETWORKPOLICY_FINALIZER)
self.k8s.remove_finalizer(knp, constants.NETWORKPOLICY_FINALIZER)

View File

@ -58,21 +58,24 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
# annotation to be moved to KuryrPort CRD.
return
current_pod_labels = pod['metadata'].get('labels')
previous_pod_labels = self._get_pod_labels(pod)
LOG.debug("Got previous pod labels from annotation: %r",
previous_pod_labels)
current_pod_info = (pod['metadata'].get('labels'),
pod['status'].get('podIP'))
previous_pod_info = self._get_pod_info(pod)
LOG.debug("Got previous pod info from annotation: %r",
previous_pod_info)
if current_pod_labels == previous_pod_labels:
if current_pod_info == previous_pod_info:
return
# FIXME(dulek): We should be able to just do create if only podIP
# changed, right?
crd_pod_selectors = self._drv_sg.update_sg_rules(pod)
project_id = self._drv_project.get_project(pod)
security_groups = self._drv_sg.get_security_groups(pod, project_id)
self._drv_vif_pool.update_vif_sgs(pod, security_groups)
try:
self._set_pod_labels(pod, current_pod_labels)
self._set_pod_info(pod, current_pod_info)
except k_exc.K8sResourceNotFound:
LOG.debug("Pod already deleted, no need to retry.")
return
@ -81,26 +84,30 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
services = driver_utils.get_services()
self._update_services(services, crd_pod_selectors, project_id)
def _get_pod_labels(self, pod):
def _get_pod_info(self, pod):
try:
annotations = pod['metadata']['annotations']
pod_labels_annotation = annotations[constants.K8S_ANNOTATION_LABEL]
pod_ip_annotation = annotations[constants.K8S_ANNOTATION_IP]
except KeyError:
return None
return None, None
pod_labels = jsonutils.loads(pod_labels_annotation)
return pod_labels
return pod_labels, pod_ip_annotation
def _set_pod_labels(self, pod, labels):
if not labels:
LOG.debug("Removing Label annotation: %r", labels)
annotation = None
def _set_pod_info(self, pod, info):
if not info[0]:
LOG.debug("Removing info annotations: %r", info)
annotation = None, info[1]
else:
annotation = jsonutils.dumps(labels, sort_keys=True)
LOG.debug("Setting Labels annotation: %r", annotation)
annotation = jsonutils.dumps(info[0], sort_keys=True), info[1]
LOG.debug("Setting info annotations: %r", annotation)
k8s = clients.get_kubernetes_client()
k8s.annotate(pod['metadata']['selfLink'],
{constants.K8S_ANNOTATION_LABEL: annotation},
{
constants.K8S_ANNOTATION_LABEL: annotation[0],
constants.K8S_ANNOTATION_IP: annotation[1]
},
resource_version=pod['metadata']['resourceVersion'])
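The annotation handling above serializes the labels with `sort_keys=True` so that comparing the current pod state against the annotated one is a stable string comparison. A sketch of just the encoding step; the annotation key names here are illustrative stand-ins for the `K8S_ANNOTATION_LABEL`/`K8S_ANNOTATION_IP` constants:

```python
import json

def build_info_annotations(labels, pod_ip):
    # Deterministic JSON (sort_keys) makes current vs. previously
    # annotated labels comparable as plain strings; an empty label
    # set clears the label annotation instead.
    label_ann = json.dumps(labels, sort_keys=True) if labels else None
    return {'kuryr/pod-label': label_ann,
            'kuryr/pod-ip': pod_ip}
```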
def _has_vifs(self, pod):
@ -117,6 +124,5 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
if not driver_utils.service_matches_affected_pods(
service, crd_pod_selectors):
continue
sgs = self._drv_svc_sg.get_security_groups(service,
project_id)
sgs = self._drv_svc_sg.get_security_groups(service, project_id)
self._drv_lbaas.update_lbaas_sg(service, sgs)

View File

@ -12,15 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from openstack import exceptions as os_exc
from oslo_config import cfg as oslo_cfg
from oslo_log import log as logging
from kuryr_kubernetes import clients
from kuryr_kubernetes import constants as k_const
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.controller.drivers import utils as driver_utils
from kuryr_kubernetes import exceptions
from kuryr_kubernetes.handlers import k8s_base
from kuryr_kubernetes import utils
@ -36,99 +32,25 @@ class NetworkPolicyHandler(k8s_base.ResourceEventHandler):
def __init__(self):
super(NetworkPolicyHandler, self).__init__()
self._drv_policy = drivers.NetworkPolicyDriver.get_instance()
self._drv_project = drivers.NetworkPolicyProjectDriver.get_instance()
self._drv_vif_pool = drivers.VIFPoolDriver.get_instance(
specific_driver='multi_pool')
self._drv_vif_pool.set_vif_driver()
self._drv_pod_sg = drivers.PodSecurityGroupsDriver.get_instance()
self._drv_svc_sg = drivers.ServiceSecurityGroupsDriver.get_instance()
self._drv_lbaas = drivers.LBaaSDriver.get_instance()
self.k8s = clients.get_kubernetes_client()
def on_present(self, policy):
LOG.debug("Created or updated: %s", policy)
project_id = self._drv_project.get_project(policy)
pods_to_update = []
modified_pods = self._drv_policy.ensure_network_policy(policy,
project_id)
if modified_pods:
pods_to_update.extend(modified_pods)
self._drv_policy.ensure_network_policy(policy)
matched_pods = self._drv_policy.affected_pods(policy)
pods_to_update.extend(matched_pods)
# Put finalizer in if it's not there already.
self.k8s.add_finalizer(policy, k_const.NETWORKPOLICY_FINALIZER)
for pod in pods_to_update:
if driver_utils.is_host_network(pod):
continue
pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
if (pods_to_update and
oslo_cfg.CONF.octavia_defaults.enforce_sg_rules and
not self._is_egress_only_policy(policy)):
# NOTE(ltomasbo): only need to change services if the pods that
# they point to are updated
services = driver_utils.get_services(
policy['metadata']['namespace'])
for service in services.get('items'):
# TODO(ltomasbo): Skip other services that are not affected
# by the policy
if (not service['spec'].get('selector') or not
self._is_service_affected(service, pods_to_update)):
continue
sgs = self._drv_svc_sg.get_security_groups(service,
project_id)
self._drv_lbaas.update_lbaas_sg(service, sgs)
def on_deleted(self, policy):
LOG.debug("Deleted network policy: %s", policy)
project_id = self._drv_project.get_project(policy)
pods_to_update = self._drv_policy.affected_pods(policy)
netpolicy_crd = self._drv_policy.get_kuryrnetpolicy_crd(policy)
if netpolicy_crd:
crd_sg = netpolicy_crd['spec'].get('securityGroupId')
for pod in pods_to_update:
if driver_utils.is_host_network(pod):
continue
pod_sgs = self._drv_pod_sg.get_security_groups(pod,
project_id)
if crd_sg in pod_sgs:
pod_sgs.remove(crd_sg)
if not pod_sgs:
pod_sgs = (
oslo_cfg.CONF.neutron_defaults.pod_security_groups)
if not pod_sgs:
raise oslo_cfg.RequiredOptError(
'pod_security_groups',
oslo_cfg.OptGroup('neutron_defaults'))
try:
self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
except os_exc.NotFoundException:
LOG.debug("Fail to update pod sgs."
" Retrying policy deletion.")
raise exceptions.ResourceNotReady(policy)
# ensure ports at the pool don't have the NP sg associated
net_id = self._get_policy_net_id(policy)
self._drv_vif_pool.remove_sg_from_pools(crd_sg, net_id)
self._drv_policy.release_network_policy(netpolicy_crd)
if (oslo_cfg.CONF.octavia_defaults.enforce_sg_rules and
not self._is_egress_only_policy(policy)):
services = driver_utils.get_services(
policy['metadata']['namespace'])
for svc in services.get('items'):
if (not svc['spec'].get('selector') or not
self._is_service_affected(svc, pods_to_update)):
continue
sgs = self._drv_svc_sg.get_security_groups(svc,
project_id)
self._drv_lbaas.update_lbaas_sg(svc, sgs)
def on_finalize(self, policy):
LOG.debug("Finalizing policy %s", policy)
if not self._drv_policy.release_network_policy(policy):
# KNP was not found, so we need to finalize on our own.
self.k8s.remove_finalizer(policy, k_const.NETWORKPOLICY_FINALIZER)
def is_ready(self, quota):
if not (utils.has_kuryr_crd(k_const.K8S_API_CRD_KURYRNETPOLICIES) and
self._check_quota(quota)):
if not (utils.has_kuryr_crd(k_const.K8S_API_CRD_KURYRNETWORKPOLICIES)
and self._check_quota(quota)):
LOG.error("Marking NetworkPolicyHandler as not ready.")
return False
return True
@ -137,29 +59,3 @@ class NetworkPolicyHandler(k8s_base.ResourceEventHandler):
if utils.has_limit(quota.security_groups):
return utils.is_available('security_groups', quota.security_groups)
return True
def _is_service_affected(self, service, affected_pods):
svc_namespace = service['metadata']['namespace']
svc_selector = service['spec'].get('selector')
svc_pods = driver_utils.get_pods({'selector': svc_selector},
svc_namespace).get('items')
return any(pod in svc_pods for pod in affected_pods)
def _get_policy_net_id(self, policy):
policy_ns = policy['metadata']['namespace']
kubernetes = clients.get_kubernetes_client()
try:
path = (f'{k_const.K8S_API_CRD_NAMESPACES}/{policy_ns}/'
f'kuryrnetworks/{policy_ns}')
net_crd = kubernetes.get(path)
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception.")
raise
return net_crd['status']['netId']
def _is_egress_only_policy(self, policy):
policy_types = policy['spec'].get('policyTypes', [])
return (policy_types == ['Egress'] or
(policy['spec'].get('egress') and
not policy['spec'].get('ingress')))

View File

@ -12,8 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import munch
from openstack import exceptions as os_exc
from unittest import mock
from kuryr_kubernetes.controller.drivers import network_policy
@ -75,8 +73,8 @@ class TestNetworkPolicyDriver(test_base.TestCase):
self._policy_uid = mock.sentinel.policy_uid
self._policy_link = mock.sentinel.policy_link
self._sg_id = mock.sentinel.sg_id
self._i_rules = [{'security_group_rule': {'id': ''}}]
self._e_rules = [{'security_group_rule': {'id': ''}}]
self._i_rules = [{'sgRule': {'id': ''}}]
self._e_rules = [{'sgRule': {'id': ''}}]
self._policy = {
'apiVersion': 'networking.k8s.io/v1',
@ -104,12 +102,46 @@ class TestNetworkPolicyDriver(test_base.TestCase):
[{'namespaceSelector': {
'matchLabels': {
'project': 'myproject'}}}]}],
'policyTypes': ['Ingress', 'Egress']
'policyTypes': ['Ingress', 'Egress'],
'podSelector': {},
}
}
self._crd = {
'metadata': {'name': mock.sentinel.name,
self.crd = {
'metadata': {'name': 'foobar',
'namespace': 'default',
'selfLink': mock.sentinel.selfLink},
'spec': {
'egressSgRules': [
{'sgRule':
{'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'direction': 'egress',
'ethertype': 'IPv4',
'port_range_max': 5978,
'port_range_min': 5978,
'protocol': 'tcp',
}}],
'ingressSgRules': [
{'sgRule':
{'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'direction': 'ingress',
'ethertype': 'IPv4',
'port_range_max': 6379,
'port_range_min': 6379,
'protocol': 'tcp',
}}],
'podSelector': {},
'policyTypes': self._policy['spec']['policyTypes']
},
'status': {
'securityGroupId': self._sg_id,
'securityGroupRules': [],
'podSelector': {},
}
}
self.old_crd = {
'metadata': {'name': 'np-foobar',
'namespace': 'default',
'selfLink': mock.sentinel.selfLink},
'spec': {
@ -135,6 +167,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
'security_group_id': self._sg_id,
'id': mock.sentinel.id
}}],
'podSelector': {},
'networkpolicy_spec': self._policy['spec'],
'securityGroupId': self._sg_id,
'securityGroupName': mock.sentinel.sg_name}}
@ -144,207 +177,57 @@ class TestNetworkPolicyDriver(test_base.TestCase):
self._driver = network_policy.NetworkPolicyDriver()
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=False)
'_get_default_np_rules')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
'_get_knp_crd', return_value=False)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy(self, m_update, m_create, m_get_crd):
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_called_once_with(self._policy, self._project_id)
m_update.assert_not_called()
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=True)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
previous_selector = mock.sentinel.previous_selector
m_update.return_value = previous_selector
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_not_called()
m_update.assert_called_once_with(self._policy)
m_affected.assert_called_once_with(self._policy, previous_selector)
m_namespaced.assert_not_called()
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=True)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd_no_selector(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
m_update.return_value = None
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_not_called()
m_update.assert_called_once_with(self._policy)
m_affected.assert_not_called()
m_namespaced.assert_called_once_with(self._policy)
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd_empty_selector(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
previous_selector = {}
pod_selector = {'matchLabels': {'run': 'demo'}}
updated_policy = self._policy.copy()
updated_policy['spec']['podSelector'] = pod_selector
crd_with_empty_selector = self._crd.copy()
crd_with_empty_selector['spec']['podSelector'] = previous_selector
m_get_crd.return_value = crd_with_empty_selector
m_update.return_value = previous_selector
self._driver.ensure_network_policy(updated_policy, self._project_id)
m_get_crd.assert_called_once_with(updated_policy)
m_create.assert_not_called()
m_update.assert_called_once_with(updated_policy)
m_affected.assert_called_with(self._policy, previous_selector)
m_namespaced.assert_not_called()
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_default_np_rules')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_kuryrnetpolicy_crd')
'_create_knp_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'parse_network_policy_rules')
@mock.patch.object(utils, 'get_subnet_cidr')
def test_create_security_group_rules_from_network_policy(self, m_utils,
m_parse,
m_add_crd,
m_get_crd,
m_add_default):
self._driver.os_net.create_security_group.return_value = (
munch.Munch({'id': mock.sentinel.id,
'security_group_rules': []}))
def test_ensure_network_policy(self, m_utils, m_parse, m_add_crd,
m_get_crd, m_get_default):
m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
m_parse.return_value = (self._i_rules, self._e_rules)
self._driver.os_net.create_security_group_rule.return_value = (
munch.Munch({'id': mock.sentinel.id}))
self._driver.create_security_group_rules_from_network_policy(
self._policy, self._project_id)
self._driver.ensure_network_policy(
self._policy)
m_get_crd.assert_called_once()
m_add_crd.assert_called_once()
m_add_default.assert_called_once()
m_get_default.assert_called_once()
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_default_np_rules')
'_get_default_np_rules')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_kuryrnetpolicy_crd')
'_get_knp_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'parse_network_policy_rules')
@mock.patch.object(utils, 'get_subnet_cidr')
def test_create_security_group_rules_with_k8s_exc(self, m_utils, m_parse,
m_add_crd, m_get_crd,
m_add_default):
self._driver.os_net.create_security_group.return_value = (
munch.Munch({'id': mock.sentinel.id,
'security_group_rules': []}))
def test_ensure_network_policy_with_k8s_exc(self, m_utils, m_parse,
m_get_crd, m_get_default):
m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
m_parse.return_value = (self._i_rules, self._e_rules)
m_get_crd.side_effect = exceptions.K8sClientException
self._driver.os_net.create_security_group_rule.return_value = (
munch.Munch({'id': mock.sentinel.id}))
self.assertRaises(
exceptions.K8sClientException,
self._driver.create_security_group_rules_from_network_policy,
self._policy, self._project_id)
m_add_crd.assert_called_once()
m_add_default.assert_called_once()
self.assertRaises(exceptions.K8sClientException,
self._driver.ensure_network_policy, self._policy)
m_get_default.assert_called_once()
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_default_np_rules')
'_get_default_np_rules')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_add_kuryrnetpolicy_crd')
'_get_knp_crd', return_value=None)
@mock.patch.object(network_policy.NetworkPolicyDriver, '_create_knp_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'parse_network_policy_rules')
@mock.patch.object(utils, 'get_subnet_cidr')
def test_create_security_group_rules_error_add_crd(self, m_utils, m_parse,
m_add_crd, m_get_crd,
m_add_default):
self._driver.os_net.create_security_group.return_value = (
munch.Munch({'id': mock.sentinel.id,
'security_group_rules': []}))
def test_ensure_network_policy_error_add_crd(
self, m_utils, m_parse, m_add_crd, m_get_crd, m_get_default):
m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
m_parse.return_value = (self._i_rules, self._e_rules)
m_add_crd.side_effect = exceptions.K8sClientException
self._driver.os_net.create_security_group_rule.return_value = (
munch.Munch({'id': mock.sentinel.id}))
self.assertRaises(
exceptions.K8sClientException,
self._driver.create_security_group_rules_from_network_policy,
self._policy, self._project_id)
m_get_crd.assert_not_called()
m_add_default.assert_called_once()
def test_create_security_group_rules_with_n_exc(self):
self._driver.os_net.create_security_group.side_effect = (
os_exc.SDKException())
self.assertRaises(
os_exc.SDKException,
self._driver.create_security_group_rules_from_network_policy,
self._policy, self._project_id)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'parse_network_policy_rules')
def test_update_security_group_rules(self, m_parse, m_get_crd,
m_create_sgr):
policy = self._policy.copy()
policy['spec']['podSelector'] = {'matchLabels': {'test': 'test'}}
m_get_crd.return_value = self._crd
m_parse.return_value = (self._i_rules, self._e_rules)
self._driver.update_security_group_rules_from_network_policy(
policy)
m_parse.assert_called_with(policy, self._sg_id)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'parse_network_policy_rules')
def test_update_security_group_rules_with_k8s_exc(self, m_parse, m_get_crd,
m_create_sgr):
self._driver.kubernetes.patch_crd.side_effect = (
exceptions.K8sClientException())
m_get_crd.return_value = self._crd
m_parse.return_value = (self._i_rules, self._e_rules)
self.assertRaises(
exceptions.K8sClientException,
self._driver.update_security_group_rules_from_network_policy,
self._policy)
m_parse.assert_called_with(self._policy, self._sg_id)
self.assertRaises(exceptions.K8sClientException,
self._driver.ensure_network_policy, self._policy)
m_get_crd.assert_called()
m_get_default.assert_called_once()
def test_get_namespaces(self):
namespace_selector = {'namespaceSelector': {
@@ -363,6 +246,13 @@ class TestNetworkPolicyDriver(test_base.TestCase):
self.assertEqual([], resp)
self.kubernetes.get.assert_called_once()
def test_get_from_old_crd(self):
knp = self._driver.get_from_old_crd(self.old_crd)
self.assertEqual(self.crd['spec'], knp['spec'])
self.assertEqual(self.crd['status'], knp['status'])
for k in ['name', 'namespace']:
self.assertEqual(self.crd['metadata'][k], knp['metadata'][k])
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_get_resource_details')
@@ -377,7 +267,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
namespace = 'myproject'
m_get_namespaces.return_value = [get_namespace_obj()]
m_get_resource_details.return_value = subnet_cidr, namespace
self._driver.parse_network_policy_rules(self._policy, self._sg_id)
self._driver.parse_network_policy_rules(self._policy)
m_get_namespaces.assert_called()
m_get_resource_details.assert_called()
m_create.assert_called()
@@ -391,12 +281,12 @@ class TestNetworkPolicyDriver(test_base.TestCase):
policy = self._policy.copy()
policy['spec']['ingress'] = [{}]
policy['spec']['egress'] = [{}]
self._driver.parse_network_policy_rules(policy, self._sg_id)
self._driver.parse_network_policy_rules(policy)
m_get_ns.assert_not_called()
calls = [mock.call(self._sg_id, 'ingress', ethertype='IPv4'),
mock.call(self._sg_id, 'ingress', ethertype='IPv6'),
mock.call(self._sg_id, 'egress', ethertype='IPv4'),
mock.call(self._sg_id, 'egress', ethertype='IPv6')]
calls = [mock.call('ingress', ethertype='IPv4'),
mock.call('ingress', ethertype='IPv6'),
mock.call('egress', ethertype='IPv4'),
mock.call('egress', ethertype='IPv6')]
m_create.assert_has_calls(calls)
@mock.patch.object(network_policy.NetworkPolicyDriver,
@@ -408,7 +298,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
[{'port': 6379, 'protocol': 'TCP'}]}]
policy['spec']['egress'] = [{'ports':
[{'port': 6379, 'protocol': 'TCP'}]}]
self._driver.parse_network_policy_rules(policy, self._sg_id)
self._driver.parse_network_policy_rules(policy)
m_create_all_pods_sg_rules.assert_called()
@mock.patch.object(network_policy.NetworkPolicyDriver,
@@ -429,7 +319,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
'TCP'}],
'to': [{'ipBlock':
{'cidr': '10.0.0.0/24'}}]}]
self._driver.parse_network_policy_rules(policy, self._sg_id)
self._driver.parse_network_policy_rules(policy)
m_create_sg_rule.assert_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@@ -450,38 +340,19 @@ class TestNetworkPolicyDriver(test_base.TestCase):
selectors = {'namespaceSelector': {
'matchLabels': {
'project': 'myproject'}}}
policy['spec']['egress'] = [
{'to':
[selectors]}]
policy['spec']['ingress'] = [
{'from':
[selectors]}]
selectors = {'namespace_selector': selectors['namespaceSelector']}
self._driver.parse_network_policy_rules(policy, self._sg_id)
policy['spec']['egress'] = [{'to': [selectors]}]
policy['spec']['ingress'] = [{'from': [selectors]}]
self._driver.parse_network_policy_rules(policy)
m_get_namespaces.assert_called()
m_get_resource_details.assert_called()
calls = [mock.call(self._sg_id, 'ingress', port_range_min=1,
calls = [mock.call('ingress', port_range_min=1,
port_range_max=65535, cidr=subnet_cidr,
namespace=namespace),
mock.call(self._sg_id, 'egress', port_range_min=1,
mock.call('egress', port_range_min=1,
port_range_max=65535, cidr=subnet_cidr,
namespace=namespace)]
m_create.assert_has_calls(calls)
def test_knps_on_namespace(self):
self.kubernetes.get.return_value = {'items': ['not-empty']}
namespace = 'test1'
resp = self._driver.knps_on_namespace(namespace)
self.assertTrue(resp)
def test_knps_on_namespace_empty(self):
self.kubernetes.get.return_value = {'items': []}
namespace = 'test1'
resp = self._driver.knps_on_namespace(namespace)
self.assertFalse(resp)
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
def test_affected_pods(self, m_namespaced):
self._driver.affected_pods(self._policy)
@@ -509,19 +380,10 @@ class TestNetworkPolicyDriver(test_base.TestCase):
self.assertEqual([], resp)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_del_kuryrnetpolicy_crd', return_value=False)
'_del_knp_crd', return_value=False)
def test_release_network_policy(self, m_del_crd):
self._driver.release_network_policy(self._crd)
self.neutron.delete_security_group.assert_called_once_with(
self._crd['spec']['securityGroupId'])
m_del_crd.assert_called_once_with(self._crd['metadata']['name'],
self._crd['metadata']['namespace'])
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_del_kuryrnetpolicy_crd', return_value=False)
def test_release_network_policy_removed_crd(self, m_del_crd):
self._driver.release_network_policy(None)
m_del_crd.assert_not_called()
self._driver.release_network_policy(self.crd)
m_del_crd.assert_called_once_with(self.crd)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'_create_sg_rules_with_container_ports')
@@ -543,8 +405,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.return_value = {'items': [pod]}
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -577,8 +438,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.return_value = {'items': [pod]}
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -600,7 +460,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_create_sgr):
def _create_sgr_cont(container_ports, allow_all, resource,
matched_pods, crd_rules, sg_id, direction, port,
matched_pods, crd_rules, direction, port,
pod_selector=None, policy_namespace=None):
matched_pods[container_ports[0][1]] = 'foo'
@@ -617,8 +477,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.return_value = {'items': [pod]}
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -629,7 +488,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.assert_called_with(pod_selector, namespace)
m_get_ports.assert_called_with(pod, port)
calls = [mock.call(self._sg_id, direction, container_ports[0][1],
calls = [mock.call(direction, container_ports[0][1],
protocol=port['protocol'], ethertype=e,
pods='foo') for e in ('IPv4', 'IPv6')]
@@ -656,8 +515,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.return_value = {'items': [pod]}
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -685,8 +543,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -695,8 +552,8 @@ class TestNetworkPolicyDriver(test_base.TestCase):
allow_all=True)
m_get_ports.assert_called_with(resources[0], port)
m_create_sgr.assert_called_once_with(self._sg_id, 'egress', None,
cidr=mock.ANY, protocol='TCP')
m_create_sgr.assert_called_once_with('egress', None, cidr=mock.ANY,
protocol='TCP')
self.assertEqual(len(crd_rules), 1)
@mock.patch('kuryr_kubernetes.utils.get_subnet_cidr')
@@ -731,8 +588,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
m_get_pods.return_value = {'items': [pod]}
m_get_ports.return_value = container_ports
self._driver._create_sg_rule_body_on_text_port(self._sg_id,
direction,
self._driver._create_sg_rule_body_on_text_port(direction,
port,
resources,
crd_rules,
@@ -741,10 +597,10 @@ class TestNetworkPolicyDriver(test_base.TestCase):
allow_all=True)
m_get_ports.assert_called_with(resources[0], port)
calls = [mock.call(self._sg_id, direction, container_ports[0][1],
calls = [mock.call(direction, container_ports[0][1],
protocol=port['protocol'], ethertype=e,
pods='foo') for e in ('IPv4', 'IPv6')]
calls.append(mock.call(self._sg_id, direction, container_ports[0][1],
calls.append(mock.call(direction, container_ports[0][1],
protocol=port['protocol'],
cidr='10.0.0.128/26'))
m_create_sgr.assert_has_calls(calls)
@@ -758,19 +614,18 @@ class TestNetworkPolicyDriver(test_base.TestCase):
direction = 'ingress'
rules = []
self._driver._create_all_pods_sg_rules(port, self._sg_id, direction,
rules, '', None)
self._driver._create_all_pods_sg_rules(port, direction, rules, '',
None)
self.assertEqual(len(rules), 2)
def test__create_default_sg_rule(self):
for direction in ('ingress', 'egress'):
rules = []
self._driver._create_default_sg_rule(self._sg_id, direction, rules)
self._driver._create_default_sg_rule(direction, rules)
self.assertEqual(len(rules), 2)
self.assertListEqual(rules, [{'security_group_rule': {
self.assertListEqual(rules, [{'sgRule': {
'ethertype': e,
'security_group_id': self._sg_id,
'direction': direction,
'description': 'Kuryr-Kubernetes NetPolicy SG rule'
}} for e in ('IPv4', 'IPv6')])
View File
@@ -66,7 +66,7 @@ def get_sg_rule():
pod_ip = get_match_crd_pod_obj()['status'].get('podIP')
return {
"namespace": 'dev',
"security_group_rule": {
"sgRule": {
"description": "Kuryr-Kubernetes NetPolicy SG rule",
"direction": "ingress",
"ethertype": "IPv4",
@@ -80,7 +80,7 @@ def get_sg_rule():
def get_matched_crd_obj():
return {
"kind": "KuryrNetPolicy",
"kind": "KuryrNetworkPolicy",
"metadata": {"name": "np-test-network-policy",
"namespace": "default"},
"spec": {
@@ -159,7 +159,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
'selfLink': mock.sentinel.selfLink},
'spec': {
'egressSgRules': [
{'security_group_rule':
{'sgRule':
{'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'direction': 'egress',
'ethertype': 'IPv4',
@@ -170,7 +170,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
'id': mock.sentinel.id
}}],
'ingressSgRules': [
{'security_group_rule':
{'sgRule':
{'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'direction': 'ingress',
'ethertype': 'IPv4',
@@ -189,16 +189,18 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
'production']}],
'matchLabels': {
'run': 'demo'
}},
}}},
'status': {
'securityGroupId': self._sg_id,
'securityGroupName': mock.sentinel.sg_name}}
},
}
self._crd2 = {
'metadata': {'name': mock.sentinel.name3,
'selfLink': mock.sentinel.selfLink},
'spec': {
'ingressSgRules': [
{'security_group_rule':
{'sgRule':
{'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'direction': 'ingress',
'ethertype': 'IPv4',
@@ -208,25 +210,14 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
'security_group_id': self._sg_id2,
'id': mock.sentinel.id
}}],
'podSelector': {},
'podSelector': {}},
'status': {
'securityGroupId': self._sg_id2,
'securityGroupName': mock.sentinel.sg_name}}
self._crds = {
"apiVersion": "v1",
"items": [self._crd],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": mock.sentinel.selfLink}}
self._crds = [self._crd]
self._multiple_crds = {
"apiVersion": "v1",
"items": [self._crd, self._crd2],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": mock.sentinel.selfLink}}
self._multiple_crds = [self._crd, self._crd2]
self._pod = {
'apiVersion': 'v1',
@@ -304,7 +295,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
self._crd_sg_id = mock.sentinel.crd_sg_id
self._sg_rule_body = {
'security_group_rule': {
'sgRule': {
'direction': 'ingress',
'protocol': 'tcp',
'description': 'Kuryr-Kubernetes NetPolicy SG rule',
@@ -323,7 +314,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
"spec": {
"egressSgRules": [],
"ingressSgRules": [{
"security_group_rule": {
"sgRule": {
"description": "Kuryr-Kubernetes NetPolicy SG rule",
"direction": "ingress",
"ethertype": "IPv4",
@@ -348,20 +339,12 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
"podSelector": {"matchLabels": {"app": "demo"}},
"securityGroupId": self._crd_sg_id}}
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule_body')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector', return_value=True)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
def test__create_sg_rules(self, m_get_pod_ip,
m_match_selector,
m_create_sg_rule_body,
m_create_sg_rule):
m_create_sg_rule_body.return_value = self._sg_rule_body
m_match_selector):
sgr_id = mock.sentinel.sgr_id
m_create_sg_rule.return_value = sgr_id
crd = get_crd_obj_with_all_selectors()
pod = get_match_crd_pod_obj()
m_get_pod_ip.return_value = pod['status'].get('podIP')
@@ -370,80 +353,58 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
policy = crd['spec']['networkpolicy_spec']
rule_list = policy.get('ingress', None)
crd_rules = crd['spec'].get('ingressSgRules')
pod_ns = pod['metadata']['namespace']
for rule_block in rule_list:
for rule in rule_block.get('from', []):
pod_selector = rule.get('podSelector')
matched = network_policy_security_groups._create_sg_rules(
crd, pod, pod_selector, rule_block,
crd_rules, 'ingress', matched, pod_ns)
crd, pod, pod_selector, rule_block, 'ingress', matched)
new_sg_rule['namespace'] = pod_ns
new_sg_rule['security_group_rule']['id'] = sgr_id
new_sg_rule['sgRule']['id'] = sgr_id
m_match_selector.assert_called_once_with(
pod_selector, pod['metadata']['labels'])
m_get_pod_ip.assert_called_once_with(pod)
m_create_sg_rule_body.assert_called_once()
m_create_sg_rule.assert_called_once()
self.assertEqual([new_sg_rule], crd_rules)
self.assertEqual(matched, True)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_pod_ip')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector', return_value=False)
def test__create_sg_rules_no_match(self, m_match_selector,
m_get_pod_ip):
def test__create_sg_rules_no_match(self, m_match_selector, m_get_pod_ip):
crd = get_crd_obj_with_all_selectors()
pod = self._pod2
policy = crd['spec']['networkpolicy_spec']
rule_list = policy.get('ingress', None)
crd_rules = crd['spec'].get('ingressSgRules')
for rule_block in rule_list:
for rule in rule_block.get('from', []):
pod_selector = rule.get('podSelector')
matched = network_policy_security_groups._create_sg_rules(
crd, pod, pod_selector, rule_block,
crd_rules, 'ingress', False, self._namespace)
crd, pod, pod_selector, rule_block, 'ingress', False)
self.assertEqual(matched, False)
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._bump_networkpolicy')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'patch_kuryrnetworkpolicy_crd')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'delete_security_group_rule')
'get_kuryrnetworkpolicy_crds')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
def test_delete_sg_rules(self, m_get_pod_ip, m_delete_sg_rule,
m_get_knp_crds, m_patch_kuryrnetworkpolicy_crd):
def test_delete_sg_rules(self, m_get_pod_ip, m_get_knp_crds, m_bump):
crd = self._crd_with_rule
i_rule = crd['spec'].get('ingressSgRules')[0]
sgr_id = i_rule['security_group_rule'].get('id')
m_get_pod_ip.return_value = self._pod_ip
m_get_knp_crds.return_value = {
"apiVersion": "v1",
"items": [crd],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": mock.sentinel.selfLink}}
i_rules = e_rules = []
m_get_knp_crds.return_value = [crd]
pod = self._pod_dev_namespace
self._driver.delete_sg_rules(pod)
m_get_knp_crds.assert_called_once()
m_get_pod_ip.assert_called_once_with(pod)
m_delete_sg_rule.assert_called_once_with(sgr_id)
m_patch_kuryrnetworkpolicy_crd.assert_called_with(
crd, i_rules, e_rules, crd['spec'].get('podSelector'))
m_bump.assert_called_once()
@mock.patch('kuryr_kubernetes.config.CONF')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
'get_kuryrnetworkpolicy_crds')
def test_get_sgs_for_pod_without_label(self, m_get_crds, m_cfg):
m_get_crds.return_value = self._crds
sg_list = [str(mock.sentinel.sg_id)]
@@ -460,7 +421,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_labels')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
'get_kuryrnetworkpolicy_crds')
def test_get_sgs_for_pod_with_label(self, m_get_crds, m_match_labels,
m_match_expressions):
m_get_crds.return_value = self._crds
@@ -474,7 +435,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
self._crd['spec']['podSelector']['matchExpressions'], pod_labels)
m_match_labels.assert_called_once_with(
self._crd['spec']['podSelector']['matchLabels'], pod_labels)
self.assertEqual(resp, [str(self._sg_id)])
self.assertEqual(resp, [self._sg_id])
@mock.patch('kuryr_kubernetes.config.CONF')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
@@ -482,7 +443,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_labels')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
'get_kuryrnetworkpolicy_crds')
def test_get_sgs_for_pod_with_label_no_match(self, m_get_crds,
m_match_labels,
m_match_expressions, m_cfg):
@@ -503,9 +464,9 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
self.assertEqual(sg_list, sgs)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
'get_kuryrnetworkpolicy_crds')
def test_get_sgs_no_crds(self, m_get_crds):
m_get_crds.return_value = {"items": []}
m_get_crds.return_value = []
cfg.CONF.set_override('pod_security_groups', [],
group='neutron_defaults')
@@ -519,7 +480,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_labels')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
'get_kuryrnetworkpolicy_crds')
def test_get_sgs_multiple_crds(self, m_get_crds, m_match_labels,
m_match_expressions):
m_match_expressions.return_value = True
@@ -529,87 +490,64 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
resp = self._driver.get_security_groups(self._pod, self._project_id)
m_get_crds.assert_called_once_with(namespace=self._namespace)
self.assertEqual([str(self._sg_id), str(self._sg_id2)], resp)
self.assertEqual([self._sg_id, self._sg_id2], resp)
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._bump_networkpolicy')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'patch_kuryrnetworkpolicy_crd')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'delete_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
def test_delete_namespace_sg_rule(self, m_get_knp_crd, m_delete_sg_rule,
m_patch_kuryrnetworkpolicy_crd):
'get_kuryrnetworkpolicy_crds')
def test_delete_namespace_sg_rule(self, m_get_knp_crd, m_bump):
cls = network_policy_security_groups.NetworkPolicySecurityGroupsDriver
m_driver = mock.MagicMock(spec=cls)
i_rule = get_matched_crd_obj()['spec']['ingressSgRules'][0]
sg_rule_id = i_rule.get('security_group_rule')['id']
m_get_knp_crd.return_value = {"items": [get_matched_crd_obj()]}
m_get_knp_crd.return_value = [get_matched_crd_obj()]
cls.delete_namespace_sg_rules(m_driver, get_match_crd_namespace_obj())
m_get_knp_crd.assert_called_once()
m_delete_sg_rule.assert_called_once_with(sg_rule_id)
m_patch_kuryrnetworkpolicy_crd.assert_called_once()
m_bump.assert_called_once()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'patch_kuryrnetworkpolicy_crd')
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._bump_networkpolicy')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'delete_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_kuryrnetpolicy_crds')
def test_delete_namespace_sg_rule_no_match(self, m_get_knp_crd,
m_delete_sg_rule,
m_patch_kuryrnetworkpolicy_crd):
'get_kuryrnetworkpolicy_crds')
def test_delete_namespace_sg_rule_no_match(
self, m_get_knp_crd, m_delete_sg_rule, m_bump):
cls = network_policy_security_groups.NetworkPolicySecurityGroupsDriver
m_driver = mock.MagicMock(spec=cls)
m_get_knp_crd.return_value = {"items": [get_matched_crd_obj()]}
m_get_knp_crd.return_value = [get_matched_crd_obj()]
cls.delete_namespace_sg_rules(m_driver,
get_no_match_crd_namespace_obj())
m_get_knp_crd.assert_called_once()
m_delete_sg_rule.assert_not_called()
m_patch_kuryrnetworkpolicy_crd.assert_not_called()
m_bump.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_pods')
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._create_sg_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
def test__parse_rules(self, m_get_ns_subnet_cidr, m_match_selector,
m_create_sg_rule, m_get_pods):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pods')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
def test__parse_rules(self, m_match_selector, m_get_pods):
crd = get_crd_obj_no_match()
policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0]
ns_selector = i_rule['from'][0].get('namespaceSelector')
ns = get_match_crd_namespace_obj()
m_get_ns_subnet_cidr.return_value = '10.0.2.0/26'
m_match_selector.return_value = True
m_create_sg_rule.return_value = get_sg_rule()
matched, rules = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns)
matched = network_policy_security_groups._parse_rules(
'ingress', crd, policy, namespace=ns)
m_get_ns_subnet_cidr.assert_called_once_with(ns)
m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels'])
m_create_sg_rule.assert_called_once()
self.assertEqual(matched, True)
self.assertEqual(rules, [get_sg_rule()])
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._create_sg_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
def test__parse_rules_no_match(self, m_match_selector,
m_create_sg_rule):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
def test__parse_rules_no_match(self, m_match_selector):
crd = get_crd_obj_no_match()
policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0]
@@ -618,26 +556,19 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
m_match_selector.return_value = False
matched, rules = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns)
matched = network_policy_security_groups._parse_rules(
'ingress', crd, policy, namespace=ns)
m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels'])
m_create_sg_rule.assert_not_called()
self.assertEqual(matched, False)
self.assertEqual(rules, [])
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._create_sg_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_pod_ip')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_pods')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
def test__parse_rules_all_selectors(self, m_match_selector, m_get_pods,
m_get_pod_ip, m_create_sg_rule):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pods')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
def test__parse_rules_all_selectors(self, m_match_selector, m_get_pod_ip,
m_get_pods):
crd = get_crd_obj_with_all_selectors()
policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0]
@@ -647,22 +578,19 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
pod = get_match_crd_pod_obj()
m_match_selector.return_value = True
m_get_pods.return_value = {"items": [pod]}
m_get_pod_ip.return_value = pod['status']['podIP']
m_create_sg_rule.return_value = get_sg_rule()
m_get_pods.return_value = {"items": [pod]}
matched, rules = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns)
matched = network_policy_security_groups._parse_rules(
'ingress', crd, policy, namespace=ns)
m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels'])
m_get_pods.assert_called_once_with(pod_selector,
ns['metadata']['name'])
m_get_pod_ip.assert_called_once_with(pod)
m_create_sg_rule.assert_called_once()
self.assertEqual(matched, True)
self.assertEqual(rules, [get_sg_rule()])
@mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._parse_selectors_on_pod')
@@ -670,124 +598,26 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
no_selector = None
matched_selector = True
pod = mock.sentinel.pod
final_crd_rules = [mock.sentinel.crd_rules]
m_parse_selectors_on_pod.side_effect = [
(matched_selector, final_crd_rules)]*2
m_parse_selectors_on_pod.side_effect = [matched_selector]*2
initial_crd_rules = []
direction = "ingress"
pod_selector = mock.sentinel.pod_selector
namespace_selector = mock.sentinel.namespace_selector
rule_block = {'from': [{'podSelector': pod_selector},
{'namespaceSelector': namespace_selector}]}
crd = {"spec": {
"ingressSgRules": initial_crd_rules,
"networkpolicy_spec": {
"ingress": [rule_block],
"policyTypes": [
"Ingress"
]}, }}
policy = {
"ingress": [rule_block],
"policyTypes": ["Ingress"]
}
crd = {"spec": {"ingressSgRules": []}}
matched, rules = network_policy_security_groups._parse_rules(
direction, crd, pod=pod)
matched = network_policy_security_groups._parse_rules(
direction, crd, policy, pod=pod)
calls = [mock.call(crd, pod, pod_selector, no_selector, rule_block,
initial_crd_rules, direction, not matched_selector),
direction, not matched_selector),
mock.call(crd, pod, no_selector, namespace_selector,
rule_block, final_crd_rules, direction,
matched_selector)]
rule_block, direction, matched_selector)]
m_parse_selectors_on_pod.assert_has_calls(calls)
self.assertEqual(matched, matched_selector)
self.assertEqual(rules, final_crd_rules)
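The `_parse_rules` tests above all stub out `kuryr_kubernetes.controller.drivers.utils.match_selector`. As a rough illustration of the behaviour those stubs stand in for, here is a simplified sketch of label-selector matching. It is an assumption for illustration only: it handles the `matchLabels` form exercised in these tests and ignores `matchExpressions`, which the real helper would also need to support.

```python
def match_selector(selector, labels):
    """Sketch: does a Kubernetes label selector match a labels dict?

    Only the ``matchLabels`` form is handled here; this is not the
    actual kuryr utils implementation.
    """
    labels = labels or {}
    match_labels = (selector or {}).get('matchLabels', {})
    # Every key/value pair in the selector must be present in the labels.
    return all(labels.get(key) == value
               for key, value in match_labels.items())
```

With a namespace labelled `{'project': 'myproject'}`, a `namespaceSelector` of `{'matchLabels': {'project': 'myproject'}}` matches, while any differing value does not.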
class TestNetworkPolicySecurityGroupsFunctions(test_base.TestCase):
def setUp(self):
super().setUp()
self.kubernetes = self.useFixture(k_fix.MockK8sClient()).client
self.npsg = network_policy_security_groups
self.sg_id = mock.sentinel.sg_id
self.crd = {
'spec': {
'ingressSgRules': [],
'networkpolicy_spec': {
'ingress': [],
'policyTypes': ['Ingress']
}
},
'metadata': {'namespace': 'ns'}
}
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule_body')
def test__apply_sg_rules_on_matched_pods_empty_match(self, m_create_sgrb,
m_create_sgr):
self.npsg._apply_sg_rules_on_matched_pods({}, self.sg_id, 'ingress',
'ns', 'port', 'crd_rules')
m_create_sgrb.assert_not_called()
m_create_sgr.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule_body')
def test__apply_sg_rules_on_matched_pods_not_all(self, m_create_sgrb,
m_create_sgr, m_get_ns,
m_get_ns_sub_cidr):
pod = mock.sentinel.pod
ns = mock.sentinel.ns
port = {'protocol': 'TCP', 'port': 22}
matched_pods = {'container_port': [pod]}
m_get_ns.return_value = ns
m_create_sgrb.return_value = {'security_group_rule': {}}
crd_rules = []
direction = 'ingress'
self.npsg._apply_sg_rules_on_matched_pods(matched_pods, self.sg_id,
direction, 'ns', port,
crd_rules)
m_get_ns_sub_cidr.assert_called_once_with(ns)
m_create_sgrb.assert_called_once_with(self.sg_id, direction,
'container_port',
protocol=mock.ANY, cidr=mock.ANY,
pods=[pod])
m_create_sgr.assert_called_once()
self.assertEqual(len(crd_rules), 1)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
def test__apply_sg_rules_on_matched_pods_all(self, m_create_sgr, m_get_ns,
m_get_ns_sub_cidr):
pod = mock.sentinel.pod
ns = mock.sentinel.ns
port = {'protocol': 'TCP', 'port': 22}
matched_pods = {'container_port': [pod]}
m_get_ns.return_value = ns
crd_rules = []
direction = 'ingress'
self.npsg._apply_sg_rules_on_matched_pods(matched_pods, self.sg_id,
direction, 'ns', port,
crd_rules, allow_all=True)
self.assertEqual(m_create_sgr.call_count, 2)
self.assertEqual(len(crd_rules), 2)
self.assertListEqual([r['security_group_rule']['ethertype']
for r in crd_rules], ['IPv4', 'IPv6'])
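The `allow_all=True` case above asserts that two rules are produced, one per ethertype. A minimal sketch of that rule shape follows; `build_allow_all_rules` is a hypothetical helper written for illustration, not the driver's actual `_apply_sg_rules_on_matched_pods` logic.

```python
def build_allow_all_rules(sg_id, direction, port):
    # When traffic is allowed from anywhere, one security-group rule is
    # needed per ethertype, mirroring the IPv4/IPv6 pair the test asserts.
    return [{'security_group_rule': {
        'security_group_id': sg_id,
        'direction': direction,
        'protocol': port['protocol'].lower(),
        'port_range_min': port['port'],
        'port_range_max': port['port'],
        'ethertype': ethertype,
    }} for ethertype in ('IPv4', 'IPv6')]
```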


@@ -0,0 +1,112 @@
# Copyright 2020 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.controller.handlers import kuryrnetworkpolicy
from kuryr_kubernetes.tests import base as test_base
class TestPolicyHandler(test_base.TestCase):
@mock.patch.object(drivers.LBaaSDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch('kuryr_kubernetes.clients.get_kubernetes_client')
@mock.patch('kuryr_kubernetes.clients.get_network_client')
@mock.patch('kuryr_kubernetes.clients.get_loadbalancer_client')
def setUp(self, m_get_os_lb, m_get_os_net, m_get_k8s, m_get_np,
m_get_lbaas):
super(TestPolicyHandler, self).setUp()
self._project_id = mock.sentinel.project_id
self._policy_name = 'np-test'
self._policy_uid = mock.sentinel.policy_uid
self._policy_link = mock.sentinel.policy_link
self._policy = {
'apiVersion': 'networking.k8s.io/v1',
'kind': 'NetworkPolicy',
'metadata': {
'name': self._policy_name,
'resourceVersion': '2259309',
'generation': 1,
'creationTimestamp': '2018-09-18T14:09:51Z',
'namespace': 'default',
'annotations': {},
'selfLink': self._policy_link,
'uid': self._policy_uid
},
'spec': {
'egress': [{'ports': [{'port': 5978, 'protocol': 'TCP'}]}],
'ingress': [{'ports': [{'port': 6379, 'protocol': 'TCP'}]}],
'policyTypes': ['Ingress', 'Egress']
}
}
self.k8s = mock.Mock()
m_get_k8s.return_value = self.k8s
self.m_get_k8s = m_get_k8s
self.os_net = mock.Mock()
m_get_os_net.return_value = self.os_net
self.m_get_os_net = m_get_os_net
self.np_driver = mock.Mock()
m_get_np.return_value = self.np_driver
self.m_get_np = m_get_np
self.lbaas_driver = mock.Mock()
m_get_lbaas.return_value = self.lbaas_driver
self.m_get_lbaas = m_get_lbaas
self.k8s.get.return_value = {}
self.handler = kuryrnetworkpolicy.KuryrNetworkPolicyHandler()
def _get_knp_obj(self):
knp_obj = {
'apiVersion': 'openstack.org/v1',
'kind': 'KuryrNetworkPolicy',
'metadata': {
'name': 'np-test-network-policy',
'namespace': 'test-1',
},
'spec': {
'securityGroupId': 'c1ac16f5-e198-4628-9d84-253c6001be8e',
'securityGroupName': 'sg-test-network-policy'
}}
return knp_obj
def test_init(self):
self.m_get_k8s.assert_called_once()
self.m_get_np.assert_called_once()
self.assertEqual(self.np_driver, self.handler._drv_policy)
self.assertEqual(self.k8s, self.handler.k8s)
self.assertEqual(self.os_net, self.handler.os_net)
self.assertEqual(self.lbaas_driver, self.handler._drv_lbaas)
def test_convert(self):
self.k8s.get.return_value = {'items': [{
'metadata': {
'selfLink': mock.sentinel.old_self_link,
'namespace': 'ns',
}
}]}
self.np_driver.get_from_old_crd.return_value = mock.sentinel.new_crd
self.handler._convert_old_crds()
self.k8s.post.assert_called_once_with(mock.ANY, mock.sentinel.new_crd)
self.k8s.delete.assert_called_once_with(mock.sentinel.old_self_link)
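`test_convert` exercises the upgrade path from the old `KuryrNetPolicy` CRD: each legacy object is fetched, translated by the policy driver, re-posted as a `KuryrNetworkPolicy`, and the original deleted. A hedged sketch of that loop is below; the resource paths are illustrative placeholders, not the exact ones the handler uses.

```python
from unittest import mock  # used in the usage example below

OLD_CRDS = '/apis/openstack.org/v1/kuryrnetpolicies'      # illustrative path
NEW_CRDS = '/apis/openstack.org/v1/kuryrnetworkpolicies'  # illustrative path


def convert_old_crds(k8s, np_driver):
    # For every legacy object: translate, create the replacement,
    # then delete the original so only the new CRD remains.
    for old in k8s.get(OLD_CRDS).get('items', []):
        new = np_driver.get_from_old_crd(old)
        k8s.post(NEW_CRDS, new)
        k8s.delete(old['metadata']['selfLink'])
```

Exercised with fakes, exactly as the unit test does: a `get` returning one legacy item should trigger one `post` of the converted object and one `delete` of the old link.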


@@ -47,8 +47,8 @@ class TestPodLabelHandler(test_base.TestCase):
self._get_project = self._handler._drv_project.get_project
self._get_security_groups = self._handler._drv_sg.get_security_groups
self._set_vif_driver = self._handler._drv_vif_pool.set_vif_driver
self._get_pod_labels = self._handler._get_pod_labels
self._set_pod_labels = self._handler._set_pod_labels
self._get_pod_info = self._handler._get_pod_info
self._set_pod_info = self._handler._set_pod_info
self._has_vifs = self._handler._has_vifs
self._update_vif_sgs = self._handler._drv_vif_pool.update_vif_sgs
@@ -81,16 +81,16 @@ class TestPodLabelHandler(test_base.TestCase):
def test_on_present(self, m_get_services):
m_get_services.return_value = {"items": []}
self._has_vifs.return_value = True
self._get_pod_labels.return_value = {'test1': 'test'}
self._get_pod_info.return_value = ({'test1': 'test'}, '192.168.0.1')
p_label.PodLabelHandler.on_present(self._handler, self._pod)
self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_called_once_with(self._pod)
self._get_pod_info.assert_called_once_with(self._pod)
self._get_project.assert_called_once()
self._get_security_groups.assert_called_once()
self._update_vif_sgs.assert_called_once_with(self._pod, [self._sg_id])
self._set_pod_labels.assert_called_once_with(self._pod, None)
self._set_pod_info.assert_called_once_with(self._pod, (None, None))
def test_on_present_no_state(self):
self._has_vifs.return_value = False
@@ -99,27 +99,29 @@ class TestPodLabelHandler(test_base.TestCase):
self.assertIsNone(resp)
self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_not_called()
self._set_pod_labels.assert_not_called()
self._get_pod_info.assert_not_called()
self._set_pod_info.assert_not_called()
def test_on_present_no_labels(self):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
def test_on_present_no_labels(self, m_get_services):
self._has_vifs.return_value = True
self._get_pod_labels.return_value = None
self._get_pod_info.return_value = None, None
p_label.PodLabelHandler.on_present(self._handler, self._pod)
self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_called_once_with(self._pod)
self._set_pod_labels.assert_not_called()
self._get_pod_info.assert_called_once_with(self._pod)
self._set_pod_info.assert_not_called()
def test_on_present_no_changes(self):
self._has_vifs.return_value = True
pod_with_label = self._pod.copy()
pod_with_label['metadata']['labels'] = {'test1': 'test'}
self._get_pod_labels.return_value = {'test1': 'test'}
pod_with_label['status']['podIP'] = '192.168.0.1'
self._get_pod_info.return_value = ({'test1': 'test'}, '192.168.0.1')
p_label.PodLabelHandler.on_present(self._handler, pod_with_label)
self._has_vifs.assert_called_once_with(pod_with_label)
self._get_pod_labels.assert_called_once_with(pod_with_label)
self._set_pod_labels.assert_not_called()
self._get_pod_info.assert_called_once_with(pod_with_label)
self._set_pod_info.assert_not_called()
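The pod_label handler tests now track a `(labels, podIP)` pair via `_get_pod_info` rather than labels alone, so a podIP change can also trigger security-group recalculation. A simplified sketch of that accessor and the no-change check, covering only the data these tests assert on (assumed shape, not the handler's full implementation):

```python
def get_pod_info(pod):
    # Both labels and podIP matter: a change in either may require
    # recalculating security-group rules for network policies.
    return (pod['metadata'].get('labels'),
            pod['status'].get('podIP'))


def pod_info_changed(pod, annotated_info):
    # on_present can return early when nothing relevant changed.
    return get_pod_info(pod) != annotated_info
```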


@@ -21,14 +21,15 @@ from kuryr_kubernetes.tests import base as test_base
class TestPolicyHandler(test_base.TestCase):
def setUp(self):
@mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch('kuryr_kubernetes.clients.get_kubernetes_client')
def setUp(self, m_get_k8s, m_get_np):
super(TestPolicyHandler, self).setUp()
self._project_id = mock.sentinel.project_id
self._policy_name = 'np-test'
self._policy_uid = mock.sentinel.policy_uid
self._policy_link = mock.sentinel.policy_link
self._pod_sg = mock.sentinel.pod_sg
self._policy = {
'apiVersion': 'networking.k8s.io/v1',
@@ -50,198 +51,31 @@ class TestPolicyHandler(test_base.TestCase):
}
}
self._handler = mock.MagicMock(spec=policy.NetworkPolicyHandler)
self.k8s = mock.Mock()
m_get_k8s.return_value = self.k8s
self.m_get_k8s = m_get_k8s
self._handler._drv_project = mock.Mock(
spec=drivers.NetworkPolicyProjectDriver)
self._handler._drv_policy = mock.MagicMock(
spec=drivers.NetworkPolicyDriver)
self._handler._drv_pod_sg = mock.Mock(
spec=drivers.PodSecurityGroupsDriver)
self._handler._drv_svc_sg = mock.Mock(
spec=drivers.ServiceSecurityGroupsDriver)
self._handler._drv_vif_pool = mock.MagicMock(
spec=drivers.VIFPoolDriver)
self._handler._drv_lbaas = mock.Mock(
spec=drivers.LBaaSDriver)
self.np_driver = mock.Mock()
m_get_np.return_value = self.np_driver
self._m_get_np = m_get_np
self._get_project = self._handler._drv_project.get_project
self._get_project.return_value = self._project_id
self._get_security_groups = (
self._handler._drv_pod_sg.get_security_groups)
self._set_vifs_driver = self._handler._drv_vif_pool.set_vif_driver
self._set_vifs_driver.return_value = mock.Mock(
spec=drivers.PodVIFDriver)
self._update_vif_sgs = self._handler._drv_vif_pool.update_vif_sgs
self._update_vif_sgs.return_value = None
self._update_lbaas_sg = self._handler._drv_lbaas.update_lbaas_sg
self._update_lbaas_sg.return_value = None
self._remove_sg = self._handler._drv_vif_pool.remove_sg_from_pools
self._remove_sg.return_value = None
self.handler = policy.NetworkPolicyHandler()
def _get_knp_obj(self):
knp_obj = {
'apiVersion': 'openstack.org/v1',
'kind': 'KuryrNetPolicy',
'metadata': {
'name': 'np-test-network-policy',
'namespace': 'test-1'
},
'spec': {
'securityGroupId': 'c1ac16f5-e198-4628-9d84-253c6001be8e',
'securityGroupName': 'sg-test-network-policy'
}}
return knp_obj
def test_init(self):
self.m_get_k8s.assert_called_once()
self._m_get_np.assert_called_once()
@mock.patch.object(drivers.LBaaSDriver, 'get_instance')
@mock.patch.object(drivers.ServiceSecurityGroupsDriver, 'get_instance')
@mock.patch.object(drivers.PodSecurityGroupsDriver, 'get_instance')
@mock.patch.object(drivers.VIFPoolDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyProjectDriver, 'get_instance')
def test_init(self, m_get_project_driver, m_get_policy_driver,
m_get_vif_driver, m_get_pod_sg_driver, m_get_svc_sg_driver,
m_get_lbaas_driver):
handler = policy.NetworkPolicyHandler()
self.assertEqual(self.np_driver, self.handler._drv_policy)
self.assertEqual(self.k8s, self.handler.k8s)
m_get_project_driver.assert_called_once()
m_get_policy_driver.assert_called_once()
m_get_vif_driver.assert_called_once()
m_get_pod_sg_driver.assert_called_once()
m_get_svc_sg_driver.assert_called_once()
m_get_lbaas_driver.assert_called_once()
def test_on_finalize(self):
self.handler.on_finalize(self._policy)
self.np_driver.release_network_policy.assert_called_once_with(
self._policy)
self.assertEqual(m_get_project_driver.return_value,
handler._drv_project)
self.assertEqual(m_get_policy_driver.return_value, handler._drv_policy)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_present(self, m_host_network, m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = True
namespaced_pods = self._handler._drv_policy.namespaced_pods
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
m_get_services.return_value = {'items': []}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
namespaced_pods.assert_not_called()
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg1), mock.call(match_pod, sg2)]
self._update_vif_sgs.assert_has_calls(calls)
self._update_lbaas_sg.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_present_without_knps_on_namespace(self, m_host_network,
m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg2 = [mock.sentinel.sg2]
sg3 = [mock.sentinel.sg3]
self._get_security_groups.side_effect = [sg2, sg3]
m_get_services.return_value = {'items': []}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg2),
mock.call(match_pod, sg3)]
self._update_vif_sgs.assert_has_calls(calls)
self._update_lbaas_sg.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_present_with_services(self, m_host_network, m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
self._handler._is_egress_only_policy.return_value = False
self._handler._is_service_affected.return_value = True
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = True
namespaced_pods = self._handler._drv_policy.namespaced_pods
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
service = {'metadata': {'name': 'service-test'},
'spec': {'selector': mock.sentinel.selector}}
m_get_services.return_value = {'items': [service]}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
namespaced_pods.assert_not_called()
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg1), mock.call(match_pod, sg2)]
self._update_vif_sgs.assert_has_calls(calls)
self._handler._is_service_affected.assert_called_once_with(
service, [modified_pod, match_pod])
self._update_lbaas_sg.assert_called_once()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_deleted(self, m_host_network, m_get_services):
namespace_pod = mock.sentinel.namespace_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
get_knp_crd = self._handler._drv_policy.get_kuryrnetpolicy_crd
knp_obj = self._get_knp_obj()
get_knp_crd.return_value = knp_obj
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
m_get_services.return_value = {'items': []}
release_nw_policy = self._handler._drv_policy.release_network_policy
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = False
ns_pods = self._handler._drv_policy.namespaced_pods
ns_pods.return_value = [namespace_pod]
policy.NetworkPolicyHandler.on_deleted(self._handler, self._policy)
release_nw_policy.assert_called_once_with(knp_obj)
self._get_security_groups.assert_called_once_with(match_pod,
self._project_id)
self._update_vif_sgs.assert_called_once_with(match_pod, sg1)
self._update_lbaas_sg.assert_not_called()
self._remove_sg.assert_called_once()
def test_on_present(self):
self.handler.on_present(self._policy)
self.k8s.add_finalizer.assert_called_once_with(
self._policy, 'kuryr.openstack.org/networkpolicy-finalizer')
self.np_driver.ensure_network_policy.assert_called_once_with(
self._policy)
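The rewritten `NetworkPolicyHandler` tests reflect the new finalizer-based flow: `on_present` pins the NetworkPolicy with a finalizer before ensuring the policy, and deletion work moves to `on_finalize`. A minimal mock-friendly sketch of that contract, restricted to what these tests assert (not the full handler):

```python
from unittest import mock  # used in the usage example below

FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'


class PolicyHandlerSketch:
    def __init__(self, k8s, drv_policy):
        self.k8s = k8s
        self._drv_policy = drv_policy

    def on_present(self, policy):
        # Add the finalizer first so the NetworkPolicy cannot be removed
        # before KuryrNetworkPolicy processing completes.
        self.k8s.add_finalizer(policy, FINALIZER)
        self._drv_policy.ensure_network_policy(policy)

    def on_finalize(self, policy):
        # Cleanup runs only once Kubernetes signals deletion.
        self._drv_policy.release_network_policy(policy)
```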


@@ -104,7 +104,7 @@ kuryr_kubernetes.controller.handlers =
namespace = kuryr_kubernetes.controller.handlers.namespace:NamespaceHandler
policy = kuryr_kubernetes.controller.handlers.policy:NetworkPolicyHandler
pod_label = kuryr_kubernetes.controller.handlers.pod_label:PodLabelHandler
kuryrnetpolicy = kuryr_kubernetes.controller.handlers.kuryrnetpolicy:KuryrNetPolicyHandler
kuryrnetworkpolicy = kuryr_kubernetes.controller.handlers.kuryrnetworkpolicy:KuryrNetworkPolicyHandler
kuryrnetwork = kuryr_kubernetes.controller.handlers.kuryrnetwork:KuryrNetworkHandler
kuryrnetwork_population = kuryr_kubernetes.controller.handlers.kuryrnetwork_population:KuryrNetworkPopulationHandler
test_handler = kuryr_kubernetes.tests.unit.controller.handlers.test_fake_handler:TestHandler


@@ -38,6 +38,7 @@ sudo chown ${USER}:${USER} ${HOME}/.kube/config
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get endpoints -o yaml --all-namespaces >> ${K8S_LOG_DIR}/endpoints.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrnetpolicy -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrnetpolicy_crds.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrport -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrport_crds.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrnetworkpolicy -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrnetworkpolicy_crds.txt
# Kubernetes pods logs
mkdir -p ${K8S_LOG_DIR}/pod_logs
while read -r line