One of the best features of OpenShift and Kubernetes is the ability to quickly and easily create and destroy resources. This capability is ideal during phases of a Continuous Integration and Continuous Delivery (CI/CD) pipeline, which may need a temporary environment to perform integration tests. A typical process includes the creation of a namespace, the population of resources to support the testing process, and then, finally, the removal of the resources and namespace once the testing phase has completed. However, there are occasions where the removal of the temporary namespace fails and you are met with the following:

NAME                STATUS        AGE

finalizer-example   Terminating   15m

Waiting for the issue to resolve itself in many cases becomes a fruitless endeavor. A quick online search for a solution yields several similar results that walk through the following steps:

  1. Dumping the Namespace resource to a file.
  2. Removing the default “kubernetes” finalizer (along with any others that may be present).
  3. Executing a request against the /finalize endpoint for the namespace.

Re-running the query to assess whether the namespace is still in a terminating status should show that the namespace is no longer present. While it appears that the issue has been resolved, there are serious consequences to this approach: it not only presents a concern surrounding cluster integrity, but also introduces a potential security issue. This article will provide an overview of why namespaces appear to become stuck in a terminating state, the issues with forcefully finalizing a namespace, and approaches for resolving the underlying cause of a terminating namespace.

Controllers are one of the foundational components of Kubernetes. Their job is to constantly monitor (through a control loop) the defined API resources in order to bring the cluster to the desired state, and each controller has a designed purpose that manages the entire lifecycle of a particular component. An important concept to remember with any cloud native technology is that availability is not guaranteed. If a controller was designed to take action when a resource was deleted and the controller was unavailable at that point in time, the intended action would not occur and state would no longer be in sync. This type of pattern became more prevalent with the introduction of Custom Resource Definitions (CRDs), which give end users the ability to define their own resources in the Kubernetes API. For example, if a controller was designed to deploy a component when an API resource was created and to remove that component when the API resource was deleted, but the controller was unavailable or missed the deletion event, the deployed component would not have been removed and would remain in an orphaned state.

To mitigate this type of issue, the concept of finalizers was developed. A finalizer is a mechanism for implementing a pre-deletion hook to ensure that desired tasks are executed prior to the removal of a resource. It is implemented as an array of strings on a resource: as long as a value is present in the finalizers field, the resource will not be deleted. The controller managing the resource is responsible for removing the associated finalizer once it has completed its desired action, which allows the deletion of the resource to proceed. In some cases, the controller may not be able to complete its set of designed tasks and remove its entry from the array of finalizers (for example, if the controller itself was removed or is unavailable). This causes the resource to be put in a terminating state, awaiting a finalizer removal that will never occur.
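To make this concrete, a controller that has finished its cleanup work typically issues a patch that removes its own entry from the finalizers array, which then allows the deletion to complete. A minimal sketch of that step using the CLI (the resource type, resource name, and finalizer here are purely illustrative):

$ oc patch widgets.example.com my-widget --type=json -p '[{"op": "remove", "path": "/metadata/finalizers/0"}]'

In practice, controllers perform this patch programmatically against the API rather than through the CLI, but the effect is the same: once the finalizers array is empty, the resource can be removed.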

Simulating a Terminating Namespace

With an understanding of the intended function of finalizers, let’s simulate a scenario that causes a namespace to be placed in the terminating state and examine the potential repercussions that can occur as a result.

To start, have an OpenShift or Kubernetes environment available and be authenticated to the cluster.

Create a new namespace called finalizer-example:

$ oc new-project finalizer-example
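If you are working against a plain Kubernetes cluster without the oc client, the equivalent steps can be performed with kubectl (shown here as an assumed alternative):

$ kubectl create namespace finalizer-example
$ kubectl config set-context --current --namespace=finalizer-example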

Next, define a secret called test-secret in a file called test-secret.yaml that includes some amount of sensitive information:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  finalizers:
    - kubernetes.io/example-finalizer
stringData:
  sensitiveKey: sensitiveValue

Notice the inclusion of the finalizers field; its presence ensures that the Secret is not removed until the finalizers array is empty. The value in the finalizers field is typically associated with the controller responsible for managing the lifecycle of the resource. Since there is no controller monitoring this resource, the finalizer will never be removed and the Secret will never be deleted.

Create the secret by executing the following command:

$ oc create -f test-secret.yaml
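To verify that the finalizer is present on the newly created Secret, its metadata can be inspected with a jsonpath query (the output shown is what we would expect given the manifest above; formatting may vary slightly by client version):

$ oc get secret test-secret -o jsonpath='{.metadata.finalizers}'

["kubernetes.io/example-finalizer"]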

Now, let’s attempt to delete the namespace:

$ oc delete namespace finalizer-example --wait=false

namespace "finalizer-example" deleted

The --wait=false parameter in the prior command was included so that the command returns immediately instead of blocking until the resource has been fully removed. While the command does in fact return with a message indicating that the namespace was deleted, querying for the namespace will show that it is in a terminating state, awaiting the finalizer to be removed.

$ oc get namespace finalizer-example

NAME                STATUS        AGE

finalizer-example   Terminating   15m
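The finalizer that the namespace itself is waiting on lives under its spec and can be viewed directly. On a standard cluster, this is the built-in kubernetes finalizer (output formatting may vary by client version):

$ oc get namespace finalizer-example -o jsonpath='{.spec.finalizers}'

["kubernetes"]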

Removing a Terminating Namespace

As described previously, one commonly suggested approach for forcefully terminating a namespace is to dump the content of the namespace to a file, manually remove any finalizers that may be present, and then execute a request against the Kubernetes API. Let’s use this approach on the finalizer-example namespace, which is currently stuck terminating.

First, get the content of the namespace and send it to a file called finalizer-example-ns.json:

$ oc get namespace finalizer-example -o json > finalizer-example-ns.json

The contents of the file will appear similar to the following (only the relevant sections are displayed):

{
   "apiVersion": "v1",
   "kind": "Namespace",
   "metadata": {
       ...
       "name": "finalizer-example",
       ...
   },
   "spec": {
       "finalizers": [
           "kubernetes"
       ]
   },
   "status": {
       ...
   }
}

Remove the contents of the finalizers array, resulting in a file similar to the following:

{
   "apiVersion": "v1",
   "kind": "Namespace",
   "metadata": {
       ...
       "name": "finalizer-example",
       ...
   },
   "spec": {
       "finalizers": []
   },
   "status": {
       ...
   }
}
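If the jq utility happens to be available, the same edit can be scripted rather than performed by hand (purely a convenience; the filter below simply empties the spec.finalizers array):

$ oc get namespace finalizer-example -o json | jq '.spec.finalizers = []' > finalizer-example-ns.json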

Recent versions of Kubernetes and OpenShift have enhanced the status field of the Namespace resource to include additional details about the cause of the terminating namespace. Those on versions that support this functionality will notice the following conditions:

{
   "apiVersion": "v1",
   "kind": "Namespace",
   "metadata": {
       ...
       "name": "finalizer-example",
       ...
   },
   "spec": {
       ...
   },
   "status": {
       ...
       "conditions": [
           ...
           {
               "lastTransitionTime": "2020-08-01T19:51:59Z",
               "message": "Some resources are remaining: secrets. has 1 resource instances",
               "reason": "SomeResourcesRemain",
               "status": "True",
               "type": "NamespaceContentRemaining"
           },
           {
               "lastTransitionTime": "2020-08-01T19:51:59Z",
               "message": "Some content in the namespace has finalizers remaining: kubernetes.io/example-finalizer in 1 resource instances",
               "reason": "SomeFinalizersRemain",
               "status": "True",
               "type": "NamespaceFinalizersRemaining"
           }
       ],
       "phase": "Terminating"
   }
}

Note that these two conditions indicate the type of resource and the name of the finalizer that are blocking the deletion of the namespace.
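On clusters that surface these conditions, they can also be summarized without dumping the entire object by using a jsonpath range query (a convenience only; output will vary by cluster):

$ oc get namespace finalizer-example -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.message}{"\n"}{end}'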

Regardless of whether we are able to determine the resource responsible for holding up the deletion of the namespace, let's move forward and demonstrate the issues caused by forcibly terminating a namespace. Take the namespace content file modified previously and execute a request against the /finalize endpoint of the namespace:

$ curl -k -H "Content-Type: application/json" -H "Authorization: Bearer $(oc whoami -t)" -X PUT --data-binary @finalizer-example-ns.json $(oc whoami --show-server)/api/v1/namespaces/finalizer-example/finalize
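Alternatively, if you would rather not assemble the curl invocation by hand, recent oc and kubectl clients can typically issue the same PUT request through their raw API support (an assumed convenience, not part of the original procedure):

$ oc replace --raw "/api/v1/namespaces/finalizer-example/finalize" -f finalizer-example-ns.json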

Confirm the namespace is no longer present in the cluster:

$ oc get namespace finalizer-example

Error from server (NotFound): namespaces "finalizer-example" not found

Understanding the Dangers of Forcefully Terminating a Namespace

While it would appear that there are no ill effects from forcefully terminating the namespace, let’s illustrate the implications of this action. One of the traits of OpenShift is that it fully supports multi-tenancy, both from a Role-based access control (RBAC) standpoint as well as at the network layer. Imagine multiple teams are making use of this cluster and that Team A was responsible for executing the scenario described in the prior section. Team A has no desire to use the finalizer-example namespace, which was the impetus for deleting it in the first place. Now imagine Team B, a completely independent entity operating in this multitenant environment, also has a desire to create a namespace called finalizer-example and populate the newly created namespace with content.

Create the finalizer-example namespace once again, this time emulating the actions that would be taken by Team B:

$ oc new-project finalizer-example

Ironically, Team B also has a desire to create a secret called test-secret to store content needed by their application.

Create a file called test-secret-teamb.yaml with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
stringData:
  teamBDatabasePassword: DBPassword1234

Add the secret to the cluster:

$ oc create -f test-secret-teamb.yaml

Error from server (AlreadyExists): error when creating "test-secret-teamb.yaml": object is being deleted: secrets "test-secret" already exists

Whoa! The secret cannot be created because another secret called test-secret already exists. How can this be? This is a brand new namespace?

By now, it is increasingly clear that this is the same test-secret secret that Team A created in the prior incarnation of the finalizer-example namespace. Not only has the stability of the cluster been affected, but multi-tenancy has also been compromised, as Team B can inspect the content of the test-secret secret and retrieve potentially sensitive information pertaining to Team A.

$ oc get secrets test-secret -o yaml

apiVersion: v1
data:
  sensitiveKey: c2Vuc2l0aXZlVmFsdWU=
kind: Secret
metadata:
  ...
  finalizers:
  - kubernetes.io/example-finalizer
  ...
  name: test-secret
  namespace: finalizer-example
  ...
type: Opaque

As with any Kubernetes secret, the value of sensitiveKey is merely base64 encoded and can be decoded with ease. Alternatively, you can use the oc extract command to print out the value contained in the secret:

$ oc extract secret/test-secret --to=-

# sensitiveKey
sensitiveValue
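The raw value can also be decoded directly from the resource with a jsonpath query (this assumes a base64 utility is available locally; the decode flag may differ by platform):

$ oc get secret test-secret -o jsonpath='{.data.sensitiveKey}' | base64 -d

sensitiveValue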

Determining the Cause of a Namespace in a Terminating State

With an understanding of some of the consequences of forcibly terminating a namespace, let's review several approaches for determining the underlying cause of a namespace becoming stuck in a terminating state.

Currently, the finalizer-example namespace is fully operational and in use by Team B. Delete the namespace once again to place it in a Terminating state so that we can begin to address the root cause.

$ oc delete namespace finalizer-example --wait=false

namespace "finalizer-example" deleted

As we have seen, the most common reason is that a resource has a finalizer defined. However, determining the resource in question is not as simple as it once was in the early days of Kubernetes. With the proliferation of Custom Resource Definitions, the number of available resource types in a Kubernetes cluster has grown exponentially. While one might be tempted to believe that executing “oc get all” will retrieve all of the resources in a given namespace, the command unfortunately returns only a handful of the most commonly used resource types. Instead, alternate approaches must be used.
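Before exploring those approaches, the limitation itself is easy to see: even though the Secret is still present in the namespace, oc get all reports nothing of interest (exact output may vary by version):

$ oc get all -n finalizer-example

No resources found in finalizer-example namespace.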

Searching the API Server

The first approach is to query the API server for all of the registered resource types and locate any instances contained within the finalizer-example namespace. The oc api-resources command provides an easy way to not only view the available resource types, but also to limit the results to those that are namespaced.

Execute the following command to search the API for a resource in the finalizer-example namespace:

$ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n finalizer-example

NAME                 TYPE     DATA   AGE
secret/test-secret   Opaque   1      164m

Searching etcd

Etcd is a key/value datastore that holds all cluster data within a Kubernetes environment. An alternative to searching the API is to query the backing datastore directly. Like the majority of OpenShift’s infrastructure, etcd runs in pods within the openshift-etcd namespace. Since etcd is part of the core infrastructure of Kubernetes, elevated rights are required in order to access the associated resources. Execute the following command to start a terminal session within one of the available etcd pods:

$ oc rsh -n openshift-etcd $(oc get pods -n openshift-etcd -o=jsonpath='{.items[0].metadata.name}')

Once inside the pod, execute the following query, which searches for all keys associated with the finalizer-example namespace and reports any resources that remain:

$ for r in `etcdctl get / --prefix --keys-only | grep "^/.*/.*/finalizer-example/.*"`; do echo "Resource: '$(echo $r | cut -d"/" -f 3)' - Name: '$(echo $r | cut -d"/" -f 5)'"; done

Resource: 'secrets' - Name: 'test-secret'

On OpenShift 3.11, use the following set of commands:

Start a terminal session within the etcd pod:

$ oc rsh -n kube-system $(oc get pods -n kube-system -l=openshift.io/component=etcd -o=jsonpath='{.items[0].metadata.name}')

Locate any remaining resources in the namespace:

$ for r in `ETCDCTL_API=3 etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key --cacert /etc/etcd/ca.crt --endpoints=$(cat /etc/etcd/etcd.conf | grep ETCD_ADVERTISE_CLIENT_URLS | cut -d '/' -f3) get / --prefix --keys-only | grep "^/.*/.*/finalizer-example/.*"`; do echo "Resource: '$(echo $r | cut -d"/" -f 3)' - Name: '$(echo $r | cut -d"/" -f 5)'"; done

Exit out of the etcd pod once complete:

$ exit

Orphaned APIServices

In addition to the API resources that reside in the core Kubernetes API server, one of the many benefits of Kubernetes is the ability to define a separate API server that is automatically integrated into the core API server through an aggregation layer. An APIService resource provides the integration between these two components, and as resources are garbage collected during the termination of a namespace, the backing API server may be deleted prior to the unregistration of the APIService. A good indication of this type of failure is an error similar to the following when searching for API resources within a namespace using the oc api-resources command, as in the prior section:

Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request

As the backing API Server associated with the APIService is no longer present, Kubernetes is unable to accurately determine the full range of API resources in the cluster.
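For reference, an APIService delegates an entire group/version to a Service running somewhere in the cluster. A representative definition (the values here are illustrative, not taken from this walkthrough) looks similar to the following; if the namespace hosting the backing Service is deleted, the APIService remains registered but becomes unavailable:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  service:
    name: custom-metrics-apiserver
    namespace: custom-metrics
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100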

To pinpoint whether an orphaned APIService is the cause of the terminating namespace, list all of the registered APIServices in the cluster:

$ oc get APIServices

NAME                              SERVICE                   AVAILABLE   AGE
v1.                               Local                     True        128m
v1.admissionregistration.k8s.io   Local                     True        128m
v1.apiextensions.k8s.io           Local                     True        128m
v1.apps                           Local                     True        128m
v1.apps.openshift.io              openshift-apiserver/api   True        115m
v1.authentication.k8s.io          Local                     True        128m
v1.authorization.k8s.io           Local                     True        128m
v1.authorization.openshift.io     openshift-apiserver/api   True        115m
...
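To quickly narrow the list to entries that are not available, a simple text filter against the AVAILABLE column can be used (the exact results depend on the cluster):

$ oc get apiservices | grep False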

If any of the APIServices are reporting their availability status as False, they could be the cause of the namespace failing to properly terminate. Prior to removing an APIService, attempt to determine whether it may have been associated with resources deployed to that namespace. Finally, remove the offending APIService using the following command:

$ oc delete APIServices <name>

If the APIService was the root cause of the terminating namespace issue, the subsequent section may not be necessary, but it provides a thorough explanation of how to remove other types of resources that prevent the termination of a namespace.

Safely Removing Resources

With an understanding of the various methods of identifying resources that are preventing a namespace from being deleted properly, let’s attempt to remove the offending resources. In most cases (including the “test-secret” secret in the finalizer-example namespace), the root cause is the presence of a finalizer, which prevents garbage collection. Since we are confident that removing the finalizer from this secret will cause no additional harm to the cluster, execute the following command to clear it:

$ oc patch secret test-secret -n finalizer-example -p '{"metadata":{"finalizers":[]}}' --type=merge

secret/test-secret patched

With the secret patched, Kubernetes is able to garbage collect the secret, and once the secret has been deleted, the namespace will also be deleted. Confirm this by querying for the secret:

$ oc get secret test-secret

Error from server (NotFound): namespaces "finalizer-example" not found

As expected, both the secret and the namespace were successfully deleted.

By properly investigating the underlying issue resulting in the namespace remaining in a terminating state, we avoided any potential security concerns caused by orphaned resources and ensured the overall integrity of the cluster. Additional details on how to investigate terminating namespaces along with several troubleshooting steps can also be found in this Red Hat Knowledgebase article.


About the author

Andrew Block is a Distinguished Architect at Red Hat, specializing in cloud technologies, enterprise integration and automation.
