
kubectl apply's get /namespaces/{} output is disregarded #1619

Open
mfranzil opened this issue Jul 4, 2024 · 8 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments


mfranzil commented Jul 4, 2024

What happened?

Every time kubectl apply is used, instead of issuing a single direct API call, kubectl automatically works out the correct request to send (e.g., a patch for an existing object, a create for a new one). To do so, kubectl usually runs the following sequence of calls (assume I am manipulating a ServiceAccount for simplicity):

  • GET to /openapi/v3/ and /openapi/v3/api/{etc..}
  • GET /api/v1/namespaces/{namespace}/serviceaccounts/{my_sa} - here the decision is made whether to call additional APIs; if so,
  • GET /api/v1/namespaces/{namespace}/
  • POST/PATCH/etc...

The problem is that the GET to the namespace object is completely irrelevant to the calls made afterwards: even if it returns a 403, or a 404 (an extreme corner case in which the namespace was deleted in the meantime), kubectl mindlessly proceeds with the following call. This leads me to think that this GET to the namespace serves no purpose and could be skipped.
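
For illustration, the two resource-level GETs in that sequence can be reproduced by hand with kubectl get --raw (a sketch only; it assumes the ServiceAccount example in the reproduction steps below, i.e. namespace default and an object named non-existent-sa):

# Client-side apply first looks up the target object: a 200 means "patch", a 404 means "create".
$ kubectl get --raw /api/v1/namespaces/default/serviceaccounts/non-existent-sa
# It then also GETs the enclosing namespace; this issue argues that the outcome of
# this call is never consulted before the subsequent POST/PATCH.
$ kubectl get --raw /api/v1/namespaces/default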

What did you expect to happen?

Either for the GET call not to be made at all, or for its result to play some role in the apply process.

How can we reproduce it (as minimally and precisely as possible)?

  • Create a ClusterRole that can get and create serviceaccounts, but has no access to namespaces (a sketch of such a ClusterRole and its binding follows the log output below).
  • Bind the ClusterRole to any kind of authenticated User/SA of your choice.
  • Try to create a ServiceAccount with kubectl apply, using a non-existent name and an existing namespace:
$ kubectl --v=6 apply -f - <<EOF                                                                                                                                                                                                         
apiVersion: v1
kind: ServiceAccount
metadata:
  name: non-existent-sa
  namespace: default
EOF
  • Inspect the API calls: you will find a GET https://[redacted]:6443/api/v1/namespaces/default 403 Forbidden. Yet the POST still goes forward.

  • Even without a ClusterRole, try creating a ServiceAccount in a non-existent namespace (here named your-sa):

$ kubectl --v=6 apply -f - <<EOF                                                                                                                                                                                                         
apiVersion: v1
kind: ServiceAccount
metadata:
  name: non-existent-sa
  namespace: your-sa
EOF
  • Here are the API calls; even after the 404 on the your-sa namespace, apply goes forward:
I0704 13:17:56.795130   49554 round_trippers.go:553] GET https://[redacted]:6443/openapi/v3?timeout=32s 200 OK in 20 milliseconds
I0704 13:17:56.800759   49554 round_trippers.go:553] GET https://[redacted]:6443/openapi/v3/api/v1?hash=[redacted]&timeout=32s 200 OK in 3 milliseconds
I0704 13:17:56.834894   49554 round_trippers.go:553] GET https://[redacted]:6443/api/v1/namespaces/your-sa/serviceaccounts/non-existent-sa 404 Not Found in 5 milliseconds
I0704 13:17:56.840224   49554 round_trippers.go:553] GET https://[redacted]:6443/api/v1/namespaces/your-sa 404 Not Found in 5 milliseconds
I0704 13:17:56.900061   49554 round_trippers.go:553] POST https://[redacted]:6443/api/v1/namespaces/your-sa/serviceaccounts?fieldManager=kubectl-client-side-apply&fieldValidation=Strict 404 Not Found in 59 milliseconds
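
For completeness, here is a minimal sketch of the RBAC objects described in the first two reproduction steps (all names are placeholders, not taken from the original report):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sa-only                 # placeholder name
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get", "create"]      # deliberately no access to namespaces
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sa-only-binding         # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sa-only
subjects:
- kind: ServiceAccount          # any authenticated User/SA works, per the second step
  name: restricted-sa           # placeholder subject
  namespace: default

Assuming your current context is allowed to impersonate service accounts, the behaviour described above should then be observable without switching kubeconfigs via kubectl --v=6 --as=system:serviceaccount:default:restricted-sa apply -f - : the namespace GET comes back 403, yet the POST is still attempted.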

Anything else we need to know?

No response

Kubernetes version

$ k version
Client Version: v1.28.6
Server Version: v1.28.7

Cloud provider

None (self-managed kubeadm instance)

OS version

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux kubeadm-master 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 12:26:49 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Install tools

None

Container runtime (CRI) and version (if applicable)

$ sudo ctr version
[...]
Server:
  Version:  1.7.18
  Revision: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
  UUID: bbc7ef22-56e4-4989-8119-897b2d6efe00

Related plugins (CNI, CSI, ...) and versions (if applicable)

Flannel is used, but is irrelevant.
@mfranzil mfranzil added the kind/bug Categorizes issue or PR as related to a bug. label Jul 4, 2024
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 4, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jul 4, 2024

mfranzil commented Jul 4, 2024

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 4, 2024

sftim commented Jul 5, 2024

/sig cli
/remove-sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Jul 5, 2024

sftim commented Jul 5, 2024

This might be a client-go thing but I think kubectl should triage it first.
/transfer kubectl

@k8s-ci-robot k8s-ci-robot transferred this issue from kubernetes/kubernetes Jul 5, 2024
@ardaguclu
Member

Did you test the same steps with a different resource (e.g. pods) other than serviceaccounts? There might be explicitly defined behavior specifically for serviceaccounts. If this happens with other resources as well, I think it requires investigation.


mfranzil commented Jul 11, 2024

Did you test the same steps with a different resource (e.g. pods) other than serviceaccounts? There might be explicitly defined behavior specifically for serviceaccounts. If this happens with other resources as well, I think it requires investigation.

Sure. With Secrets (forgive the redaction of the IP):

❯ cat <<EOF | kubectl --v=6 apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: my-secret-2
    namespace: non-existent-namespace
  type: Opaque
  data:
    username: $(echo -n "admin" | base64)
    password: $(echo -n "password" | base64)
EOF
I0711 09:23:23.865961    6862 loader.go:395] Config loaded from file:  /Users/matte/.kube/config
I0711 09:23:23.905814    6862 round_trippers.go:553] GET https://{redacted}/openapi/v3?timeout=32s 200 OK in 37 milliseconds
I0711 09:23:23.913093    6862 round_trippers.go:553] GET https://{redacted}/openapi/v3/api/v1?hash=69A5FB5D660CEEF2279ECAAB7A1E695E444D2BBD6BA644440A1810682FB8066FEB458AC107AEF5F1D7E036688E35339BB8F8F2C9FC97F8F345BDA83D1DAE147C&timeout=32s 200 OK in 3 milliseconds
I0711 09:23:23.942042    6862 round_trippers.go:553] GET https://{redacted}/api?timeout=32s 200 OK in 3 milliseconds
I0711 09:23:23.948789    6862 round_trippers.go:553] GET https://{redacted}/apis?timeout=32s 200 OK in 5 milliseconds
I0711 09:23:23.963417    6862 round_trippers.go:553] GET https://{redacted}/api/v1/namespaces/non-existent-namespace/secrets/my-secret-2 404 Not Found in 7 milliseconds
I0711 09:23:23.969000    6862 round_trippers.go:553] GET https://{redacted}/api/v1/namespaces/non-existent-namespace 404 Not Found in 5 milliseconds
I0711 09:23:24.032587    6862 round_trippers.go:553] POST https://{redacted}/api/v1/namespaces/non-existent-namespace/secrets?fieldManager=kubectl-client-side-apply&fieldValidation=Strict 404 Not Found in 63 milliseconds
I0711 09:23:24.032780    6862 helpers.go:246] server response object: [{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error when creating \"STDIN\": namespaces \"non-existent-namespace\" not found",
  "reason": "NotFound",
  "details": {
    "name": "non-existent-namespace",
    "kind": "namespaces"
  },
  "code": 404
}]
Error from server (NotFound): error when creating "STDIN": namespaces "non-existent-namespace" not found

Even with Pods:

❯ cat <<EOF | kubectl apply --v=6 -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod-2
  namespace: non-existent-namespace
spec:
  containers:
  - name: secret-container-2
    image: alpine
    command: ["sh", "-c", 'echo Username is \$USERNAME && echo Password is \$PASSWORD && sleep 3600']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret-2
EOF
I0711 09:26:32.252942   17981 loader.go:395] Config loaded from file:  /Users/matte/.kube/config
I0711 09:26:32.282183   17981 round_trippers.go:553] GET https://{redacted}/openapi/v3?timeout=32s 200 OK in 21 milliseconds
I0711 09:26:32.286909   17981 round_trippers.go:553] GET https://{redacted}/openapi/v3/api/v1?hash=69A5FB5D660CEEF2279ECAAB7A1E695E444D2BBD6BA644440A1810682FB8066FEB458AC107AEF5F1D7E036688E35339BB8F8F2C9FC97F8F345BDA83D1DAE147C&timeout=32s 200 OK in 3 milliseconds
I0711 09:26:32.317249   17981 round_trippers.go:553] GET https://{redacted}/api/v1/namespaces/non-existent-namespace/pods/secret-pod-2 404 Not Found in 5 milliseconds
I0711 09:26:32.322264   17981 round_trippers.go:553] GET https://{redacted}/api/v1/namespaces/non-existent-namespace 404 Not Found in 4 milliseconds
I0711 09:26:32.387901   17981 round_trippers.go:553] POST https://{redacted}/api/v1/namespaces/non-existent-namespace/pods?fieldManager=kubectl-client-side-apply&fieldValidation=Strict 404 Not Found in 65 milliseconds
I0711 09:26:32.389187   17981 helpers.go:246] server response object: [{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error when creating \"STDIN\": namespaces \"non-existent-namespace\" not found",
  "reason": "NotFound",
  "details": {
    "name": "non-existent-namespace",
    "kind": "namespaces"
  },
  "code": 404
}]
Error from server (NotFound): error when creating "STDIN": namespaces "non-existent-namespace" not found

@ardaguclu
Member

/assign

@ardaguclu
Member

@mfranzil thank you for spending time on this issue with me.

Could you please try the same steps using --server-side in apply? I think client-side apply needs to fetch resources to stay up to date, and server-side apply may not send the extra GET request at all.
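
For reference, the suggested test would amount to re-running one of the earlier examples with the --server-side flag (a sketch; the manifest is the Secret from the comment above, trimmed of its data fields), then comparing the --v=6 output to see whether the GET to /api/v1/namespaces/{namespace} still appears:

❯ cat <<EOF | kubectl --v=6 apply --server-side -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-2
  namespace: non-existent-namespace
type: Opaque
EOF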
