
Timeout on fetch cert #368

Open
palsivertsen opened this issue Mar 2, 2020 · 25 comments
Labels
backlog (Issues/PRs that will be included in the project roadmap), bug

Comments

@palsivertsen

On initial installation on AWS I get the following timeout error:

$ kubeseal --fetch-cert -v 10000
I0302 16:37:04.066027   36889 loader.go:359] Config loaded from file:  /home/pal/.kube/config
I0302 16:37:04.066646   36889 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: kubeseal/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://REDACTED.sk1.eu-west-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem'
I0302 16:37:34.951625   36889 round_trippers.go:438] GET https://REDACTED.sk1.eu-west-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem 503 Service Unavailable in 30884 milliseconds
I0302 16:37:34.951681   36889 round_trippers.go:444] Response Headers:
I0302 16:37:34.951696   36889 round_trippers.go:447]     Audit-Id: 8f9e456d-7cd3-42e6-8871-bdd2e99608fa
I0302 16:37:34.951703   36889 round_trippers.go:447]     Date: Mon, 02 Mar 2020 15:37:34 GMT
I0302 16:37:34.951775   36889 request.go:947] Response Body: Error: 'dial tcp 10.167.172.10:8080: i/o timeout'
Trying to reach: 'http://10.167.172.10:8080/v1/cert.pem'
I0302 16:37:34.951834   36889 request.go:1150] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: invalid character 'E' looking for beginning of value
error: cannot fetch certificate: the server is currently unable to handle the request (get services http:sealed-secrets-controller:)

I applied the controller at https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.8/controller.yaml and installed a precompiled cli from https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.8/kubeseal-linux-amd64

Some additional debugging:

$ kubectl --namespace kube-system describe svc sealed-secrets-controller
Name:              sealed-secrets-controller
Namespace:         kube-system
Labels:            name=sealed-secrets-controller
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"sealed-secrets-controller"},"name":"sealed-secrets-cont...
Selector:          name=sealed-secrets-controller
Type:              ClusterIP
IP:                172.20.209.255
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.167.172.10:8080
Session Affinity:  None
Events:            <none>

I am able to do a port forward like so:

kubectl --namespace kube-system port-forward svc/sealed-secrets-controller 8081:8080

And then curl the cert:

$ curl localhost:8081/v1/cert.pem
-----BEGIN CERTIFICATE-----
....
@mkmik
Collaborator

mkmik commented Mar 2, 2020

What version of EKS are you running?

@palsivertsen
Author

eks.7 with Kubernetes 1.14.

I think everything should be up to date.

@mkmik
Collaborator

mkmik commented Mar 3, 2020

Any special RBAC settings?

@palsivertsen
Author

Hmm, I haven't done any special setup other than what the AWS documentation recommends.

@mkmik
Collaborator

mkmik commented Mar 3, 2020

Issue #317 is similar. Can you please try the troubleshooting questions presented in that issue?

In the meantime I'll try to reproduce in eks.

@palsivertsen
Author

I'll see if I've missed something from that issue, but as far as I can see that was an issue with GKE in a private network.

How does the curl call work? Does it depend on a proxy? And how is it authenticated?

@mkmik
Collaborator

mkmik commented Mar 3, 2020

The main conversation in that thread is about GKE, but it might be a similar issue in that it all boils down to the apiserver proxy.

When you run kubectl proxy locally, your client authenticates to the cluster as usual and opens a local port (8001 by default). You can then curl any Kubernetes API endpoint, including the HTTP ports exposed by services.

That's the same mechanism that kubeseal uses to talk to the controller (but in-process).
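The path kubeseal asks the apiserver to proxy can be reconstructed from the debug log above. A small sketch that assembles the same URL path (namespace, service name, and named port match the defaults in this issue):

```shell
# Sketch: how the apiserver service-proxy path from the logs is assembled.
# "http" is the service's named port; the apiserver resolves it to the
# pod's 8080 and then dials the pod directly -- which is the hop that
# times out in this issue.
ns="kube-system"
svc="sealed-secrets-controller"
port="http"
path="/api/v1/namespaces/${ns}/services/${port}:${svc}:/proxy/v1/cert.pem"
echo "$path"
```

Appending this path to the apiserver URL (as seen in the `round_trippers` log lines) gives the exact request kubeseal issues.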

@palsivertsen
Author

Dumping some more debug output:

Controller is running:

$ kubectl --namespace kube-system get po,svc,ep,rs,deploy -lname=sealed-secrets-controller
NAME                                             READY   STATUS    RESTARTS   AGE
pod/sealed-secrets-controller-84fcdcd5fd-zflkp   1/1     Running   0          19h

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/sealed-secrets-controller   ClusterIP   172.20.209.255   <none>        8080/TCP   19h

NAME                                  ENDPOINTS            AGE
endpoints/sealed-secrets-controller   10.167.172.10:8080   19h

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.extensions/sealed-secrets-controller-84fcdcd5fd   1         1         1       19h

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/sealed-secrets-controller   1/1     1            1           19h

Component descriptions:

$ kubectl --namespace kube-system describe po,svc,ep,rs,deploy -lname=sealed-secrets-controller
Name:           sealed-secrets-controller-84fcdcd5fd-zflkp
Namespace:      kube-system
Priority:       0
Node:           ip-10-0-59-167.eu-west-1.compute.internal/10.0.59.167
Start Time:     Mon, 02 Mar 2020 14:49:26 +0100
Labels:         name=sealed-secrets-controller
                pod-template-hash=84fcdcd5fd
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Running
IP:             10.167.172.10
IPs:            <none>
Controlled By:  ReplicaSet/sealed-secrets-controller-84fcdcd5fd
Containers:
  sealed-secrets-controller:
    Container ID:  docker://39899e059437607179c004332f54ebb26990441d263f4838a2b6f71d8a671820
    Image:         quay.io/bitnami/sealed-secrets-controller:v0.9.8
    Image ID:      docker-pullable://quay.io/bitnami/sealed-secrets-controller@sha256:b62e3de0dc2c714b5794c21cecf33e227b4f6d09a5e0aaf04b7808dd1642cca4
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      controller
    State:          Running
      Started:      Mon, 02 Mar 2020 14:49:31 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from sealed-secrets-controller-token-f77g7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  sealed-secrets-controller-token-f77g7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sealed-secrets-controller-token-f77g7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:              sealed-secrets-controller
Namespace:         kube-system
Labels:            name=sealed-secrets-controller
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"sealed-secrets-controller"},"name":"sealed-secrets-cont...
Selector:          name=sealed-secrets-controller
Type:              ClusterIP
IP:                172.20.209.255
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.167.172.10:8080
Session Affinity:  None
Events:            <none>


Name:         sealed-secrets-controller
Namespace:    kube-system
Labels:       name=sealed-secrets-controller
Annotations:  <none>
Subsets:
  Addresses:          10.167.172.10
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP

Events:  <none>


Name:           sealed-secrets-controller-84fcdcd5fd
Namespace:      kube-system
Selector:       name=sealed-secrets-controller,pod-template-hash=84fcdcd5fd
Labels:         name=sealed-secrets-controller
                pod-template-hash=84fcdcd5fd
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/sealed-secrets-controller
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           name=sealed-secrets-controller
                    pod-template-hash=84fcdcd5fd
  Service Account:  sealed-secrets-controller
  Containers:
   sealed-secrets-controller:
    Image:      quay.io/bitnami/sealed-secrets-controller:v0.9.8
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      controller
    Liveness:     http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp (rw)
  Volumes:
   tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Events:         <none>


Name:                   sealed-secrets-controller
Namespace:              kube-system
CreationTimestamp:      Mon, 02 Mar 2020 14:49:26 +0100
Labels:                 name=sealed-secrets-controller
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"sealed-secrets-controller"},"name":"sealed-secr...
Selector:               name=sealed-secrets-controller
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        30
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           name=sealed-secrets-controller
  Service Account:  sealed-secrets-controller
  Containers:
   sealed-secrets-controller:
    Image:      quay.io/bitnami/sealed-secrets-controller:v0.9.8
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      controller
    Liveness:     http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp (rw)
  Volumes:
   tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   sealed-secrets-controller-84fcdcd5fd (1/1 replicas created)
Events:          <none>
$ kubectl proxy
....
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem
Error: 'dial tcp 10.167.172.10:8080: i/o timeout'
Trying to reach: 'http://10.167.172.10:8080/v1/cert.pem'

@mkmik
Collaborator

mkmik commented Mar 3, 2020

Can you check whether you have some Network Policies that would block the api server from talking with the port 8080 of the controller?

@palsivertsen
Author

None

$ kubectl get --all-namespaces networkpolicies
No resources found
$ kubectl get --all-namespaces ciliumnetworkpolicies.cilium.io 
No resources found

@mkmik
Collaborator

mkmik commented Mar 3, 2020

Yeah. I'd really like to move away from this model. I'd like to put the cert into a ConfigMap resource or a CRD, which we could then access directly via the API server.

@palsivertsen
Author

Config map sounds like a good approach.

I can store the cert locally for now. If this can be solved in a future release, we can close this issue.

@mkmik
Collaborator

mkmik commented Mar 3, 2020

There is another approach detailed in #282
Both can and should coexist since they strike different tradeoffs.

I'll close this issue when the PR that implements the fix lands.

@3h4x

3h4x commented Mar 5, 2020

In my case on GKE, communication was also blocked on port 8080. Allowing it from the master CIDR solved it. This could be added to the README as a requirement.

Pasting the output of kubeseal so it may help people find this issue faster:

kubeseal --fetch-cert -v 10
I0305 15:22:02.425729   34550 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: kubeseal/v0.0.0 (darwin/amd64) kubernetes/$Format" 'https://0.0.0.0/api/v1/namespaces/kube-system/services/http:sealed-secrets:/proxy/v1/cert.pem'

I0305 15:22:32.629305   34550 round_trippers.go:438] GET https://0.0.0.0/api/v1/namespaces/kube-system/services/http:sealed-secrets:/proxy/v1/cert.pem 503 Service Unavailable in 30203 milliseconds
I0305 15:22:32.629377   34550 round_trippers.go:444] Response Headers:
I0305 15:22:32.629394   34550 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0305 15:22:32.629407   34550 round_trippers.go:447]     Content-Length: 
I0305 15:22:32.629418   34550 round_trippers.go:447]     Date: 
I0305 15:22:32.629430   34550 round_trippers.go:447]     Audit-Id: 
I0305 15:22:32.629529   34550 request.go:947] Response Body: Error: 'dial tcp 192.168.0.13:8080: i/o timeout'
Trying to reach: 'http://192.168.0.13:8080/v1/cert.pem'
error: cannot fetch certificate: the server is currently unable to handle the request (get services http:sealed-secrets:)

@mkmik
Collaborator

mkmik commented Mar 5, 2020

The GKE case at least is documented in https://github.com/bitnami-labs/sealed-secrets/blob/master/docs/GKE.md#private-gke-clusters

This is clearly causing more pain than necessary. I don't think a general solution can assume the CLI tool can talk directly with the controller.

@SerhatTeker

SerhatTeker commented Jul 30, 2021

JFYI: I'm using the provider Linode and getting the same error:

$ kubeseal --fetch-cert -v 10 > kubeseal.pem

I0730 14:44:00.291228   23849 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/x-pem-file, */*" -H "User-Agent: kubeseal/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer blabla/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem'
I0730 14:44:30.974004   23849 round_trippers.go:443] GET https://blabla.net:443/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem 503 Service Unavailable in 30682 milliseconds
I0730 14:44:30.974048   23849 round_trippers.go:449] Response Headers:
I0730 14:44:30.974069   23849 round_trippers.go:452]     Content-Length: 191
I0730 14:44:30.974079   23849 round_trippers.go:452]     Date: Fri, 30 Jul 2021 11:44:30 GMT
I0730 14:44:30.974089   23849 round_trippers.go:452]     Cache-Control: no-cache, private
I0730 14:44:30.974098   23849 round_trippers.go:452]     Content-Type: application/json
I0730 14:44:30.974161   23849 request.go:968] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"error trying to reach service: dial tcp 10.2.193.7:8080: i/o timeout","reason":"ServiceUnavailable","code":503}
error: cannot fetch certificate: error trying to reach service: dial tcp 10.2.193.7:8080: i/o timeout

No network rules, no custom controller configuration, just a fresh cluster.

Workaround: using kubectl proxy and offline sealing.
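A sketch of that workaround, with placeholder file names; the commands are collected into a printed checklist here rather than executed, since they need a live cluster:

```shell
# Offline-sealing workaround sketch. Assumptions: controller runs in
# kube-system under the default service name, and secret.yaml /
# sealed-secrets-cert.pem are placeholder file names.
steps='kubectl --namespace kube-system port-forward svc/sealed-secrets-controller 8081:8080 &
curl -s http://localhost:8081/v1/cert.pem > sealed-secrets-cert.pem
kubeseal --cert sealed-secrets-cert.pem < secret.yaml > sealedsecret.yaml'
printf '%s\n' "$steps"
```

Once the cert file is fetched, kubeseal never needs to reach the controller through the apiserver proxy, so the timeout no longer matters.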

@sitilge

sitilge commented Aug 23, 2021

Same here with Linode LKE.

I spent quite some time understanding the issue. Could https://github.com/bitnami-labs/sealed-secrets/blob/main/docs/GKE.md be bumped to the main README.md?

@flurdy

flurdy commented Oct 23, 2021

Same timeout with DigitalOcean in October 2021: kubeseal v0.16.0, K8s 1.21.3, Helm chart 1.16.1, using Flux v2.

The kubectl proxy workaround worked (different namespace due to Flux).

@exocode

exocode commented Dec 22, 2021

Same with Civo

@exocode

exocode commented Dec 22, 2021

Could it be that something (in my case ArgoCD) is somehow clashing with port 8080?

@github-actions github-actions bot added the Stale label Jan 28, 2022
@juan131 juan131 added the backlog and bug labels and removed the Stale label Feb 3, 2022
@juan131 juan131 added this to Inbox in Sealed Secrets via automation Feb 3, 2022
@bitnami-labs bitnami-labs deleted a comment from github-actions bot Feb 3, 2022
@renxunsaky

For anyone who finds this issue: I had the same problem with EKS. The reason is that the cluster API server needs to call the controller, but the request was blocked by the security group of the node where the pod is deployed.

So you need to allow port 8080 in the inbound rules of the node's security group.

I hope that helps someone.

@cosad3s

cosad3s commented Jul 10, 2023

Thanks @renxunsaky!

For people not familiar with manual customisation of security groups:

  • In the AWS console, search for "Security Groups"
  • Find the security group dedicated to your nodes (for me, it was "clustername-node")
  • Check the box on the left, then "Actions" > "Edit inbound rules"
  • Click "Add rule". Type: "Custom TCP", port range: "8080", source: "Custom", then select the security group related to the cluster API. Add a description.
  • Save the rule.

Here we go!
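For those who prefer the CLI, a sketch of the equivalent aws ec2 authorize-security-group-ingress call; both security-group IDs are placeholders you must replace with your cluster's actual IDs:

```shell
# CLI equivalent of the console steps above (a sketch). The IDs below are
# placeholders (assumptions), not real groups -- look yours up first, e.g.
# in the EKS console or via `aws ec2 describe-security-groups`.
NODE_SG="sg-0nodeplaceholder"       # the node security group
CLUSTER_SG="sg-0clusterplaceholder" # the cluster/API-server security group
CMD="aws ec2 authorize-security-group-ingress --group-id $NODE_SG --protocol tcp --port 8080 --source-group $CLUSTER_SG"
echo "$CMD"   # review, then run once the IDs are filled in
```

Using --source-group (rather than a CIDR) scopes the rule to traffic originating from the API server's security group, matching the console steps above.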

@mcflis

mcflis commented Jul 21, 2023

For anyone having trouble obtaining the public key using kubeseal while having the sealed-secrets-controller deployed in the flux-system namespace, here's the relevant part from the official Flux documentation:

Arbitrary clients cannot connect to any service in the flux-system namespace, as a precaution to limit the potential for new features to create and expose attack surfaces within the cluster. A set of default network policies restricts communication in and out of the flux-system namespace according to three rules:

  1. allow-scraping permits clients from any namespace to reach port 8080 on any pods in the cluster, for the purpose of collecting metrics. (This can be further restricted when the metrics collectors are known to be deployed in a specific namespace.)

Source: https://fluxcd.io/flux/flux-e2e/#fluxs-default-configuration-for-networkpolicy

My solution:
Apart from using port forwarding or an ingress configuration, you can also leverage kubectl get secret:

kubectl get secret \
  --namespace flux-system \
  --selector sealedsecrets.bitnami.com/sealed-secrets-key=active \
  --output jsonpath='{.items[0].data.tls\.crt}' \
| base64 -d

@AdrianAntunez

I've been having the same issue in an on-prem Kubernetes cluster where the API servers are not on the normal CNI network but on the host network.

I worked around it by exposing the public certificate via an ingress (using the sealed-secrets Helm chart). Overridden parameters:

ingress.enabled: true
ingress.ingressClassName: internal
ingress.hostname: sealed-secrets.<mydomain>
ingress.pathType: Prefix
ingress.path: /v1/cert.pem

With this, I'm able to get the public certificate:

❯ curl https://sealed-secrets.<mydomain>/v1/cert.pem
-----BEGIN CERTIFICATE-----
MIIEzDCCArSgAwIBAgIQXY3wuRemrBzlQrAWVoJANDANBgkqhkiG9w0BAQsseDAA
[...]

In order to use kubeseal with this exposed certificate, you can simply use:

kubeseal --cert="https://sealed-secrets.<mydomain>/v1/cert.pem"

For example:

k create secret generic -n adriantunez-secrets php-secret --from-literal=SECRET='mysupersecret' --dry-run=client -o yaml | kubeseal --cert="https://sealed-secrets.<mydomain>/v1/cert.pem" -o yaml

It works like a charm :)

@douglaz

douglaz commented May 9, 2024

Thanks @renxunsaky!

If you are using Terraform, add this to the EKS module:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
(...)
  node_security_group_additional_rules = {
    # kubeseal requires this
    ingress_cluster_8080 = {
      description                   = "Cluster API to 8080"
      protocol                      = "tcp"
      from_port                     = 8080
      to_port                       = 8080
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }
}
