Gravity-based Teleport kubeconfig has cluster name rather than address

I have a Gravity 6.0.1 cluster (using OSS Gravity) that I logged in with using tsh. My login looks like this:

$ ./tsh --insecure login --proxy= --user admin

Login is successful, and I can SSH successfully. However, when I go and try to use kubectl, I get this:

$ kubectl get pods
Unable to connect to the server: dial tcp: lookup pricelessblackwell3635: no such host

pricelessblackwell3635 is the autogenerated name of the cluster. Indeed, if I go and look at the kubeconfig:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://pricelessblackwell3635:3026
  name: pricelessblackwell3635
contexts:
- context:
    cluster: pricelessblackwell3635
    user: pricelessblackwell3635
  name: pricelessblackwell3635
current-context: pricelessblackwell3635
kind: Config
preferences: {}
users:
- name: pricelessblackwell3635
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

So my question is: why does it generate a server URL of https://pricelessblackwell3635:3026 instead of using the IP (e.g. what I have in the proxy)? For example, the example tsh command in the Gravity UI properly has the IP (which is in this case). If I edit the kubeconfig to manually change that server URL to the IP instead, everything does actually work.
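For reference, the manual workaround is just replacing the cluster's `server` field in the kubeconfig; `<proxy-ip>` below is a placeholder for the actual proxy address (not from the original report):

```yaml
# ~/.kube/config -- relevant fragment only; <proxy-ip> is a placeholder
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<proxy-ip>:3026   # was https://pricelessblackwell3635:3026
  name: pricelessblackwell3635
```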

Is this some misconfiguration on my end? Something else? I’d love to just have this work.

In a Teleport cluster, this address comes from what’s set for public_addr under the kubernetes section of the Teleport proxy config.

kubernetes:
  enabled: yes
  public_addr: ['']
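As a sketch, the corresponding fragment of a standalone Teleport proxy config might look like the following; the `<proxy-ip>` placeholder is mine, not from the original report:

```yaml
# teleport.yaml -- proxy_service fragment (sketch; <proxy-ip> is a placeholder)
proxy_service:
  enabled: yes
  kubernetes:
    enabled: yes
    # This address ends up as the server URL in generated kubeconfigs;
    # if it is unset, the cluster name can be used instead.
    public_addr: ['<proxy-ip>:3026']
```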

Maybe @knisbet or @r0mant could advise on how this is set within Gravity?

@knisbet @r0mant any ideas? From the logs, I think it may be set in code somewhere, but I'm not sure. If there's a configuration option to override it, I'm happy to do that too.

Hello @itay!

There’s an AuthGateway Gravity resource that allows overriding certain parts of the embedded Teleport configuration. I think what you’re looking for is kubernetes_public_addr.
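A minimal sketch of such a resource, assuming the `authgateway` kind from the Gravity resource docs and using `<proxy-ip>` as a placeholder for the real address:

```yaml
# authgateway.yaml -- sketch; <proxy-ip> is a placeholder for your proxy address
kind: authgateway
version: v1
spec:
  kubernetes_public_addr:
    - "<proxy-ip>:3026"
```

This would then be applied with something like `gravity resource create authgateway.yaml`.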

Let me know if that works.


@r0mant I will give that a try.

That said, any idea why the default out-of-the-box configuration is wrong? It seems like using the cluster name as the public address will never work from other machines.

Hi @itay! Apologies about the delayed response.

Yeah, I think the reason it was originally implemented this way is that it’s quite common for users to give their clusters names equal to the actual domain names they’re exposed at (for example, with gravity install). However, I agree this may be too far-fetched an assumption, especially in bare-metal mode.

I have now made a fix to not include the cluster name in the list of principals by default, so initially it only contains the node’s advertise IP, which means tsh and kubectl should work for on-prem clusters out of the box (after tsh login). The fix is available in 6.0.3.

Hope this helps,

Excellent - I just noticed it this morning. I see the PR is pending for 6.1.x as well, so I’ll pick it up in that version!