I dug deeper into the documentation and made a few changes. teleport.yaml now looks like this:
teleport:
  data_dir: /var/lib/teleport
  log:
    output: stderr
    severity: INFO
auth_service:
  enabled: "yes"
  cluster_name: "stage-teleport-cluster"
  listen_addr: 0.0.0.0:3025
  public_addr: stage.advasmart.in:3025
  tokens:
    - proxy,node,app:REDACTED
  authentication:
    # default authentication type. possible values are 'local' and 'github' for OSS
    # and 'oidc', 'saml' and 'false' for Enterprise.
    type: local
    # second_factor can be off, otp, or u2f
    second_factor: off
ssh_service:
  enabled: "yes"
  labels:
    env: staging
app_service:
  enabled: "yes"
  debug_app: true
proxy_service:
  enabled: "yes"
  listen_addr: 0.0.0.0:3023
  web_listen_addr: 0.0.0.0:3080
  tunnel_listen_addr: 0.0.0.0:3024
  public_addr: stage.advasmart.in:3080
  tunnel_public_addr: stage.advasmart.in:3024
  https_keypairs:
    - key_file: '/etc/letsencrypt/live/stage.advasmart.in/privkey.pem'
      cert_file: '/etc/letsencrypt/live/stage.advasmart.in/fullchain.pem'
Then I created a Teleport user on stage.advasmart.in:
root@e2e-39-168:~# tctl users ls
User    Allowed logins
-----   -----------------
suraj   suraj,root,ubuntu
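For reference, a user with that "Allowed logins" column would have been created with something along these lines (a sketch; the exact tctl syntax depends on the Teleport version in use):

```shell
# OSS Teleport (positional-argument form used by older releases):
# second argument is the comma-separated list of allowed OS logins.
tctl users add suraj suraj,root,ubuntu

# Newer releases use a flag instead:
#   tctl users add suraj --logins=suraj,root,ubuntu
```

Either form prints an invite URL that the user opens to set a password and OTP.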
Then I started Teleport:
root@e2e-39-168:~# sudo teleport start --roles=node,auth,proxy
Logs:
WARN [PROXY:1:C] Failed to set tombstone: database is closed cache/cache.go:655
WARN [PROXY:1] Re-init the watcher on error: grpc: the client connection is closing. services/proxywatcher.go:189
WARN [NODE:2:CA] Re-init the cache on error: watcher closed. cache/cache.go:627
WARN [PROXY:2] Re-init the watcher on error: watcher closed. services/proxywatcher.go:189
WARN [PROXY:2:C] Re-init the cache on error: watcher closed. cache/cache.go:627
WARN [PROXY:1] Re-init the watcher on error: grpc: the client connection is closing. services/proxywatcher.go:189
INFO [PROXY:SER] Shutting down gracefully. service/service.go:2624
WARN Failed to sync reverse tunnels: {"message":"cache is closed"}. reversetunnel/rc_manager.go:138
INFO [AUTH:1] Shutting down gracefully. service/service.go:1354
WARN [REVERSE:S] Re-init the cache on error: watcher closed. cache/cache.go:627
WARN [PROXY:2] Re-init the watcher on error: cache is closed. services/proxywatcher.go:189
WARN [NODE:2:CA] Re-init the cache on error: cache is closed. cache/cache.go:627
WARN [PROXY:2:C] Re-init the cache on error: cache is closed. cache/cache.go:627
INFO [PROXY:SER] Exited. service/service.go:2459
INFO [WEB] Closing session cache. web/sessions.go:357
WARN [REVERSE:S] Re-init the cache on error: {"message":"cache is closed"}. cache/cache.go:627
INFO [KEYGEN] Stopping key precomputation routine. native/native.go:144
INFO [WEB] Closing session cache. web/sessions.go:357
INFO [PROXY:SER] Exited. service/service.go:2645
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
ERRO [AUTH] Failed to perform cert rotation check: cache is closed. auth/auth.go:279
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
ERRO [AUTH] Failed to perform cert rotation check: cache is closed. auth/auth.go:279
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
ERRO [AUTH] Failed to perform cert rotation check: cache is closed. auth/auth.go:279
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
ERRO [AUTH] Failed to perform cert rotation check: cache is closed. auth/auth.go:279
INFO [PROC] Waiting for services: [auth.tls auth.shutdown] to finish. service/signals.go:43
INFO [AUTH:1] Exited. service/service.go:1361
WARN [AUTH:1] TLS server exited with error: http: Server closed. service/service.go:1240
INFO [PROC] The old service was successfully shut down gracefully. service/service.go:530
WARN [NODE:BEAT] Keep alive has failed: cache is closed. srv/heartbeat.go:461
WARN [NODE:BEAT] Heartbeat failed keep alive channel closed. srv/heartbeat.go:256
INFO [AUDIT] user.login code:T1000I ei:0 event:user.login method:local success:true time:2020-12-18T14:40:36.445Z uid:224c0c17-5de0-42e7-bd51-712120c6bb58 user:suraj events/emitter.go:318
I am still able to log in to the admin panel even after getting the above errors. So I tried connecting from my laptop:
suraj@suraj:~$ tsh ssh --proxy=stage.advasmart.in --user=suraj root@stage.advasmart.in
Enter password for Teleport user suraj:
Enter your OTP token:
180447
error: access denied to root connecting to stage.advasmart.in on cluster stage-teleport-cluster
I want to test whether I can connect to the node "stage.advasmart.in" from my laptop through the proxy "stage.advasmart.in" while the auth service is running on "stage.advasmart.in".
Any idea why I am denied access as root from my laptop when using the Teleport client "tsh"?
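In case it helps with debugging, this is roughly how I can inspect what the cluster thinks my certificate and role allow (a sketch; role names in the output will vary by setup):

```shell
# On the laptop: log in and check which OS logins the issued
# certificate actually carries ("Logins:" line in the output).
tsh login --proxy=stage.advasmart.in --user=suraj
tsh status

# On the auth server: inspect the user and the role bound to it;
# the allow.logins section of the role must include "root".
tctl get users/suraj
tctl get roles
```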
Regards,
Suraj